ProtT5-XL-BFD Model
A model pre-trained on protein sequences with a masked language modeling (MLM) objective, offering valuable features for protein analysis.
Quick Start
ProtT5-XL-BFD is a model pre-trained on protein sequences. It can be used for protein feature extraction or fine-tuned on downstream tasks. Here is a simple example of using it to extract features in PyTorch:
```python
from transformers import T5Tokenizer, T5Model
import re
import torch

tokenizer = T5Tokenizer.from_pretrained('Rostlab/prot_t5_xl_bfd', do_lower_case=False)
model = T5Model.from_pretrained("Rostlab/prot_t5_xl_bfd")

# Amino acids are space-separated; rare amino acids (U, Z, O, B) are mapped to X
sequences_Example = ["A E T C Z A O", "S K T Z P"]
sequences_Example = [re.sub(r"[UZOB]", "X", sequence) for sequence in sequences_Example]

ids = tokenizer.batch_encode_plus(sequences_Example, add_special_tokens=True, padding=True)
input_ids = torch.tensor(ids['input_ids'])
attention_mask = torch.tensor(ids['attention_mask'])

with torch.no_grad():
    embedding = model(input_ids=input_ids, attention_mask=attention_mask, decoder_input_ids=input_ids)

# For feature extraction, the encoder output is recommended over the decoder output
encoder_embedding = embedding.encoder_last_hidden_state.cpu().numpy()
decoder_embedding = embedding.last_hidden_state.cpu().numpy()
```
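The embeddings above still contain padding positions and the trailing special token. A minimal sketch (reusing the variables from the snippet above; not part of the original card) for slicing out per-residue encoder features:

```python
# Sketch: keep only the real residue positions for each protein,
# dropping padding and the trailing special (</s>) token.
per_protein_features = []
for i in range(len(sequences_Example)):
    seq_len = int(attention_mask[i].sum().item())                     # real tokens, incl. </s>
    per_protein_features.append(encoder_embedding[i, :seq_len - 1])   # shape: (residues, hidden_size)
```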
⨠Features
- Based on the
t5 - 3b
model and pretrained on a large protein sequence corpus in a self - supervised way.
- Uses a Bart - like MLM denosing objective, different from the original T5 - 3B model's span denosing objective.
- Can capture important biophysical properties governing protein shape through self - supervised learning.
Installation
No specific installation steps are provided in the original document; the quick-start example only requires the Hugging Face transformers library together with PyTorch and SentencePiece (e.g. `pip install transformers torch sentencepiece`).
Usage Examples
Basic Usage
The basic way to use this model to extract features of a given protein sequence is shown in the Quick Start section above.
Documentation
Model Description
ProtT5-XL-BFD is based on the t5-3b model and was pretrained on a large corpus of protein sequences in a self-supervised fashion. It uses a BART-like MLM denoising objective, different from the original T5-3B model's span-denoising objective. The features extracted from this model capture important biophysical properties governing protein shape.
Intended Uses & Limitations
The model can be used for protein feature extraction or fine-tuned on downstream tasks. For some tasks, fine-tuning the model can achieve higher accuracy than using it as a feature extractor. For feature extraction, it is better to use the features from the encoder rather than the decoder.
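Since the encoder features are the recommended ones, a lighter-weight option is to load only the encoder half of the checkpoint via transformers' `T5EncoderModel`. This is a sketch, not part of the original card, and the example sequence is hypothetical:

```python
from transformers import T5Tokenizer, T5EncoderModel
import re
import torch

tokenizer = T5Tokenizer.from_pretrained('Rostlab/prot_t5_xl_bfd', do_lower_case=False)
encoder = T5EncoderModel.from_pretrained('Rostlab/prot_t5_xl_bfd')  # decoder weights are not loaded
encoder.eval()

sequence = "M K T A Y I A K Q R"             # hypothetical example, space-separated residues
sequence = re.sub(r"[UZOB]", "X", sequence)  # map rare amino acids to X

inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    encoder_features = encoder(**inputs).last_hidden_state  # (1, seq_len, hidden_size)
```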
Training Data
The ProtT5-XL-BFD model was pretrained on BFD, a dataset consisting of 2.1 billion protein sequences.
Training Procedure
Preprocessing
The protein sequences are uppercased and tokenized using a single space and a vocabulary size of 21. The rare amino acids "U,Z,O,B" are mapped to "X". The inputs of the model are of the form:
Protein Sequence [EOS]
The preprocessing step is performed on the fly, cutting and padding the protein sequences up to 512 tokens. The masking details are: 15% of the amino acids are masked; in 90% of those cases the masked amino acids are replaced by the [MASK] token, and in the remaining 10% they are replaced by a random amino acid.
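As an illustration only (a sketch of the described preprocessing, not the original training code), preparing a raw sequence could look like this:

```python
import re

def preprocess(raw_sequence, max_length=512):
    """Uppercase, map rare amino acids (U, Z, O, B) to X, space-separate residues, cut to max_length."""
    seq = raw_sequence.upper()
    seq = re.sub(r"[UZOB]", "X", seq)
    seq = seq[:max_length]          # cutting; padding is left to the tokenizer / data pipeline
    return " ".join(seq)

print(preprocess("metkAYIAKQRZ"))   # -> "M E T K A Y I A K Q R X"
```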
Pretraining
The model was trained on a single TPU Pod V3-1024 for 1.2 million steps in total, using a sequence length of 512 and a batch size of 4k. It has approximately 3B parameters and uses an encoder-decoder architecture. The optimizer is AdaFactor with an inverse square root learning rate schedule for pre-training.
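For reference, a roughly equivalent optimizer setup in PyTorch might look like the sketch below. It uses the `Adafactor` implementation shipped with transformers; the actual pre-training ran on a TPU Pod rather than with this snippet, and the hyperparameters shown are illustrative assumptions:

```python
from transformers import T5ForConditionalGeneration
from transformers.optimization import Adafactor, AdafactorSchedule

model = T5ForConditionalGeneration.from_pretrained("Rostlab/prot_t5_xl_bfd")

# Adafactor with relative_step=True applies its built-in inverse-square-root learning rate decay
optimizer = Adafactor(
    model.parameters(),
    scale_parameter=True,
    relative_step=True,
    warmup_init=True,
    lr=None,
)
lr_scheduler = AdafactorSchedule(optimizer)  # exposes the internal schedule, e.g. for logging
```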
Evaluation Results
When used for feature extraction, the model achieves the following results:
| Task/Dataset | secondary structure (3-states) | secondary structure (8-states) | Localization | Membrane |
|:------------:|:------------------------------:|:------------------------------:|:------------:|:--------:|
| CASP12 | 77 | 66 | | |
| TS115 | 85 | 74 | | |
| CB513 | 84 | 71 | | |
| DeepLoc | | | 77 | 91 |
BibTeX entry and citation info
@article{Elnaggar2020.07.12.199554,
  author = {Elnaggar, Ahmed and Heinzinger, Michael and Dallago, Christian and Rehawi, Ghalia and Wang, Yu and Jones, Llion and Gibbs, Tom and Feher, Tamas and Angerer, Christoph and Steinegger, Martin and BHOWMIK, DEBSINDHU and Rost, Burkhard},
  title = {ProtTrans: Towards Cracking the Language of Life{\textquoteright}s Code Through Self-Supervised Deep Learning and High Performance Computing},
  elocation-id = {2020.07.12.199554},
  year = {2020},
  doi = {10.1101/2020.07.12.199554},
  publisher = {Cold Spring Harbor Laboratory},
  abstract = {Computational biology and bioinformatics provide vast data gold-mines from protein sequences, ideal for Language Models (LMs) taken from Natural Language Processing (NLP). These LMs reach for new prediction frontiers at low inference costs. Here, we trained two auto-regressive language models (Transformer-XL, XLNet) and two auto-encoder models (Bert, Albert) on data from UniRef and BFD containing up to 393 billion amino acids (words) from 2.1 billion protein sequences (22- and 112-times the entire English Wikipedia). The LMs were trained on the Summit supercomputer at Oak Ridge National Laboratory (ORNL), using 936 nodes (total 5616 GPUs) and one TPU Pod (V3-512 or V3-1024). We validated the advantage of up-scaling LMs to larger models supported by bigger data by predicting secondary structure (3-states: Q3=76-84, 8-states: Q8=65-73), sub-cellular localization for 10 cellular compartments (Q10=74) and whether a protein is membrane-bound or water-soluble (Q2=89). Dimensionality reduction revealed that the LM-embeddings from unlabeled data (only protein sequences) captured important biophysical properties governing protein shape. This implied learning some of the grammar of the language of life realized in protein sequences. The successful up-scaling of protein LMs through HPC to larger data sets slightly reduced the gap between models trained on evolutionary information and LMs. Availability: ProtTrans at https://github.com/agemagician/ProtTrans. Competing Interest Statement: The authors have declared no competing interest.},
  URL = {https://www.biorxiv.org/content/early/2020/07/21/2020.07.12.199554},
  eprint = {https://www.biorxiv.org/content/early/2020/07/21/2020.07.12.199554.full.pdf},
  journal = {bioRxiv}
}
Technical Details
Model Architecture
Based on the t5-3b model, using an encoder-decoder architecture with approximately 3B parameters.
Training Process
Trained on a single TPU Pod V3-1024 for 1.2 million steps, with a sequence length of 512 and a batch size of 4k. The optimizer is AdaFactor with an inverse square root learning rate schedule for pre-training.
License
No license information is provided in the original document.
Created by Ahmed Elnaggar/@Elnaggar_AI | LinkedIn