How to use GleghornLab/AMPLIFY_120M with Transformers:
```python
# Load model directly. AMPLIFY is not a built-in Transformers architecture, so this
# assumes the repository ships its modeling code and is loaded with trust_remote_code.
from transformers import AutoModel

model = AutoModel.from_pretrained("GleghornLab/AMPLIFY_120M", dtype="auto", trust_remote_code=True)
```
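The snippet above loads only the model; the paired tokenizer can be fetched the same way. A minimal sketch, assuming the repository's tokenizer files are loadable through `AutoTokenizer`:

```python
from transformers import AutoTokenizer

# Assumption: the repo ships ESM-style tokenizer files readable by AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("GleghornLab/AMPLIFY_120M", trust_remote_code=True)
print(tokenizer("MEAEGAVE", return_tensors="pt")["input_ids"].shape)
```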
A fuller example, including tokenization and a forward pass:

```python
import torch
from transformers import EsmTokenizer
from udev.models.amplify.modeling_amplify import AMPLIFY

device = torch.device("cuda")  # only CUDA is supported

token = "hf_..."  # placeholder: your Hugging Face access token
model = AMPLIFY.from_pretrained('GleghornLab/AMPLIFY_120M', token=token).to(device)
tokenizer = EsmTokenizer.from_pretrained('GleghornLab/AMPLIFY_120M', token=token)

sequences = ['SEQWENCE', 'MEAEGAVE']  # list of protein sequences (str)
tokens = tokenizer(sequences, return_tensors='pt', padding=True, pad_to_multiple_of=8)
tokens = {k: v.to(device) for k, v in tokens.items()}  # move batch to GPU

out = model(
    src=tokens['input_ids'],
    pad_mask=tokens['attention_mask'].float(),
    output_hidden_states=True,
    output_attentions=True,
)
```
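With `output_hidden_states=True`, the returned object carries per-layer hidden states that can be pooled into fixed-size sequence embeddings. A minimal sketch, assuming `out.hidden_states` is a tuple of `[batch, length, dim]` tensors (final layer last) and that `out` and `tokens` come from the example above:

```python
# Assumption: out.hidden_states[-1] is the final layer, shaped [batch, length, dim]
hidden = out.hidden_states[-1]
mask = tokens['attention_mask'].unsqueeze(-1).float()  # [batch, length, 1]; 1 = real token

summed = (hidden * mask).sum(dim=1)        # zero out padding, then sum over length
counts = mask.sum(dim=1).clamp(min=1.0)    # real-token count per sequence
embeddings = summed / counts               # mean-pooled embeddings: [batch, dim]
print(embeddings.shape)
```

Note that with the ESM tokenizer the attention mask also covers special tokens; exclude them first if you want residue-only embeddings.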
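The forward pass above also requested attentions. A short sketch of inspecting them, again hedged: it assumes `out.attentions` is a per-layer tuple of `[batch, heads, length, length]` tensors, as in standard Transformers outputs:

```python
# Assumption: out.attentions is a tuple with one [batch, heads, length, length] tensor per layer
attn = out.attentions[-1]      # attention weights from the final layer
print(len(out.attentions), attn.shape)

# Averaging over heads gives one residue-residue map per sequence,
# a common starting point for contact-style analyses
mean_map = attn.mean(dim=1)    # [batch, length, length]
```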