How to use scfengv/TVL_GeneralLayerClassifier with Adapters:

from adapters import AutoAdapterModel

# Load the base model, then load and activate the fine-tuned adapter
model = AutoAdapterModel.from_pretrained("google-bert/bert-base-chinese")
model.load_adapter("scfengv/TVL_GeneralLayerClassifier", set_active=True)

This model is fine-tuned from google-bert/bert-base-chinese.
The model consists of the following modules:
Embeddings
Encoder
Pooler
Classifier
The model was trained using the following hyperparameters:
Learning rate: 1e-05
Batch size: 32
Number of epochs: 10
Optimizer: Adam
Loss function: torch.nn.BCEWithLogitsLoss()
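As a sketch of how these hyperparameters fit together, the step below uses Adam at 1e-05 with BCEWithLogitsLoss on a batch of 32. The tiny linear model and random batch are placeholders for illustration only; they are not the actual BERT fine-tuning code.

```python
import torch
import torch.nn as nn

# Toy stand-in for the classifier; the real model is BERT-based.
num_labels = 4
model = nn.Linear(768, num_labels)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-05)  # stated learning rate
loss_fn = torch.nn.BCEWithLogitsLoss()                      # stated loss function

# One training step on a random batch of 32 (stated batch size).
features = torch.randn(32, 768)
targets = torch.randint(0, 2, (32, num_labels)).float()  # multi-label 0/1 targets

logits = model(features)
loss = loss_fn(logits, targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(loss.item())
```

In full training this step would repeat over the dataset for the stated 10 epochs.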
This model was trained on the scfengv/TVL-general-layer-dataset.
import torch
from transformers import BertForSequenceClassification, BertTokenizer

# Load the fine-tuned model and tokenizer
model = BertForSequenceClassification.from_pretrained("scfengv/TVL_GeneralLayerClassifier")
tokenizer = BertTokenizer.from_pretrained("scfengv/TVL_GeneralLayerClassifier")

# Prepare your text
text = "Your text here"  # Please refer to the dataset for example inputs
inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=512)

# Make prediction
with torch.no_grad():
    outputs = model(**inputs)
    predictions = torch.sigmoid(outputs.logits)

# Print predictions
print(predictions)
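The sigmoid outputs are per-label probabilities. For a multi-label classifier trained with BCEWithLogitsLoss, a common way to decode them is to threshold each probability; the 0.5 threshold and the probability values below are assumptions for illustration, not taken from the model card.

```python
# Example per-label sigmoid probabilities for one input
# (placeholder values, not real model output).
probs = [0.91, 0.12, 0.67, 0.03]

threshold = 0.5  # assumed decision threshold
predicted_labels = [i for i, p in enumerate(probs) if p >= threshold]
print(predicted_labels)  # indices of labels predicted present, here [0, 2]
```

The threshold can be tuned per label on a validation set if precision or recall matters more for some classes.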
This model is designed specifically for TVL general layer classification tasks.
Because it is fine-tuned from Chinese BERT, it is intended for Chinese text.
Hardware Type: NVIDIA Quadro RTX8000
Library: PyTorch
Hours used: 2hr 56mins
Base model
google-bert/bert-base-chinese