How to use softcatala/julibert with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="softcatala/julibert")

# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("softcatala/julibert")
model = AutoModelForMaskedLM.from_pretrained("softcatala/julibert")
```
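A quick sketch of calling the fill-mask pipeline, assuming `transformers` is installed and the model can be fetched from the Hugging Face Hub. RoBERTa-style models use `<mask>` as the mask token; the Catalan sentence below is an arbitrary example, not from the model card:

```python
from transformers import pipeline

# Download and build the fill-mask pipeline for julibert
pipe = pipeline("fill-mask", model="softcatala/julibert")

# Ask the model to fill in the masked Catalan word
results = pipe("El cel és de color <mask>.")

# Each candidate is a dict with the predicted token and its score
for r in results:
    print(r["token_str"], round(r["score"], 4))
```

The pipeline returns the top candidates ranked by probability; pass `top_k=N` to control how many are returned.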
Introduction
Download the model here:
- Catalan Roberta model: julibert-2020-11-10.zip
What's this?
Source code: https://github.com/Softcatala/julibert
- Corpus: OSCAR Catalan corpus (3.8 GB)
- Model type: RoBERTa
- Vocabulary size: 50,265
- Training steps: 500,000
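The listed architecture details can be checked against the published configuration without downloading the full weights. A minimal sketch, assuming `transformers` is installed and the Hub is reachable (only `config.json` is fetched):

```python
from transformers import AutoConfig

# Fetch just the model configuration from the Hub
config = AutoConfig.from_pretrained("softcatala/julibert")

# These fields should match the specs listed above
print(config.model_type)   # expected: roberta
print(config.vocab_size)   # expected: 50265
```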