---
license: bsd-3-clause
base_model: Salesforce/codet5p-220m
tags:
- generated_from_trainer
model-index:
- name: SolCoderFuncs
  results: []
---
# SolCoderFuncs

This model is a fine-tuned version of [Salesforce/codet5p-220m](https://huggingface.co/Salesforce/codet5p-220m) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5574
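Assuming the reported loss is the Trainer's default mean token-level cross-entropy for a seq2seq objective (the card does not say), it corresponds to a perplexity of roughly exp(0.5574) ≈ 1.75:

```python
import math

# Assumption: 0.5574 is mean token-level cross-entropy on the eval set.
print(math.exp(0.5574))  # ~1.746
```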
## Model description

More information needed

## Intended uses & limitations

More information needed
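The checkpoint loads with the standard Transformers Auto classes; the loading lines below mirror the snippet on the model page. The generation part is a minimal sketch: the expected input format is not documented, so the prompt is a placeholder.

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Pipper/SolCoderFuncs")
model = AutoModelForSeq2SeqLM.from_pretrained("Pipper/SolCoderFuncs")

# Minimal generation sketch. Assumption: the model takes a natural-language
# or code prompt like its CodeT5+ base; this prompt is a placeholder.
inputs = tokenizer("// returns the sum of two numbers", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```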
## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of the equivalent training arguments follows the list):
- learning_rate: 0.0001
- train_batch_size: 37
- eval_batch_size: 37
- seed: 100
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 148
- total_eval_batch_size: 148
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
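A minimal sketch of how these settings might be expressed with `transformers.Seq2SeqTrainingArguments`; the actual training script is not part of the card, and `output_dir` is a placeholder. The per-device batch size of 37 across 4 GPUs yields the reported total of 148.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: reconstructs the card's hyperparameters; the real script is not published.
training_args = Seq2SeqTrainingArguments(
    output_dir="solcoderfuncs",      # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=37,  # x 4 GPUs = total batch size of 148
    per_device_eval_batch_size=37,
    seed=100,
    num_train_epochs=40,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",     # assumption: the results table logs one eval per epoch
)
```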
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 0.8793 | 1.0 | 3600 | 0.7881 |
| 0.7622 | 2.0 | 7200 | 0.7190 |
| 0.7077 | 3.0 | 10800 | 0.6769 |
| 0.659 | 4.0 | 14400 | 0.6518 |
| 0.6212 | 5.0 | 18000 | 0.6300 |
| 0.589 | 6.0 | 21600 | 0.6119 |
| 0.562 | 7.0 | 25200 | 0.6014 |
| 0.5361 | 8.0 | 28800 | 0.5905 |
| 0.5171 | 9.0 | 32400 | 0.5799 |
| 0.4973 | 10.0 | 36000 | 0.5747 |
| 0.4772 | 11.0 | 39600 | 0.5666 |
| 0.4619 | 12.0 | 43200 | 0.5610 |
| 0.4443 | 13.0 | 46800 | 0.5588 |
| 0.4335 | 14.0 | 50400 | 0.5571 |
| 0.4192 | 15.0 | 54000 | 0.5534 |
| 0.4062 | 16.0 | 57600 | 0.5512 |
| 0.3977 | 17.0 | 61200 | 0.5513 |
| 0.3864 | 18.0 | 64800 | 0.5515 |
| 0.3791 | 19.0 | 68400 | 0.5507 |
| 0.3718 | 20.0 | 72000 | 0.5510 |
| 0.4132 | 21.0 | 75600 | 0.5551 |
| 0.4079 | 22.0 | 79200 | 0.5499 |
| 0.3957 | 23.0 | 82800 | 0.5522 |
| 0.3895 | 24.0 | 86400 | 0.5482 |
| 0.3797 | 25.0 | 90000 | 0.5477 |
| 0.3686 | 26.0 | 93600 | 0.5486 |
| 0.3628 | 27.0 | 97200 | 0.5491 |
| 0.3518 | 28.0 | 100800 | 0.5502 |
| 0.3452 | 29.0 | 104400 | 0.5494 |
| 0.3379 | 30.0 | 108000 | 0.5546 |
| 0.3292 | 31.0 | 111600 | 0.5486 |
| 0.3232 | 32.0 | 115200 | 0.5522 |
| 0.3146 | 33.0 | 118800 | 0.5524 |
| 0.31 | 34.0 | 122400 | 0.5505 |
| 0.3057 | 35.0 | 126000 | 0.5538 |
| 0.301 | 36.0 | 129600 | 0.5549 |
| 0.2955 | 37.0 | 133200 | 0.5557 |
| 0.2901 | 38.0 | 136800 | 0.5554 |
| 0.2872 | 39.0 | 140400 | 0.5564 |
| 0.2844 | 40.0 | 144000 | 0.5574 |
### Framework versions

- Transformers 4.33.0
- Pytorch 2.1.0+cu121
- Datasets 2.11.0
- Tokenizers 0.13.3