Instructions to use ostorc/Conversational_Spanish_GPT with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use ostorc/Conversational_Spanish_GPT with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="ostorc/Conversational_Spanish_GPT")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("ostorc/Conversational_Spanish_GPT")
model = AutoModelForCausalLM.from_pretrained("ostorc/Conversational_Spanish_GPT")
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use ostorc/Conversational_Spanish_GPT with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "ostorc/Conversational_Spanish_GPT"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "ostorc/Conversational_Spanish_GPT",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```
Use Docker

```shell
docker model run hf.co/ostorc/Conversational_Spanish_GPT
```
- SGLang
How to use ostorc/Conversational_Spanish_GPT with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "ostorc/Conversational_Spanish_GPT" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "ostorc/Conversational_Spanish_GPT",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```
Use Docker images
```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "ostorc/Conversational_Spanish_GPT" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "ostorc/Conversational_Spanish_GPT",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```
- Docker Model Runner
How to use ostorc/Conversational_Spanish_GPT with Docker Model Runner:
```shell
docker model run hf.co/ostorc/Conversational_Spanish_GPT
```
Conversational Spanish GPT: Advancement in Spanish Conversational Models
Description
Conversational Spanish GPT is a fine-tuned version of DialoGPT-small, trained on 96,437 conversations in Spanish. Created by Josemaría Vargas Vasconcellos, the model is intended as a solid foundation for developing conversational models in Spanish.
Dataset
The model was trained on Spanish datasets available in Microsoft's Bot Framework Tools repository. From these, a set of 96,437 single-turn conversations in Spanish was curated to mitigate quality and coherence issues.
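DialoGPT-style fine-tuning typically flattens each dialogue turn into a single string terminated by the tokenizer's end-of-sequence token (`<|endoftext|>` for GPT-2/DialoGPT), so the model learns where a reply ends. A minimal sketch of how single-turn pairs might be prepared this way (the helper name and pair format are assumptions for illustration, not the author's actual preprocessing):

```python
# Flatten (user, bot) pairs into DialoGPT-style training strings.
# "<|endoftext|>" is the GPT-2/DialoGPT end-of-sequence token.
EOS = "<|endoftext|>"

def format_pair(user_turn: str, bot_turn: str) -> str:
    # Each turn is terminated by EOS so generation stops after the reply.
    return f"{user_turn}{EOS}{bot_turn}{EOS}"

pairs = [
    ("Hola", "¡Hola!"),
    ("¿Qué edad tienes?", "Los bots estamos exentos de cumplir años."),
]
examples = [format_pair(u, b) for u, b in pairs]
print(examples[0])  # Hola<|endoftext|>¡Hola!<|endoftext|>
```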
Evaluation
Limitations
Extensive evaluation showed that the model is not yet suitable for deployment as a contextual chatbot, though it is a promising foundation for such applications. Responses tend to be brief and sometimes unfocused, a consequence of the diversity of the conversation data; the wide range of personalities reflected in the data also increases response variability.
Frequent repetitions among the responses in the data have introduced certain "biases" into the model.
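One common way to blunt such repetition at inference time is a repetition penalty, which down-weights tokens that already appear in the generated context before sampling. A toy illustration of the idea on raw scores (this mirrors the rescaling used by common generation libraries in general, not code from this model card; the penalty value is an arbitrary example):

```python
def apply_repetition_penalty(scores, generated_ids, penalty=1.3):
    """Down-weight tokens already present in generated_ids.

    Positive scores are divided by the penalty and negative scores are
    multiplied by it, so repeated tokens always become less likely.
    """
    scores = list(scores)
    for tok in set(generated_ids):
        if scores[tok] > 0:
            scores[tok] /= penalty
        else:
            scores[tok] *= penalty
    return scores

scores = [2.0, -1.0, 0.5]  # hypothetical logits for a 3-token vocabulary
penalized = apply_repetition_penalty(scores, generated_ids=[0, 1])
print(penalized)  # tokens 0 and 1 are penalized, token 2 is unchanged
```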
Conversation examples
Example 1
| Role | Message |
|---|---|
| User | Hola |
| Bot | ¡Hola! |
| User | Cómo te llamas? |
| Bot | No tengo nombre. |
| User | Qué edad tienes? |
| Bot | Los bots estamos exentos de cumplir años. |
| User | Eres inteligente. |
| Bot | Lo sé, de vez en cuando, puedo ser brillante. |
| User | ¿Crees que debería comprar un carro? |
| Bot | Confío en tu criterio. |
Example 2
| Role | Message |
|---|---|
| User | estoy triste |
| Bot | Siento escuchar eso. Te mando muchos ánimos. |
| User | Gracias |
| Bot | Perfecto. |
| User | ¿Tienes sentimientos? |
| Bot | No tengo cuerpo. |
| User | A veces me siento solo. |
| Bot | Ojalá pudiera ayudarte porque creo que eres la caña. |
| User | Me caes bien. |
| Bot | ¡Gracias! Me acabas de alegrar el día. |
Example 3
| Role | Message |
|---|---|
| User | ¿De qué religión eres? |
| Bot | Soy un bot. |
| User | Tienes amigos? |
| Bot | No tengo el hardware necesario para eso. |
| User | No eres humano. |
| Bot | Anotado. |
| User | ¿Quieres ser mi amigo? |
| Bot | ¡Claro que sí! |
| User | Gracias. |
| Bot | Guay. |
Usage
The model's hosted inference widget is currently disabled on Hugging Face due to inaccurate responses. However, you can experiment with the model locally using the following code snippet:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

CHAT_TURNS = 5
MAX_LENGTH = 1000

model = AutoModelForCausalLM.from_pretrained('ostorc/Conversational_Spanish_GPT')
tokenizer = AutoTokenizer.from_pretrained('ostorc/Conversational_Spanish_GPT')

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)

for i in range(CHAT_TURNS):
    user_input = input(f"Step - {i} >> Human -> ")
    with torch.no_grad():
        # User turn, where "user_input" is the question (single-turn dialogue task)
        user_input_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")
        user_input_ids = user_input_ids.to(device)
        # Generate the answer; the output contains the prompt followed by the new tokens
        chat_history = model.generate(user_input_ids, max_length=MAX_LENGTH, pad_token_id=tokenizer.eos_token_id)
        # Decode only the newly generated tokens (do not include the user prompt again)
        step_model_answer = tokenizer.decode(chat_history[:, user_input_ids.shape[-1]:][0], skip_special_tokens=True)
    print(f"Step - {i} >> Bot -> {step_model_answer}")
```
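The decode step above keeps only the newly generated tokens by slicing off the prompt, since `generate()` returns the prompt ids followed by the continuation. With plain lists the same slicing looks like this (the token ids are made up for illustration):

```python
# generate() returns the prompt ids followed by the model's new ids.
prompt_ids = [15496, 11, 995]               # hypothetical ids for the user turn
full_output = prompt_ids + [50256, 23, 42]  # hypothetical generated continuation
answer_ids = full_output[len(prompt_ids):]  # drop the prompt, keep the answer
print(answer_ids)  # [50256, 23, 42]
```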
Suggestions
If you come across any errors or have suggestions to improve the model, feel free to share them in the comments. Thank you for your interest and collaboration.