How to use minchyeom/ThinkerMistral with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="minchyeom/ThinkerMistral")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("minchyeom/ThinkerMistral")
model = AutoModelForCausalLM.from_pretrained("minchyeom/ThinkerMistral")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))

How to use minchyeom/ThinkerMistral with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "minchyeom/ThinkerMistral"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "minchyeom/ThinkerMistral",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
How to use minchyeom/ThinkerMistral with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "minchyeom/ThinkerMistral" \
  --host 0.0.0.0 \
  --port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "minchyeom/ThinkerMistral",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'

# Or run the SGLang server with Docker:
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "minchyeom/ThinkerMistral" \
    --host 0.0.0.0 \
    --port 30000
# Call the server using curl as shown above.

How to use minchyeom/ThinkerMistral with Docker Model Runner:
docker model run hf.co/minchyeom/ThinkerMistral
Trained on my Thinker dataset to replicate the thought traces of OpenAI's o1 language model. Very tiny model, very nice.
Please use the following as the system prompt (sent with the user role, since Mistral's chat template does not support a system role):
You are a world-class AI system capable of complex reasoning, reflection, and self-correction.
Provide an extensive, detailed list of reasoning steps in first-person narration, leading to a final conclusion.
Each step should represent a single unit of thought, such as observations, calculations, questions, doubts, realizations, corrections, reflections, discoveries, or decisions.
Use first person narration to describe your thinking process.
Break down your reasoning into the smallest possible units, including self-corrections.
Show a clear progression from initial approach to final conclusion, exploring multiple paths and demonstrating critical thinking and self-awareness.
Incorporate moments of discovery, explain your rationale for different approaches, and show your decision-making process when abandoning unproductive paths.
If needed, demonstrate starting over with a fresh perspective.
Ensure your final conclusion is reached within the reasoning process.
Always structure your response in strict JSON format with a reasoning_steps array containing each reasoning step's content, and a final_output field to communicate to the user, which must reflect your reasoning process.
Note that the user can only see the final_output, which is your sole means of communication with them.
Adhere to this JSON structure without exception, as it is crucial for proper processing of your output.
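The JSON contract above can be consumed programmatically: parse the generated text, keep the reasoning steps for logging or inspection, and show only the final output to the user. A minimal sketch (the raw_response string here is a hand-written illustration of the expected schema, not actual model output):

```python
import json

# Illustrative stand-in for the model's generated text, which the system
# prompt constrains to this JSON shape.
raw_response = json.dumps({
    "reasoning_steps": [
        "The user asked who I am.",
        "I should describe myself as a reasoning-focused assistant.",
    ],
    "final_output": "I am an AI assistant trained to reason step by step.",
})

parsed = json.loads(raw_response)
steps = parsed["reasoning_steps"]  # hidden chain of thought, for inspection only
answer = parsed["final_output"]    # the only part the user should see

for i, step in enumerate(steps, 1):
    print(f"step {i}: {step}")
print(answer)
```

In practice the generated text may contain malformed JSON, so a real caller should wrap `json.loads` in a try/except and fall back gracefully.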
No reinforcement learning has been used to train this model yet, but I'll find a way to do that soon.
from transformers import pipeline

system_prompt = """
You are a world-class AI system capable of...
""".strip()

# Mistral's chat template has no system role, so the system prompt is
# prepended to the user message instead.
messages = [
    {"role": "user", "content": system_prompt + "\n\nWho are you?"},
]
chatbot = pipeline("text-generation", model="minchyeom/ThinkerMistral")
chatbot(messages)
Base model
mistralai/Mistral-7B-v0.3