Instructions for using Abeersherif/Medical_Homework2 with libraries and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use Abeersherif/Medical_Homework2 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Abeersherif/Medical_Homework2")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Abeersherif/Medical_Homework2")
model = AutoModelForCausalLM.from_pretrained("Abeersherif/Medical_Homework2")
```
- Local Apps
- vLLM
How to use Abeersherif/Medical_Homework2 with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Abeersherif/Medical_Homework2"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Abeersherif/Medical_Homework2",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
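Since the vLLM server exposes an OpenAI-compatible API, it can also be queried from Python with the openai client. This is a minimal sketch assuming vLLM's default port 8000; the API key is a placeholder, since vLLM does not require one by default.

```python
from openai import OpenAI

# Point the client at the local vLLM server (default port 8000); the key is a placeholder
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="Abeersherif/Medical_Homework2",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```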
- SGLang
How to use Abeersherif/Medical_Homework2 with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Abeersherif/Medical_Homework2" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Abeersherif/Medical_Homework2",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "Abeersherif/Medical_Homework2" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Abeersherif/Medical_Homework2",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
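The same completion request can be sent from Python with the requests library. This sketch targets SGLang's default port 30000; point it at port 8000 to reach the vLLM server from the previous section instead.

```python
import requests

# JSON payload mirroring the curl example above
payload = {
    "model": "Abeersherif/Medical_Homework2",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5,
}
resp = requests.post("http://localhost:30000/v1/completions", json=payload)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])
```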
- Docker Model Runner
How to use Abeersherif/Medical_Homework2 with Docker Model Runner:
```shell
docker model run hf.co/Abeersherif/Medical_Homework2
```
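Docker Model Runner can also take a one-shot prompt as a trailing argument (the prompt below is illustrative; check that your Docker version supports this form):

```shell
docker model run hf.co/Abeersherif/Medical_Homework2 "Explain in one sentence what insulin does."
```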
Medical_Homework2: Fine-Tuned SmolLM2-1.7B for Medical Reasoning
Medical_Homework2 is a fine-tuned version of SmolAI/SmolLM2-1.7B, trained specifically on structured medical question-answer data and short reasoning tasks.
The model aims to provide concise, accurate, and educational medical explanations suitable for students and basic learning purposes.
Model Overview
This model is optimized for medical comprehension tasks such as:
- Short medical answers
- Step-by-step reasoning
- Explanations of conditions, symptoms, and basic physiology
- Educational or homework-style responses
It is not designed for professional medical diagnosis or treatment decisions.
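As a hypothetical illustration of these task types, homework-style prompts can be run through the Transformers pipeline from the quickstart above (the prompts here are made-up examples, not from the training set):

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="Abeersherif/Medical_Homework2")

# Made-up prompts matching the task types listed above
prompts = [
    "Give a short answer: which hormone lowers blood glucose?",
    "Explain step by step why dehydration can lower blood pressure.",
]
for p in prompts:
    out = pipe(p, max_new_tokens=100, do_sample=True, temperature=0.7)
    print(out[0]["generated_text"])
```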
Intended Use
Recommended Use Cases
- Medical homework and assignment assistance
- Explanation of medical concepts in simple language
- Introductory physiology and pathology topics
- Basic reasoning about medical questions
Not Recommended
- Real-world clinical decision-making
- Emergency or diagnostic use
- Any situation requiring professional medical judgement
Training Data
The model was fine-tuned using:
- Synthetic medical question-answer pairs
- Simplified educational medical explanations
- Instruction-answer examples
- Homework-style reasoning data
No real patient data or clinical records were used.
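The exact data format is not published; purely for illustration, a single training record along these lines might look like:

```python
# Hypothetical instruction-answer record (illustrative only; the real schema is not published)
example = {
    "instruction": "Explain in one sentence what a neuron does.",
    "response": "A neuron receives and transmits electrical and chemical signals to other cells.",
}
```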
Training Details
- Base model: SmolAI/SmolLM2-1.7B
- Fine-tuning objective: Causal language modeling
- Method: full-parameter or LoRA fine-tuning (the exact configuration is not specified)
- Optimizer: AdamW
- Typical epochs: 1-3
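The full training recipe is not published. Purely as an assumption-laden sketch, a LoRA run with the hyperparameters listed above could look like the following; the target modules, rank, batch size, and learning rate are illustrative guesses, and the tiny dataset is a placeholder for the real instruction-answer pairs.

```python
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_id = "SmolAI/SmolLM2-1.7B"  # base model id as stated in this card
tokenizer = AutoTokenizer.from_pretrained(base_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # causal LMs often lack a pad token
model = AutoModelForCausalLM.from_pretrained(base_id)

# LoRA adapters on the attention projections (hypothetical choice of modules and rank)
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)

# Tiny placeholder dataset standing in for the instruction-answer pairs
train_ds = Dataset.from_dict(
    {"text": ["Q: Which hormone lowers blood glucose? A: Insulin."]}
)
train_ds = train_ds.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

args = TrainingArguments(
    output_dir="medical_homework2",
    num_train_epochs=3,              # the card reports 1-3 epochs
    per_device_train_batch_size=4,
    learning_rate=2e-4,
    optim="adamw_torch",             # AdamW, per the card
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```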
Usage Example
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "Abeersherif/Medical_Homework2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Explain what type 2 diabetes is in simple terms."
inputs = tokenizer(prompt, return_tensors="pt")

# Enable sampling so that temperature and top_p take effect
outputs = model.generate(
    **inputs,
    max_new_tokens=150,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```