Instructions to use DreamhubAI/Nova-e-mini with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use DreamhubAI/Nova-e-mini with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="DreamhubAI/Nova-e-mini")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("DreamhubAI/Nova-e-mini")
model = AutoModelForCausalLM.from_pretrained("DreamhubAI/Nova-e-mini")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use DreamhubAI/Nova-e-mini with vLLM:
Install from pip and serve the model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "DreamhubAI/Nova-e-mini"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "DreamhubAI/Nova-e-mini",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
Use Docker
```bash
docker model run hf.co/DreamhubAI/Nova-e-mini
```
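Since the server speaks the OpenAI-compatible API, you can also call it from Python instead of curl. A minimal sketch with the `openai` client (v1+); the same pattern works for the SGLang server below by swapping the port to 30000:

```python
# pip install openai
from openai import OpenAI

# Point the client at the local vLLM server; the API key is required
# by the client but ignored by the server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="DreamhubAI/Nova-e-mini",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```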
- SGLang
How to use DreamhubAI/Nova-e-mini with SGLang:
Install from pip and serve the model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "DreamhubAI/Nova-e-mini" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "DreamhubAI/Nova-e-mini",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
Use Docker images
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "DreamhubAI/Nova-e-mini" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "DreamhubAI/Nova-e-mini",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
- Unsloth Studio
How to use DreamhubAI/Nova-e-mini with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```bash
# Install Unsloth Studio:
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for DreamhubAI/Nova-e-mini to start chatting
```
Install Unsloth Studio (Windows)
```powershell
# Install Unsloth Studio:
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for DreamhubAI/Nova-e-mini to start chatting
```
Using Hugging Face Spaces for Unsloth
No setup required: open https://huggingface.co/spaces/unsloth/studio in your browser and search for DreamhubAI/Nova-e-mini to start chatting.
Load model with FastModel
```bash
pip install unsloth
```

```python
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="DreamhubAI/Nova-e-mini",
    max_seq_length=2048,
)
```
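`FastModel.from_pretrained` returns a Transformers-compatible model and tokenizer, so plain `generate` calls work afterwards. A minimal sketch continuing from the snippet above (the prompt is illustrative):

```python
# Continues from the FastModel snippet above.
inputs = tokenizer("Who are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```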
- Docker Model Runner
How to use DreamhubAI/Nova-e-mini with Docker Model Runner:
```bash
docker model run hf.co/DreamhubAI/Nova-e-mini
```
🤖 WALL•E — Lightweight Local AI Assistant (1B)
WALL•E is a fine-tuned, lightweight language model based on Gemma 3 1B, designed for local, privacy-preserving AI usage.
It focuses on practical tasks, fast responses, and real-world utility rather than model size.
🎯 Why WALL•E?
Most modern AI models are either:
- Too large to run locally, or
- Too generic for everyday tasks
WALL•E is built to fill that gap.
✅ Runs entirely locally
✅ No API keys or cloud services
✅ Designed for low-resource environments
✅ Open-source and transparent
✨ Key Capabilities
🌐 Multilingual Support
- English – primary interaction language
- فارسی (Persian) – natural and fluent responses
- Deutsch (German) – conversational support
🛠 Practical Task Focus
- 📝 Text summarization (articles, notes, reports)
- 💻 Coding help (Python, JavaScript, Bash, shell)
- 🖥 Linux command explanations & troubleshooting
- 📚 Short factual answers and guidance
The model is optimized to handle short and minimal prompts naturally (e.g. "Hi", "Explain ls -la"), avoiding over-generation.
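As a quick illustration of that short-prompt behavior, a minimal sketch with the Transformers pipeline (the `max_new_tokens` cap here is an illustrative choice, not a repo setting):

```python
from transformers import pipeline

# Load WALL•E as a text-generation pipeline
pipe = pipeline("text-generation", model="sinamsv0/WALL-E", device_map="auto")

# Short, minimal prompts should get concise answers rather than
# over-generated text.
for prompt in ["Hi", "Explain ls -la"]:
    out = pipe(prompt, max_new_tokens=80)
    print(out[0]["generated_text"])
```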
⚙️ Technical Overview
| Component | Details |
|---|---|
| Base Model | Google Gemma 3 1B |
| Fine-tuning | Supervised Fine-Tuning (SFT) |
| Framework | Unsloth |
| Context Length | 3200 tokens |
| Precision | BF16 |
| License | Apache 2.0 |
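Given the BF16 precision listed above, loading with a matching dtype keeps memory use low. A minimal sketch (the `torch_dtype` setting mirrors the table; it is not enforced by the repo):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sinamsv0/WALL-E"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# torch_dtype=torch.bfloat16 matches the BF16 precision in the table above
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```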
🚀 Quick Start (Transformers)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "sinamsv0/WALL-E"

# Load the tokenizer and model; device_map="auto" places the weights
# on a GPU when one is available, otherwise on CPU.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto"
)

# Wrap both in a text-generation pipeline for simple prompting.
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer
)

response = pipe(
    "Summarize this text: Artificial intelligence is...",
    max_new_tokens=120
)
print(response[0]["generated_text"])
```
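Because WALL•E is fine-tuned from Gemma 3, chat-style prompting through the tokenizer's chat template is usually the better fit for multi-turn use. A minimal sketch, assuming the bundled tokenizer keeps Gemma 3's chat template and reusing `model` and `tokenizer` from above:

```python
messages = [
    {"role": "user", "content": "Explain ls -la"},
]

# Build the prompt with the chat template (assumes the fine-tuned
# tokenizer retains Gemma 3's template).
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=120)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```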
🧪 Training Summary
- Method: Supervised Fine-Tuning (SFT)
- Data: Custom multilingual datasets with safety-focused filtering
- Hardware: Single consumer GPU
- Goal: Improve instruction-following, multilingual responses, and short-prompt behavior
🛡 Safety & Limitations
- ✅ Trained with safety-aware data
- ✅ Avoids harmful or unethical requests
- ⚠️ Limited reasoning depth due to 1B parameter size
- ⚠️ Not intended for complex multi-step reasoning or creative writing
🌍 Ideal Use Cases
- Local coding assistant
- Study and document summarization
- Privacy-focused users
- Lightweight edge deployments
- Research and experimentation with small LLMs
🤝 Community & Links
- GitHub: https://github.com/unknownmsv/WALL-E
- Hugging Face Model: https://huggingface.co/sinamsv0/WALL-E
- Hugging Face Space: https://huggingface.co/spaces/sinamsv0/WALL-E-DEMO
🔮 Roadmap (Planned)
- UI tools for local use
- Optional voice interface
- Extended language support
- Performance benchmarking on edge devices
Small model, focused design. WALL•E proves that useful AI doesn’t have to be huge.