Instructions to use alpha-ai/OopsHusBot-3B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use alpha-ai/OopsHusBot-3B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="alpha-ai/OopsHusBot-3B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load the model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("alpha-ai/OopsHusBot-3B")
model = AutoModelForCausalLM.from_pretrained("alpha-ai/OopsHusBot-3B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use alpha-ai/OopsHusBot-3B with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "alpha-ai/OopsHusBot-3B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "alpha-ai/OopsHusBot-3B",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Use Docker
```shell
docker model run hf.co/alpha-ai/OopsHusBot-3B
```
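The OpenAI-compatible endpoint can also be called from Python using only the standard library. A minimal sketch, assuming the vLLM server from the pip instructions above is listening on localhost:8000:

```python
import json
import urllib.request

# Build an OpenAI-compatible chat-completions request for the local server.
payload = {
    "model": "alpha-ai/OopsHusBot-3B",
    "messages": [{"role": "user", "content": "What is the capital of France?"}],
}
body = json.dumps(payload).encode("utf-8")
req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=body,
    headers={"Content-Type": "application/json"},
)

# Uncomment once the server is running:
# with urllib.request.urlopen(req) as resp:
#     reply = json.loads(resp.read())
#     print(reply["choices"][0]["message"]["content"])
```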
- SGLang
How to use alpha-ai/OopsHusBot-3B with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "alpha-ai/OopsHusBot-3B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "alpha-ai/OopsHusBot-3B",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "alpha-ai/OopsHusBot-3B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "alpha-ai/OopsHusBot-3B",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

- Unsloth Studio
How to use alpha-ai/OopsHusBot-3B with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```shell
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for alpha-ai/OopsHusBot-3B to start chatting
```
Install Unsloth Studio (Windows)
```shell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for alpha-ai/OopsHusBot-3B to start chatting
```
Using HuggingFace Spaces for Unsloth
```shell
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for alpha-ai/OopsHusBot-3B to start chatting
```
Load model with FastModel
```shell
pip install unsloth
```

```python
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="alpha-ai/OopsHusBot-3B",
    max_seq_length=2048,
)
```

- Docker Model Runner
How to use alpha-ai/OopsHusBot-3B with Docker Model Runner:
```shell
docker model run hf.co/alpha-ai/OopsHusBot-3B
```
Uploaded Model
- Developed by: Alpha AI
- License: apache-2.0
- Finetuned from model: meta-llama/Llama-3.2-3B-Instruct
This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
OopsHusBot-3B: The AI Model for Husbands Who Try (and Sometimes Fail) at Communication
Overview
Husbands mean well. Really. But communication can sometimes feel like an unsolvable puzzle. OopsHusBot-3B is here to help! Designed to assist husbands in navigating tricky conversations, avoiding misunderstandings, and delivering just the right amount of romance (without overdoing it), this model is your ultimate survival guide for relationship communication.
Built on meta-llama/Llama-3.2-3B-Instruct, this model is fine-tuned to prevent classic communication blunders—because sometimes, a simple “OK” isn’t the right answer.
Model Details
- Base Model: meta-llama/Llama-3.2-3B-Instruct
- Fine-tuned By: Alpha AI
- Training Framework: Unsloth
Quantization Levels Available
- q4_k_m
- q5_k_m
- q8_0
- 16-bit (full precision): https://huggingface.co/alphaaico/OopsHusBot-3B

Format: GGUF, optimized for local deployments: https://huggingface.co/alphaaico/OopsHusBot-3B-GGUF
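As a rough guide to which quantization fits your hardware, GGUF file size scales with the effective bits per weight. A back-of-envelope sketch (the ~3B parameter count and the bits-per-weight figures are approximations for illustration, not measured file sizes):

```python
def approx_gguf_size_gb(n_params: int, bits_per_weight: float) -> float:
    """Rough GGUF file size in GB: parameters times bits per weight, in bytes."""
    return n_params * bits_per_weight / 8 / 1e9

N_PARAMS = 3_000_000_000  # ~3B, approximate

# Approximate effective bits per weight for common llama.cpp quant types:
for name, bpw in [("q4_k_m", 4.85), ("q5_k_m", 5.69), ("q8_0", 8.5), ("16-bit", 16.0)]:
    print(f"{name}: ~{approx_gguf_size_gb(N_PARAMS, bpw):.1f} GB")
```

At 16-bit this works out to roughly 6 GB; the q4_k_m quant is closer to 2 GB, which is why the lower-bit files are the usual choice for consumer GPUs and laptops.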
Key Features
- Auto-Smooth Talk – Helps generate heartfelt, thoughtful responses without sounding robotic.
- Oops Recovery Mode – Immediate damage control when you say something unintentionally dumb.
- Danger Phrase Decoder – Correctly interprets high-risk phrases like “Do whatever you want” (Hint: She doesn’t mean that).
- Anniversary & Birthday Reminder – Generates sweet, meaningful texts to keep you in the clear.
- Pre-Apology Generator – Because sometimes, you don’t know what you did wrong—but you know you need to fix it.
- Selective Hearing Fixer – Crafts responses to make it seem like you were totally paying attention.
Training & Data
OopsHusBot-3B has been trained on a carefully curated dataset of:
- Romantic yet slightly clueless husband responses
- Apology best practices (ranked by effectiveness)
- Deciphering "I'm fine" and other cryptic messages
- Emergency sweet talk for when things go south
- When to text "I love you" without being asked
- Avoiding the classic "Are you mad?" trap
Important Warnings
❌ Not responsible for husbands who still say “Calm down.”
❌ Does not fix situations where you actually forgot her birthday.
❌ AI-generated compliments may be too good, causing suspicion.
❌ Disables “I told you so” responses for your safety.
Use Cases
- When she says “I have nothing to wear” – Generates supportive yet non-argumentative responses.
- Emergency Romance Mode – For those “You never say nice things to me” situations.
- Silent Treatment Prevention – Helps craft messages to de-escalate tension before it spirals.
- Reading Between the Lines – Ensures you don’t misinterpret “Do whatever you want.”
- Gift Idea Generator – Ensures you never make the mistake of buying a vacuum as a romantic gift again.
Model Performance
OopsHusBot-3B has been further optimized to deliver:
- Empathic and Context-Aware Responses – Improved understanding of user inputs with a focus on empathetic replies.
- High Efficiency on Consumer Hardware – Maintains quick inference speeds even with more advanced conversation modeling.
- Balanced Coherence and Creativity – Strikes an ideal balance for real-world dialogue applications, allowing for both coherent answers and creative flair.
Limitations & Biases
Like any AI system, this model may exhibit biases stemming from its training data. Users should employ it responsibly and consider additional fine-tuning if needed for sensitive or specialized applications.
License
Released under the Apache-2.0 license. For full details, please consult the license file in the Hugging Face repository.
Acknowledgments
Special thanks to the Unsloth team for their optimized training pipeline for LLaMA models. Additional appreciation goes to Hugging Face’s TRL library for enabling accelerated and efficient fine-tuning workflows.
NOTE - If you’re a husband who means well but sometimes just doesn’t get it—OopsHusBot-3B has your back. 🚀🔥