Instructions for using Havmand/minillama with libraries, inference providers, notebooks, and local apps. The sections below cover each option.
- Libraries
- llama-cpp-python
How to use Havmand/minillama with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Havmand/minillama",
    filename="minillama.gguf",
)

output = llm(
    "Once upon a time,",
    max_tokens=512,
    echo=True,
)
print(output)
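The call returns an OpenAI-style completion dictionary. A minimal sketch of extracting just the generated text (with echo=True it includes the prompt):

# The completion follows the OpenAI schema, so the text sits under choices[0].
text = output["choices"][0]["text"]
print(text)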
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use Havmand/minillama with llama.cpp:
Install from Homebrew (macOS, Linux)
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Havmand/minillama

# Run inference directly in the terminal:
llama-cli -hf Havmand/minillama
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Havmand/minillama

# Run inference directly in the terminal:
llama-cli -hf Havmand/minillama
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf Havmand/minillama

# Run inference directly in the terminal:
./llama-cli -hf Havmand/minillama
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf Havmand/minillama

# Run inference directly in the terminal:
./build/bin/llama-cli -hf Havmand/minillama
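However you install it, llama-server exposes a local OpenAI-compatible HTTP API once running. A minimal sketch of querying it from Python, assuming the default port 8080 and the requests package (neither is specific to this model):

# Query the local llama-server through its OpenAI-compatible endpoint.
# Assumes the llama-server from above is running on the default port 8080.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "Once upon a time,"}],
        "max_tokens": 16,
    },
)
print(resp.json()["choices"][0]["message"]["content"])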
Use Docker
docker model run hf.co/Havmand/minillama
- LM Studio
- Jan
- Ollama
How to use Havmand/minillama with Ollama:
ollama run hf.co/Havmand/minillama
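Once pulled, the model can also be reached through Ollama's local REST API. A minimal sketch, assuming Ollama's default port 11434 and the requests package:

# Generate via Ollama's REST API; the model name matches the pull above.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "hf.co/Havmand/minillama",
        "prompt": "Once upon a time,",
        "stream": False,
    },
)
print(resp.json()["response"])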
- Unsloth Studio
How to use Havmand/minillama with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Havmand/minillama to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Havmand/minillama to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for Havmand/minillama to start chatting
- Docker Model Runner
How to use Havmand/minillama with Docker Model Runner:
docker model run hf.co/Havmand/minillama
- Lemonade
How to use Havmand/minillama with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull Havmand/minillama
Run and chat with the model
lemonade run user.minillama-{{QUANT_TAG}}
List all available models
lemonade list
minillama
- Model creator: Mads Havmand
Description
minillama is a minimal Large Language Model using the Llama architecture and distributed in the GGUF format.
The purpose of the model is to be small and technically qualify as a model that can be loaded with llama.cpp without causing an error. I originally created this model because I needed a small model for my unit tests of Python code that used llama-cpp-python.
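For illustration, a minimal pytest-style sketch of the kind of test this enables (the test name and assertion are illustrative, not taken from an actual test suite):

# Illustrative test: verifies that the model loads and that inference
# runs end to end, not that the output is meaningful.
from llama_cpp import Llama

def test_minillama_loads_and_generates():
    llm = Llama.from_pretrained(
        repo_id="Havmand/minillama",
        filename="minillama.gguf",
    )
    output = llm("test", max_tokens=4)
    assert len(output["choices"]) == 1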
The model can technically be used for inference, but the output produced is as close to useless as you can get. Throughput is nice, though: around 1,000 tokens per second on an Apple M2 Pro.
To reduce file size, the model is quantized using Q2_K.
The model contains 4.26 million parameters, and the file is 3.26 MiB.
As for the vocabulary, the model uses the llama vocabulary provided by llama.cpp (SHA512: 38a5acf305050422882044df0acc97e5ae992ed19b2838b3b58ebbbb1f61c59bfc12a6f686a724aed32227045806e4dd46aadf9822155d1169455fa56d38fbc2)
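A local copy of the vocabulary file can be checked against that digest with something like the following (the file path is an assumption):

# Compare a local vocabulary file against the SHA512 digest quoted above.
import hashlib

expected = (
    "38a5acf305050422882044df0acc97e5ae992ed19b2838b3b58ebbbb1f61c59b"
    "fc12a6f686a724aed32227045806e4dd46aadf9822155d1169455fa56d38fbc2"
)
with open("models/ggml-vocab-llama.gguf", "rb") as f:  # assumed path
    actual = hashlib.sha512(f.read()).hexdigest()
print(actual == expected)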
The training corpus consists of a space and a newline:
00000000 20 0a | .|
00000002
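The corpus can be recreated byte for byte with, for example:

# Recreate the two-byte training corpus: a space (0x20) and a newline (0x0a).
with open("training.txt", "wb") as f:
    f.write(b" \n")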
Finally, the model was built using llama.cpp's train-text-from-scratch (from commit 97c1549808d2742d37584a3c9df28154bdf34417). The command used was:
./train-text-from-scratch \
--vocab-model models/ggml-vocab-llama.gguf \
--ctx 1 --embd 64 --head 1 --layer 1 \
--checkpoint-in chk-minillama-LATEST.gguf \
--checkpoint-out chk-minillama-ITERATION.gguf \
--model-out ggml-minillama-f32-ITERATION.gguf \
--train-data "training.txt" \
-t 6 -b 16 --seed 1 --adam-iter 1 \
--no-checkpointing
Quantization happened using ./quantize ggml-minillama-f32-LATEST.gguf 10 (quantization type 10 corresponds to Q2_K in llama.cpp).
These files were quantized using hardware kindly provided by me.