Instructions to use spuun/kekbot-mini with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use spuun/kekbot-mini with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="spuun/kekbot-mini")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("spuun/kekbot-mini")
model = AutoModelForCausalLM.from_pretrained("spuun/kekbot-mini")
```
- Notebooks
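The pipeline loaded in the Transformers snippet above can generate text directly. A minimal sketch follows; the sampling parameters are illustrative, not tuned recommendations from the model author:

```python
from transformers import pipeline

# High-level pipeline; downloads the model weights on first use.
pipe = pipeline("text-generation", model="spuun/kekbot-mini")

# Sampling parameters below are illustrative, not recommended values.
out = pipe(
    "Once upon a time,",
    max_new_tokens=30,
    do_sample=True,
    temperature=0.8,
)
print(out[0]["generated_text"])
```

By default the pipeline returns the prompt plus the generated continuation in `generated_text`.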
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use spuun/kekbot-mini with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "spuun/kekbot-mini"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "spuun/kekbot-mini",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker

```shell
docker model run hf.co/spuun/kekbot-mini
```
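The same request the curl example sends can be built from Python with the standard library alone. This is a minimal sketch assuming the vLLM server above is running on localhost:8000; the `build_completion_request` helper is ours for illustration, not part of vLLM:

```python
import json
import urllib.request

def build_completion_request(prompt, model="spuun/kekbot-mini",
                             url="http://localhost:8000/v1/completions"):
    """Build the same POST request the curl example sends."""
    payload = {
        "model": model,
        "prompt": prompt,
        "max_tokens": 512,
        "temperature": 0.5,
    }
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_completion_request("Once upon a time,")
# With the server running, urllib.request.urlopen(req) returns the completion JSON.
print(req.full_url)
```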
- SGLang
How to use spuun/kekbot-mini with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "spuun/kekbot-mini" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "spuun/kekbot-mini",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "spuun/kekbot-mini" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "spuun/kekbot-mini",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use spuun/kekbot-mini with Docker Model Runner:
```shell
docker model run hf.co/spuun/kekbot-mini
```
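Both the vLLM and SGLang servers above expose the OpenAI-compatible completions schema, so their responses can be handled the same way. A minimal parsing sketch follows; the JSON here is an illustrative example of the response shape, not real kekbot-mini output:

```python
import json

# Illustrative response in the OpenAI-compatible /v1/completions shape;
# the field values are made up, not actual model output.
raw = """{
  "id": "cmpl-123",
  "object": "text_completion",
  "model": "spuun/kekbot-mini",
  "choices": [
    {"index": 0, "text": " there was a chatbot.", "finish_reason": "length"}
  ]
}"""

resp = json.loads(raw)
completion = resp["choices"][0]["text"]
print(completion)
```

The generated continuation lives in `choices[0].text`; `finish_reason` tells you whether generation stopped at `max_tokens` ("length") or at a stop condition ("stop").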
THIS MODEL IS INTENDED FOR RESEARCH PURPOSES ONLY
Kekbot Mini
Based on a distilgpt2 model, fine-tuned on a select subset (65k+ messages) of Art Union's general-chat channel history.
Limits and biases
As this model is trained on chat history, it may output discriminatory or even offensive material. The author maintains that ML models are merely statistical representations of the datasets used to train them, and that, given the nature of this dataset, it is practically impossible to be certain how "clean" the data it contains is.
The author can confirm, however, from heuristic testing, that the model did not produce output the author himself found offensive; hopefully that holds true for everyone in the audience.