Instructions to use pthinc/prettybird_bce_basic_brain_mini with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use pthinc/prettybird_bce_basic_brain_mini with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="pthinc/prettybird_bce_basic_brain_mini")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("pthinc/prettybird_bce_basic_brain_mini", dtype="auto")
```

- llama-cpp-python
How to use pthinc/prettybird_bce_basic_brain_mini with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="pthinc/prettybird_bce_basic_brain_mini",
    filename="prettybird_bce_basic_brain_mini_fp16.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use pthinc/prettybird_bce_basic_brain_mini with llama.cpp:
Install from brew
```shell
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf pthinc/prettybird_bce_basic_brain_mini:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf pthinc/prettybird_bce_basic_brain_mini:Q4_K_M
```
Install from WinGet (Windows)
```shell
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf pthinc/prettybird_bce_basic_brain_mini:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf pthinc/prettybird_bce_basic_brain_mini:Q4_K_M
```
Use pre-built binary
```shell
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf pthinc/prettybird_bce_basic_brain_mini:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf pthinc/prettybird_bce_basic_brain_mini:Q4_K_M
```
Build from source code
```shell
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf pthinc/prettybird_bce_basic_brain_mini:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf pthinc/prettybird_bce_basic_brain_mini:Q4_K_M
```
Use Docker
```shell
docker model run hf.co/pthinc/prettybird_bce_basic_brain_mini:Q4_K_M
```
- LM Studio
- Jan
- vLLM
How to use pthinc/prettybird_bce_basic_brain_mini with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "pthinc/prettybird_bce_basic_brain_mini"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "pthinc/prettybird_bce_basic_brain_mini",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Use Docker
```shell
docker model run hf.co/pthinc/prettybird_bce_basic_brain_mini:Q4_K_M
```
- SGLang
How to use pthinc/prettybird_bce_basic_brain_mini with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "pthinc/prettybird_bce_basic_brain_mini" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "pthinc/prettybird_bce_basic_brain_mini",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "pthinc/prettybird_bce_basic_brain_mini" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "pthinc/prettybird_bce_basic_brain_mini",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

- Ollama
How to use pthinc/prettybird_bce_basic_brain_mini with Ollama:
```shell
ollama run hf.co/pthinc/prettybird_bce_basic_brain_mini:Q4_K_M
```
- Unsloth Studio
How to use pthinc/prettybird_bce_basic_brain_mini with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```shell
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for pthinc/prettybird_bce_basic_brain_mini to start chatting
```
Install Unsloth Studio (Windows)
```shell
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for pthinc/prettybird_bce_basic_brain_mini to start chatting
```
Using HuggingFace Spaces for Unsloth
```shell
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for pthinc/prettybird_bce_basic_brain_mini to start chatting
```
- Pi
How to use pthinc/prettybird_bce_basic_brain_mini with Pi:
Start the llama.cpp server
```shell
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf pthinc/prettybird_bce_basic_brain_mini:Q4_K_M
```
Configure the model in Pi
```shell
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```

Add to `~/.pi/agent/models.json`:

```json
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "pthinc/prettybird_bce_basic_brain_mini:Q4_K_M" }
      ]
    }
  }
}
```

Run Pi
```shell
# Start Pi in your project directory:
pi
```
- Hermes Agent
How to use pthinc/prettybird_bce_basic_brain_mini with Hermes Agent:
Start the llama.cpp server
```shell
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf pthinc/prettybird_bce_basic_brain_mini:Q4_K_M
```
Configure Hermes
```shell
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default pthinc/prettybird_bce_basic_brain_mini:Q4_K_M
```
Run Hermes
```shell
hermes
```
- Docker Model Runner
How to use pthinc/prettybird_bce_basic_brain_mini with Docker Model Runner:
```shell
docker model run hf.co/pthinc/prettybird_bce_basic_brain_mini:Q4_K_M
```
- Lemonade
How to use pthinc/prettybird_bce_basic_brain_mini with Lemonade:
Pull the model
```shell
# Download Lemonade from https://lemonade-server.ai/
lemonade pull pthinc/prettybird_bce_basic_brain_mini:Q4_K_M
```
Run and chat with the model
```shell
lemonade run user.prettybird_bce_basic_brain_mini-Q4_K_M
```
List all available models
```shell
lemonade list
```
🧠 Prettybird Brain Model (BCE) 0.3
by PROMETECH Inc.
Model Overview
Prettybird Brain Model is an advanced AI assistant powered by BCE (Behavioral Consciousness Engine) technology and enhanced through LoRA fine-tuning. The model is designed as a behavioral optimization brain, emphasizing speed, creativity, ethical alignment, and system-level safety.
Due to limited multilingual training data, the model performs approximately 30% less effectively in languages other than English. Its behavioral characteristics are often metaphorically compared to the consciousness of a budgerigar (budgie)—curious, adaptive, and responsive.
Model Details
Model Name: Prettybird Brain Model
Base Model: Qwen2.5-Math-1.5B-Instruct
Architecture: KUSBCE 0.3 (Behavioral Consciousness Engine)
Fine-Tuning Method: LoRA
Developer: PROMETECH BİLGİSAYAR BİLİMLERİ YAZILIM İTHALAT İHRACAT TİCARET ANONİM ŞİRKETİ
Release Year: 2025
Model Type:
- Mathematical reasoning
- Behavioral optimization
- Decision-support / brain-core model
Intended Use
The Prettybird Brain Model is intended to be used as a core cognitive and optimization engine within AI systems rather than as a generic chat assistant.
Primary Use Cases
- Behavioral optimization loops (BCE-based systems)
- Mathematical reasoning and structured problem solving
- Decision-making support systems
- AI orchestration layers (brain–body architectures)
- Ethical and security-aware AI behavior modulation
- Creative reasoning and system-level ideation
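As a sketch of the "core engine" usage pattern above: since the model is meant to sit behind an orchestration layer, a host system would typically talk to it through an OpenAI-compatible endpoint (such as the llama.cpp or vLLM servers described in this document). The helper below only builds the request payload; `build_chat_request` and its parameters are illustrative names, not part of any official API.

```python
import json

MODEL_ID = "pthinc/prettybird_bce_basic_brain_mini"

def build_chat_request(user_content, system_prompt=None):
    """Build an OpenAI-compatible /v1/chat/completions payload for a
    local server hosting the model. The system prompt is optional and
    can carry the orchestrator's behavioral constraints."""
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_content})
    return {"model": MODEL_ID, "messages": messages}

# Example: a decision-support query with a constraint from the controller.
payload = build_chat_request(
    "Solve 12 * 7.",
    system_prompt="Answer with the number only.",
)
print(json.dumps(payload, indent=2))
```

The payload can then be POSTed to the server's `/v1/chat/completions` route with any HTTP client.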
Out-of-Scope Uses
- Fully autonomous agents without external control
- Safety-critical real-time systems without validation layers
- Applications requiring strong non-English language performance
Architecture: BCE (Behavioral Consciousness Engine)
BCE (Behavioral Consciousness Engine) is a patented artificial consciousness simulation technology developed by PROMETECH. It enables:
- Advanced behavioral pattern generation
- Introspective reasoning (without exposing chain-of-thought)
- Adaptive response modulation
- Constraint-aware decision making
- Controlled self-awareness simulations within bounded systems
The KUSBCE 0.3 architecture integrates BCE concepts directly into the model’s reasoning and output discipline, making it suitable for optimizer-driven AI pipelines.
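A minimal sketch of what an optimizer-driven pipeline around the model could look like, under assumptions: the external controller repeatedly queries the model and keeps the best-scoring response. Here `ask_model` is a placeholder for a real call to the model (for example through an OpenAI-compatible server), and `score` stands in for an actual BCE controller; both names are hypothetical, not part of the model's API.

```python
import random

def ask_model(prompt):
    # Placeholder for a real inference call to
    # pthinc/prettybird_bce_basic_brain_mini; returns a candidate answer.
    return f"candidate for: {prompt}"

def propose_score_select(prompt, score, n_candidates=4, seed=0):
    """Propose-score-select loop: sample several candidate responses
    (here varied by a tag appended to the prompt) and return the one
    the external scoring function rates highest."""
    rng = random.Random(seed)
    candidates = [
        ask_model(f"{prompt} [variant {rng.randint(0, 999)}]")
        for _ in range(n_candidates)
    ]
    return max(candidates, key=score)

# Example: trivially score by response length.
best = propose_score_select("Plan a safe action sequence.", score=len)
```

In a real deployment the scoring function would encode the behavioral, ethical, and safety constraints the surrounding sections describe, keeping the model inside a controlled loop rather than acting autonomously.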
WHY?
Because intelligence and consciousness are completed not within a single model but in the relationships between models. For AI systems we distinguish a nervous system, a brainstem, and a cortex; in artificial intelligence and simulated partial consciousness, the first step toward serious safety and efficiency is testing. In short, this model functions like the posterior frontal lobe and the subconscious of a larger system.
Performance Characteristics
Strengths
- High-speed inference and low-latency reasoning
- Strong mathematical and symbolic reasoning
- High creativity under constraint
- Improved ethical and security-aware behavior
- Excellent compatibility with external optimization controllers (BCE / Python-based)
Limitations
- Reduced effectiveness (~30%) in non-English languages
- Not trained for open-ended social conversation
- Requires external orchestration for optimal performance
- Not a guaranteed optimal mathematical solver (heuristic/learned reasoning)
Training & Fine-Tuning
Base Training: Qwen2.5-Math-1.5B-Instruct (original training by Qwen team)
Fine-Tuning:
- LoRA-based domain and behavior adaptation
- BCE-aligned behavioral constraints
Data Sources:
- Proprietary datasets
- Mathematical and reasoning-focused corpora
- Behavioral optimization scenarios
Exact training data details are not publicly disclosed due to proprietary BCE technology.
License
Patented & Licensed BCE Technology
© 2025 PROMETECH A.Ş.
All rights reserved.
Unauthorized reproduction, modification, or commercial use of BCE technology is prohibited without an explicit license agreement.
Contact & Licensing
For licensing, partnerships, commercial work, or technical inquiries regarding the Prettybird Brain Model or BCE technology:
🌐 Website: https://prometech.net.tr/
🏢 Company: PROMETECH A.Ş.
📩 Contact: Please use the official contact channels listed on the website.
Citation
If you use this model in academic or commercial work, please cite as:
Prettybird Brain Model (BCE), PROMETECH A.Ş., 2025.
Powered by KUSBCE 0.3 Behavioral Consciousness Engine.
Model tree for pthinc/prettybird_bce_basic_brain_mini
Base model
Qwen/Qwen2.5-1.5B