Instructions for using ghosthets/Dex with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use ghosthets/Dex with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="ghosthets/Dex")

# Load the model directly (AutoModelWithLMHead is deprecated;
# use AutoModelForCausalLM for text generation)
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("ghosthets/Dex", dtype="auto")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use ghosthets/Dex with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "ghosthets/Dex"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ghosthets/Dex",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker
```shell
docker model run hf.co/ghosthets/Dex
```
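The curl call above can also be issued from Python. A minimal sketch using only the standard library; the helper name `build_completion_request` is illustrative, and actually sending the request assumes the vLLM server from above is running on localhost:8000:

```python
import json
from urllib import request


def build_completion_request(model, prompt, max_tokens=512, temperature=0.5,
                             url="http://localhost:8000/v1/completions"):
    """Build a POST request for an OpenAI-compatible /v1/completions endpoint."""
    body = json.dumps({
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }).encode("utf-8")
    return request.Request(url, data=body,
                           headers={"Content-Type": "application/json"})


req = build_completion_request("ghosthets/Dex", "Once upon a time,")
# Send it with:
#   resp = request.urlopen(req)
#   print(json.load(resp)["choices"][0]["text"])
```

The same payload works against the SGLang server below; only the port (30000) differs.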
- SGLang
How to use ghosthets/Dex with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "ghosthets/Dex" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ghosthets/Dex",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "ghosthets/Dex" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ghosthets/Dex",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use ghosthets/Dex with Docker Model Runner:
```shell
docker model run hf.co/ghosthets/Dex
```
# Dex: Your Personal Cybersecurity AI Sidekick
> "Built for the curious, optimized for the underground."
Dex (Digital Exploit eXpert) is an intelligent, cybersecurity-oriented conversational AI built for ethical hackers, CTF warriors, and knowledge-hungry learners. Whether you're crafting payloads, understanding CVEs, or just chatting, Dex makes cyber learning fun, fast, and fearless.
## Model Intelligence
| Feature | Description |
|---|---|
| Name | ghosthets/Dex |
| Type | Causal Language Model |
| Domain | Cybersecurity, CTFs, Ethical Hacking |
| Usage | Educational, research, AI assistant for cyber learners |
| Tools | Text reasoning, payload discussion, exploit explanation |
## Dex Is Trained To
- Talk like a pro on cybersecurity topics
- Explain CVEs, XSS, CSRF, SQLi, and more
- Assist with CTF logic and a recon mindset
- Generate payload hints
- Help you build your own AI security bots
- Run locally on just a CPU or on edge boards
## Run It in Python (Transformers)
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("ghosthets/Dex")
model = AutoModelForCausalLM.from_pretrained("ghosthets/Dex")

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)


def ask_dex(prompt):
    input_text = f"User: {prompt}\nDex:"
    inputs = tokenizer(input_text, return_tensors="pt").to(device)
    outputs = model.generate(
        inputs.input_ids,
        attention_mask=inputs.attention_mask,
        max_length=256,
        do_sample=True,
        top_k=40,
        top_p=0.92,
        temperature=0.7,
        pad_token_id=tokenizer.eos_token_id,
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=True).split("Dex:")[-1].strip()


print(ask_dex("How to identify an XSS vulnerability?"))
```
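The final line of `ask_dex` both decodes the output and strips the echoed prompt: causal LMs return the prompt plus the continuation, so everything after the last `Dex:` marker is the reply. That parsing step can be factored into a pure helper and tested without loading the model (`extract_reply` is an illustrative name, not part of the model card):

```python
def extract_reply(decoded, marker="Dex:"):
    """Return the text after the last occurrence of the assistant marker.

    The decoded generation still contains the "User: ... Dex:" framing,
    so we split on the marker and keep only the final segment.
    """
    return decoded.split(marker)[-1].strip()


print(extract_reply("User: hi\nDex: Hello! How can I help?"))
# → Hello! How can I help?
```

If no marker is present (e.g. the model was prompted without the framing), the helper simply returns the whole string stripped.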
## Edge-Device Ready

Lightweight. Efficient. Ready to hack.
Dex is engineered to run smoothly on low-power devices like:
- Raspberry Pi Zero W
- Pi 2W & Pi 4
- Tinker Boards / NanoPi / Pocket PCs
- Light virtual containers and offline setups
Despite its small size, Dex delivers powerful outputs and will soon be trained on more curated data, making it sharper, smarter, and more secure.
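Before targeting one of the boards above, a back-of-envelope check helps: parameter count times bytes per parameter gives the minimum RAM for the weights alone (activations and KV cache add more). A minimal sketch; the 124M figure is an illustrative example, not Dex's published parameter count:

```python
def weight_memory_mb(n_params, bytes_per_param=4):
    """Approximate RAM needed for model weights alone.

    bytes_per_param: 4 for fp32, 2 for fp16/bf16, 1 for int8.
    """
    return n_params * bytes_per_param / (1024 ** 2)


# e.g. a hypothetical 124M-parameter model:
print(round(weight_memory_mb(124_000_000)), "MB in fp32")
print(round(weight_memory_mb(124_000_000, bytes_per_param=1)), "MB in int8")
```

On a Pi Zero W with 512 MB of RAM, this arithmetic is what makes reduced-precision (fp16/int8) loading attractive.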
## Perfect For
- Ethical Hackers
- Red Teamers & Pentesters
- Bug Bounty Hunters
- Cybersecurity Students
- AI + Security Researchers
- Solo Cyber Ninjas
## Built with ❤️ by
Gaurav Chouhan
aka ghosthets
GitHub: ghosthets
From India