Self-hosted LLM API — OpenAI-compatible
All `/v1/*` endpoints require a Bearer token:

```
Authorization: Bearer YOUR_API_KEY
```

The `/health` endpoint is public.
Chat completion (messages format). Supports streaming via `"stream": true`.

```shell
curl https://llm.ftrz.de/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen3-coder",
    "messages": [
      {"role": "system", "content": "You are a helpful coding assistant."},
      {"role": "user", "content": "Write a fibonacci function in Python"}
    ],
    "max_tokens": 512,
    "temperature": 0.7
  }'
```
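With `"stream": true`, OpenAI-compatible servers typically respond with Server-Sent Events, one JSON chunk per `data:` line (assumed here from the OpenAI streaming format, not verified against this deployment). A minimal Python sketch for extracting the text delta from one such line:

```python
import json


def parse_sse_line(line: str):
    """Return the text delta from one SSE 'data:' line, or None.

    Assumes the standard OpenAI streaming chunk shape:
    {"choices": [{"delta": {"content": "..."}}]}
    """
    if not line.startswith("data: "):
        return None  # comments, keep-alives, blank lines
    payload = line[len("data: "):]
    if payload.strip() == "[DONE]":
        return None  # end-of-stream sentinel
    chunk = json.loads(payload)
    return chunk["choices"][0]["delta"].get("content")
```

Feed each line of the response body through this and concatenate the non-`None` results to reassemble the completion.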
Text completion (prompt format).

```shell
curl https://llm.ftrz.de/v1/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen3-coder",
    "prompt": "def binary_search(arr, target):",
    "max_tokens": 256
  }'
```
List available models.
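On OpenAI-compatible servers the model list is usually served at `GET /v1/models`; a stdlib-only sketch (endpoint path and response shape assumed from the OpenAI spec, not confirmed for this deployment):

```python
import json
import urllib.request

BASE_URL = "https://llm.ftrz.de/v1"
API_KEY = "YOUR_API_KEY"


def list_models():
    """GET /v1/models with the Bearer token; returns the parsed JSON body."""
    req = urllib.request.Request(
        f"{BASE_URL}/models",
        headers={"Authorization": f"Bearer {API_KEY}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Typical OpenAI shape: {"object": "list", "data": [{"id": ...}, ...]}
    for model in list_models()["data"]:
        print(model["id"])
```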
Health check (no auth required). Returns server status and loaded model info.
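Since `/health` is public, the request carries no `Authorization` header. A stdlib-only sketch (the exact response body is server-defined, so it is just returned as parsed JSON):

```python
import json
import urllib.request


def health_request(base_url: str = "https://llm.ftrz.de"):
    """Build the GET /health request; note: no Authorization header."""
    return urllib.request.Request(f"{base_url}/health")


def check_health(base_url: str = "https://llm.ftrz.de"):
    """GET /health and return the parsed JSON status body."""
    with urllib.request.urlopen(health_request(base_url)) as resp:
        return json.load(resp)
```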
OpenAI Python SDK

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://llm.ftrz.de/v1",
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="qwen3-coder",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```
| Spec | Value |
|---|---|
| Hardware | AMD Ryzen AI Max 395 — 96 GB unified memory |
| Backend | llama.cpp (ROCm HIP) |
| Speed | ~35 tok/s generation, ~50 tok/s prompt processing |
| Rate limit | 30 req/min per IP |
| Timeout | 600 s per request |
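With a 30 req/min per-IP limit, clients that burst requests should expect HTTP 429 responses. A small retry helper using exponential backoff with full jitter (a generic client-side pattern, not something this server mandates; the `is_retryable` check is a placeholder for however your HTTP client surfaces 429s):

```python
import random
import time


def backoff_delay(attempt: int, base: float = 2.0, cap: float = 60.0) -> float:
    """Delay in seconds before retry number `attempt` (0-based):
    exponential growth capped at `cap`, with full jitter."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))


def with_retries(call, max_attempts: int = 5,
                 is_retryable=lambda exc: "429" in str(exc)):
    """Run call(); on a retryable failure, sleep backoff_delay(i) and retry."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception as exc:
            if attempt == max_attempts - 1 or not is_retryable(exc):
                raise
            time.sleep(backoff_delay(attempt))
```

Wrap any of the request functions above, e.g. `with_retries(list_models)`, to ride out transient rate limiting (any `list_models` name here is from the earlier sketch and is hypothetical for this deployment).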