LLMKube
A Kubernetes operator for llama.cpp-native LLM inference, with GPU scheduling, Apple Silicon Metal support, and an OpenAI-compatible API.
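Since the API is described as OpenAI-compatible, a standard chat-completions request should work against a deployed model. A minimal sketch of such a request body; the in-cluster service URL and model name below are assumptions for illustration, not taken from LLMKube's documentation:

```python
import json

# Hypothetical in-cluster address of an LLMKube-served model -- substitute
# whatever Service your deployment actually exposes.
BASE_URL = "http://llmkube-service.default.svc.cluster.local:8080/v1"

# Standard OpenAI chat-completions request schema; any OpenAI client
# pointed at BASE_URL + "/chat/completions" should accept this shape.
payload = {
    "model": "llama-3",  # assumed model name; depends on your deployment
    "messages": [
        {"role": "user", "content": "Hello!"},
    ],
}

print(json.dumps(payload, indent=2))
```

Because the wire format matches OpenAI's, existing OpenAI SDKs can typically be reused by overriding their base URL to point at the cluster service.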
Related Applications
Ollama
166.4k · Get up and running with Llama 3.3, DeepSeek-R1, Phi-4, Gemma 3, and other large language models
LocalAI
44.5k · Run your AI models locally and generate images and audio
Teleport
20.1k · Certificate authority and access plane for SSH, Kubernetes, web applications, and databases
Onyx Community Edition
19.7k · Chat UI that works with any LLM. It comes loaded with advanced features like agents, web search,...
Stalwart Mail Server
12.1k · All-in-one mail server with JMAP, IMAP4, and SMTP support and a wide range of modern features
Flipt
4.8k · Feature flag solution with support for multiple data backends