| Date | Score | Description |
| --- | --- | --- |
| 03/14 | 10 | Self-hosted alternative to popular AI APIs for local inference on consumer-grade hardware. |
| 03/13 | 9 | Tool for running and managing large language models. |
| 03/15 | 8 | Run AI models locally with Node.js bindings for llama.cpp. |
| 03/11 | 8 | Cross-platform app for downloading, training, fine-tuning, chatting with, and evaluating large language and diffusion models. |
| 03/16 | 7 | Private GenAI stack for deploying AI agents with support for RAG, API calls, vision, and efficient GPU scheduling. |
| 03/16 | 7 | Go-based library for hardware-accelerated local inference with llama.cpp integration. |
| 02/26 | 7 | Desktop app for running large language models locally, with cross-platform support and integrated image generation. |
| 02/21 | 7 | Ruby bindings for llama.cpp, enabling easy integration of the library into Ruby applications. |
| 03/08 | 6 | React Native binding for running LLaMA model inference with multimodal support, including vision and audio. |
| 03/13 | 5 | Reliable on-demand model switching between local OpenAI-compatible inference servers without restarting applications. |