| Date  | Score | Description |
|-------|-------|-------------|
| 03/16 | 7 | Go-based library for hardware-accelerated local inference with llama.cpp integration. |
| 03/11 | 7 | Runs local and cloud AI models with privacy-focused control and customizable assistants. |
| 03/05 | 7 | React Native package for running AI language models locally with support for text, vision, and tool calling. |
| 02/23 | 7 | Go-based library for hardware-accelerated local inference with llama.cpp integration. |
| 02/23 | 7 | React Native package for running AI language models locally with support for text, vision, and tool calling. |
| 03/16 | 6 | Go-based library for hardware-accelerated local inference with llama.cpp integration. |
| 03/09 | 6 | React Native package for running AI language models locally with support for text, vision, and tool calling. |
| 03/05 | 6 | React Native package for running AI language models locally with support for text, vision, and tool calling. |
| 03/18 | 5 | React Native package for running AI language models locally with support for text, vision, and tool calling. |
| 03/18 | 5 | React Native package for running AI language models locally with support for text, vision, and tool calling. |
| 03/13 | 5 | Reliable on-demand model switching between local OpenAI-compatible inference servers without restarting applications. |
| 03/02 | 5 | Reliable on-demand model switching between local OpenAI-compatible inference servers without restarting applications. |
| 02/28 | 5 | Reliable on-demand model switching between local OpenAI-compatible inference servers without restarting applications. |
| 02/22 | 5 | Reliable on-demand model switching between local OpenAI-compatible inference servers without restarting applications. |
| 02/20 | 5 | Reliable on-demand model switching between local OpenAI-compatible inference servers without restarting applications. |