llama
Date | Project Name | 🎉 | Language / ⭐ Stars / Age | Tags |
---|---|---|---|---|
08/04 | [LocalAI v3.3.2](https://github.com/mudler/LocalAI) – Self-hosted alternative to popular AI APIs for local inferencing on consumer-grade hardware. | 8 | Go · 34357 ⭐ · 870 days old | ai llm go llama rwkv stable-diffusion |
08/01 | [llama.vscode v0.0.13](https://github.com/ggml-org/llama.vscode) – VS Code extension providing local LLM-assisted text completion with configurable context and performance options. | 6 | TypeScript · 876 ⭐ · 210 days old | extension typescript llm llama copilot developer-tool |
08/01 | [LocalAI v3.3.1](https://github.com/mudler/LocalAI) – Self-hosted alternative to popular AI APIs for local inferencing on consumer-grade hardware. | 8 | Go · 34357 ⭐ · 870 days old | ai llm go llama rwkv stable-diffusion |
07/31 | [Transformer Lab v0.20.7](https://github.com/transformerlab/transformerlab-app) – Cross-platform app for downloading, training, fine-tuning, chatting with, and evaluating large language and diffusion models. | 6 | TypeScript · 3709 ⭐ · 589 days old | electron typescript llama llms lora rlhf |
07/31 | [HelixML 2.0.0](https://github.com/helixml/helix) – Private GenAI stack for deploying AI agents with support for RAG, API calls, vision, and efficient GPU scheduling. | 8 | Go · 509 ⭐ · 658 days old | golang llm openai go llama mistral |
07/31 | [llama.rn v0.6.5](https://github.com/mybigday/llama.rn) – React Native binding for running LLaMA model inference with multimodal support including vision and audio. | 5 | C · 600 ⭐ · 734 days old | react-native android llm c llama llama-cpp |
07/30 | [Ollama v0.10.1](https://github.com/ollama/ollama) – Tool for running and managing large language models. | 7 | Go · 148689 ⭐ · 770 days old | llm go llama llama2 llms |
07/30 | [Ollama v0.10.0](https://github.com/ollama/ollama) – Tool for running and managing large language models. | 9 | Go · 148689 ⭐ · 770 days old | llm go llama llama2 llms |
07/29 | [node-llama-cpp v3.11.0](https://github.com/withcatai/node-llama-cpp) – Run AI models locally with Node.js bindings for llama.cpp. | 8 | TypeScript · 1607 ⭐ · 723 days old | typescript ai llama bindings catai llama-cpp |
07/28 | [LocalAI v3.3.0](https://github.com/mudler/LocalAI) – Self-hosted alternative to popular AI APIs for local inferencing on consumer-grade hardware. | 9 | Go · 34357 ⭐ · 870 days old | ai llm go llama rwkv stable-diffusion |
07/26 | [LocalAI v3.2.3](https://github.com/mudler/LocalAI) – Self-hosted alternative to popular AI APIs for local inferencing on consumer-grade hardware. | 7 | Go · 34357 ⭐ · 870 days old | ai llm go llama rwkv stable-diffusion |
07/25 | [LocalAI v3.2.2](https://github.com/mudler/LocalAI) – Self-hosted alternative to popular AI APIs for local inferencing on consumer-grade hardware. | 8 | Go · 34357 ⭐ · 870 days old | ai llm go llama rwkv stable-diffusion |
07/25 | [LocalAI v3.2.1](https://github.com/mudler/LocalAI) – Self-hosted alternative to popular AI APIs for local inferencing on consumer-grade hardware. | 8 | Go · 34357 ⭐ · 870 days old | ai llm go llama rwkv stable-diffusion |
07/24 | [LocalAI v3.2.0](https://github.com/mudler/LocalAI) – Self-hosted alternative to popular AI APIs for local inferencing on consumer-grade hardware. | 9 | Go · 34357 ⭐ · 870 days old | ai llm go llama rwkv stable-diffusion |