| Date | Score | Description |
| --- | --- | --- |
| 02/18 | 9 | Full-stack application for turning documents into context for LLMs with multi-user management. |
| 03/18 | 7 | Full-stack application for turning documents into context for LLMs with multi-user management. |
| 03/12 | 7 | macOS app that captures scheduled background and foreground screenshots and converts them into timelines, daily summaries, project milestones, and addiction tracking with optional end-to-end encrypted social sharing. |
| 03/10 | 7 | macOS app that captures scheduled background and foreground screenshots and converts them into timelines, daily summaries, project milestones, and addiction tracking with optional end-to-end encrypted social sharing. |
| 03/04 | 7 | GUI and toolkit for managing Codex CLI projects, sessions, file-tree navigation, prompt notepad, git worktrees, file viewers, and usage analytics. |
| 03/02 | 7 | Full-stack application for turning documents into context for LLMs with multi-user management. |
| 03/08 | 6 | Browser extension connecting local AI models to a sidebar and web UI for interacting with any webpage. |
| 03/01 | 6 | Browser extension connecting local AI models to a sidebar and web UI for interacting with any webpage. |
| 02/28 | 6 | Private AI workspace combining chat, image generation, visual workflows, and local model support with browser-local storage for confidential conversations. |
| 02/22 | 6 | Browser extension connecting local AI models to a sidebar and web UI for interacting with any webpage. |
| 02/17 | 6 | Go-based terminal UI for real-time log analysis with charts, filtering, and AI-powered insights. |
| 03/12 | 5 | macOS app that captures scheduled background and foreground screenshots and converts them into timelines, daily summaries, project milestones, and addiction tracking with optional end-to-end encrypted social sharing. |
| 03/12 | 5 | Extra-small SDK for integrating with OpenAI or compatible APIs across various runtimes. |
| 02/22 | 4 | High-performance LLM proxy and load balancer providing intelligent routing, automatic failover, and unified model discovery across local and remote inference backends. |
| 02/20 | 4 | High-performance LLM proxy and load balancer providing intelligent routing, automatic failover, and unified model discovery across local and remote inference backends. |
| 03/05 | 3 | Platform to run AI models on personal data with easy installation and usage. |
| 02/26 | 3 | Platform to run AI models on personal data with easy installation and usage. |
| 02/17 | 2 | Platform to run AI models on personal data with easy installation and usage. |
| 02/17 | 2 | Platform to run AI models on personal data with easy installation and usage. |
| 02/17 | 2 | Platform to run AI models on personal data with easy installation and usage. |