Overview:
VT is an open-source, production-ready AI chat application designed with a privacy-first architecture. It provides a single interface to multiple AI providers and models, spanning both cloud-based services and local models running on the user's own machine. The application targets developers and technically minded users who want to experiment with different AI models, keep control over their conversation data, and avoid server-side storage of chat history. It is positioned as a modern, multi-provider AI client rather than a dedicated alternative to any single commercial product.
Core Features:
Nano Banana Conversational Image Editor: Generate and edit images through natural language commands, preserving an edit history.
Multi-AI Provider Support: Connects to OpenAI, Anthropic, Google, Fireworks, Together AI, xAI, and local providers (Ollama, LM Studio).
Local-First Storage: All chat data is stored in the browser’s IndexedDB, with no server-side storage of conversations.
Document Processing and Extraction: Upload PDF, DOC, DOCX, TXT, and MD files for analysis and structured JSON data extraction.
Intelligent Tool Router: Automatically activates relevant tools (web search, chart generation, calculator) based on user queries using semantic matching.
Free and Premium Model Tiers: Access to 9 free server models (e.g., Gemini Flash) and premium models via user-provided API keys.
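The tool-router behavior described above can be sketched as a similarity-threshold match between the user's query and each tool's description. The snippet below is an illustrative sketch only: the tool names, descriptions, threshold, and token-overlap scoring are assumptions for demonstration, not VT's actual implementation (which performs semantic matching, presumably with embeddings rather than raw token overlap):

```typescript
// Illustrative tool router: scores each registered tool's description
// against the query and activates tools above a similarity threshold.
// All names and values here are hypothetical, not taken from VT's code.

type Tool = { name: string; description: string };

const TOOLS: Tool[] = [
  { name: "webSearch",  description: "search the web for current news events information" },
  { name: "chart",      description: "generate chart graph plot visualization of data" },
  { name: "calculator", description: "calculate compute math arithmetic numbers" },
];

// Token-overlap score with cosine-style normalization -- a simple
// stand-in for embedding-based semantic similarity.
function similarity(a: string, b: string): number {
  const ta = new Set(a.toLowerCase().split(/\W+/).filter(Boolean));
  const tb = new Set(b.toLowerCase().split(/\W+/).filter(Boolean));
  let shared = 0;
  for (const t of ta) if (tb.has(t)) shared++;
  return shared / Math.sqrt(ta.size * tb.size);
}

// Return the names of tools whose descriptions clear the threshold,
// best match first.
function routeTools(query: string, threshold = 0.15): string[] {
  return TOOLS
    .map(t => ({ name: t.name, score: similarity(query, t.description) }))
    .filter(t => t.score >= threshold)
    .sort((a, b) => b.score - a.score)
    .map(t => t.name);
}
```

For example, `routeTools("search the web for today's news")` activates only the web-search tool, while a purely arithmetic query routes to the calculator. A real router would swap the token-overlap score for embedding cosine similarity so that paraphrases ("what's happening in the world?") still match.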
Use Cases:
Developers exploring and comparing outputs from different AI models (e.g., GPT-4, Claude, Gemini) through a single chat interface.
Users who need an AI assistant with real-time web search and advanced reasoning capabilities for research or current events.
Individuals who want to run AI models locally for complete privacy, using Ollama or LM Studio, and still access a full-featured chat UI.
Users who need to extract structured data from documents (e.g., PDF invoices, reports) into JSON format.
Why It Matters:
VT distinguishes itself by focusing on data privacy through local-first storage and supporting a broad range of AI providers, including local models, without requiring a paid subscription for basic access. The application’s architecture emphasizes user control over data and model choice, making it a practical option for those who want to avoid ecosystem lock-in or server-side data collection. It is a client-side application that prioritizes transparency in AI reasoning and tool usage, rather than a platform with proprietary models or data policies.
