The private AI workstation for Law, Healthcare, and Enterprise. Run advanced LLMs and agentic workflows entirely on your local hardware. Zero cloud data leaks, total GDPR compliance.
LlamaSharp native inference. Speech support via Whisper STT plus TTS integration. Supports GGUF models up to 70B parameters and beyond.
Local ONNX-based embeddings. Multilingual RAG with persistent source citations and context snippets.
Time & File triggers. Native MCP (Model Context Protocol) nodes with granular permissions. Includes native AD/LDAP and IMAP/SMTP integrations for autonomous agent routing.
OpenAI-compatible local endpoint. Collaborate via LAN with shared Vector-Collections.
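Because the local endpoint speaks the OpenAI wire format, any standard OpenAI client or SDK works unchanged once its base URL points at the server. A minimal sketch of a chat-completion request body — the LAN address, port, and model name below are placeholders, not fixed defaults:

```python
import json

# Hypothetical LAN address of a Watson server; substitute the address
# your own instance reports.
BASE_URL = "http://192.168.1.50:8080/v1"

# Standard OpenAI-style chat-completions payload.
payload = {
    "model": "llama-3-8b-instruct",  # any locally loaded GGUF model
    "messages": [
        {"role": "system", "content": "You are a contract-review assistant."},
        {"role": "user", "content": "Summarize the termination clause."},
    ],
    "temperature": 0.2,
}

body = json.dumps(payload)
# POST `body` to f"{BASE_URL}/chat/completions" with any HTTP client;
# existing OpenAI SDKs work as-is by setting base_url to BASE_URL.
```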
The Watson API Server is the heart of your hybrid AI infrastructure: it combines the power of cloud services with the security of local control, including granular budget monitoring that protects your credit card.
Set strict monthly token limits per user or API key. A runaway script or an infinite loop is stopped immediately, before any cloud costs are incurred.
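The budget gate described above can be sketched as a simple pre-spend check. This is an illustrative assumption about the mechanism, not Watson's actual implementation — the class name and limits are invented for the example:

```python
class TokenBudget:
    """Illustrative per-key monthly token budget (not the real Watson class)."""

    def __init__(self, monthly_limit: int):
        self.monthly_limit = monthly_limit
        self.used = 0

    def try_spend(self, tokens: int) -> bool:
        """Reject the request if it would exceed the monthly cap."""
        if self.used + tokens > self.monthly_limit:
            return False  # blocked before any cloud cost is incurred
        self.used += tokens
        return True


budget = TokenBudget(monthly_limit=100_000)
first_ok = budget.try_spend(60_000)    # fits within the cap
second_ok = budget.try_spend(50_000)   # would exceed it -> rejected
```

The key design point is that the check happens before the cloud call, so a rejected request never reaches the paid provider.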
Use cloud vector databases (such as Pinecone or Qdrant) directly via the Watson server. All connections and keys are stored using AES-256 encryption.
Decide individually, per API user, whether they may access local models, cloud providers, or specific internal knowledge bases.
Forget about pseudo-hidden user interfaces that hog valuable memory in the background. EIDOSDynamics can run as a pure, resource-efficient background process (CLI). Perfect for Windows services, cron jobs, and MLOps pipelines.
With the new Secure Hybrid Engine, you no longer have to choose between data protection and AI power. Use local GGUF models for highly confidential files and add global cloud AI for general tasks – secured by our impenetrable HITL firewall.
Our standard mode. Models like Llama-3 or Mistral run 100% in your hardware's RAM. No APIs, no internet, absolute data sovereignty.
Optionally, you can store your API keys for providers like OpenAI or Anthropic. At any time, you can select which model should handle the current task via a dropdown menu in the chat or workflow builder.
If a cloud AI in an autonomous workflow attempts to read local data (Active Directory, SQL, files) via our MCP tools, EIDOSDynamics immediately blocks the action. The execution is frozen and requires your manual approval in the Approval Dashboard (human-in-the-loop).
Learn why cloud-based LLMs (like ChatGPT) pose a liability risk for law firms and practices, and how to deploy autonomous AI agents in a GDPR-compliant and 100% local manner. Includes a practical example and checklist.
EIDOSDynamics utilizes Apple Metal acceleration. Since GPU and CPU share the same memory (Unified Memory), RAM is the key to performance.
Native support for NVIDIA CUDA and Microsoft DirectML. VRAM is your most important asset.
| Feature | Community | Personal | Professional | Enterprise |
|---|---|---|---|---|
| Local Intelligence | Chat + RAG | Chat (MCP included) + RAG | Complete AI Suite | LLM Trainer |
| Automation | — | Basic Workflows (Files/Timers) | Agentic Workflows + Text-to-Workflow | Same as Professional |
| Enterprise Integrations | — | — | Web Scraper, AD/LDAP, Email | Same as Professional |
| Resilience & Security | — | — | Auto Recovery & Audit Logs | Self-Healing MCP Nodes |
| Watson Server (Team API) | — | Basic API Access | Execution Dashboard & HITL | Same as Professional |
| True Headless Mode (CLI) | — | — | Daemon & Automation | Complete (incl. MLOps) |
| Licensing (Hardware-Bound) | — | 99 € / Year | 390 € / Year | 1,499 € / Year |
Built with .NET & Avalonia. No Chromium overhead, no Electron lag. Just pure native OS power.
No Python install. No `pip install` headaches. No virtual environments. Drag, drop, and work.
Fully functional without an internet connection. Your data remains on your physical hardware, forever.
Seamlessly connect EIDOSDynamics with your local IDE, PDF libraries, and SQL databases. Use the Intent Router and Parallel Fork for high-speed, autonomous decision making. Crucially, you retain absolute control: assign specific permissions, monitor actions via Audit Logs, and require manual approval for critical tasks.
Connect to your local IDE, PDF libraries, and SQL databases without external orchestrators.
Dictate exactly which MCP capabilities are granted to each individual node in your workflow for maximum security.
A dedicated UI tab allows agents to pause workflows and request your explicit approval before executing critical tasks.
Every single MCP action is permanently recorded in a dedicated, tamper-proof SQLite database to ensure full compliance and traceability.
The Holy Grail of Enterprise AI: Train open-source models with your own legal or medical terminology. No dependency hell, no Python setup, no cloud GPUs. The EIDOSDynamics LLM Trainer makes native LoRA fine-tuning a 5-step process. 100% local.
Drag and drop your PDFs. The AI autonomously generates clean Q&A training data in .jsonl format (80/20 train/eval split).
Optional: Create lightning-fast intent classification models from your data for routing workflows.
The compiled native engine performs the training (weight adjustments) directly on your hardware in a resource-efficient manner.
With one click, the new knowledge (the LoRA adapter) is merged permanently into your chosen base model (safetensors).
The trained model is quantized for maximum performance (e.g., Q8_0) and is immediately ready for use in chat or workflow.
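The dataset step above produces plain `.jsonl` with an 80/20 train/eval split. A minimal sketch of that output shape — the field names are assumptions for illustration, not the trainer's exact schema:

```python
import json
import random

# Hypothetical Q&A pairs as the dataset generator might emit them.
pairs = [{"question": f"Q{i}", "answer": f"A{i}"} for i in range(10)]

random.seed(0)
random.shuffle(pairs)

# 80/20 split: first 80% for training, remainder for evaluation.
cut = int(len(pairs) * 0.8)
train, eval_set = pairs[:cut], pairs[cut:]

# JSON Lines: one JSON object per line, written to train.jsonl / eval.jsonl.
train_jsonl = "\n".join(json.dumps(p) for p in train)
```

Each line is an independent JSON object, which is what makes `.jsonl` easy to stream into a fine-tuning run without loading the whole corpus at once.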
Buy, own, keep. All editions are hardware-bound (1 host = 1 license). Your perpetual license guarantees you permanent access to the version you purchased – with no hidden recurring costs.
Permanently free
Does your IT team lack the resources for LLM fine-tuning or building complex RAG vector databases? Our experts at EIDOSDynamics Consulting handle turnkey integration directly on your hardware.
Discover Consulting Services →
Optimal architecture for Mac & CUDA.
Integration of internal vector databases.
Custom MCP nodes & agents.