100% Offline AI Workstation

ChatGPT-Power.
100% Offline.

The private AI workstation for Law, Healthcare, and Enterprise. Run advanced LLMs and agentic workflows entirely on your local hardware. Zero cloud data leaks, total GDPR compliance.

macOS Apple Silicon • 2026.2.2
~472 MB (Standalone Installer)
Windows Windows x64 • 2026.2.2
~2.9 GB (CUDA & Torch included, zero setup)
✓ No Cloud ✓ No Subscriptions ✓ Local RAG
EIDOSDynamics Interface
Engine

Hybrid LLM Chat

LlamaSharp native inference. Whisper STT/TTS integration. Supports GGUF models up to 70B+.

Knowledge

SQLite Vector-DB

Local ONNX-based embeddings. Multi-lingual RAG with persistent source citations and context snippets.

Automation

Workflow Builder

Time & File triggers. Native MCP (Model Context Protocol) nodes with granular permissions. Includes native AD/LDAP and IMAP/SMTP integrations for autonomous agent routing.

Connectivity

Watson API Server

OpenAI-compatible local endpoint. Collaborate via LAN with shared Vector-Collections.
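Because the endpoint is OpenAI-compatible, any standard client can talk to it. A minimal sketch with curl, assuming the server listens on port 8080 of a LAN host and an API key has been created (host, port, key, and model name are examples, not product defaults):

```shell
# Sketch: query the Watson API Server from another machine on the LAN.
# The host, port, key, and model name below are assumptions -- use the
# values shown in your own server settings.
WATSON_URL="http://192.168.1.50:8080/v1/chat/completions"
PAYLOAD='{"model":"llama-3-8b","messages":[{"role":"user","content":"Hello"}]}'
echo "POST $WATSON_URL"
# Uncomment to send against a running server:
# curl -s -H "Authorization: Bearer $WATSON_API_KEY" \
#      -H "Content-Type: application/json" \
#      -d "$PAYLOAD" "$WATSON_URL"
```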

Watson API User Management
Analyst_North_Office
Access: Local GGUF Only
Unlimited
Local Tokens
External_Partner_API
Access: Hybrid (Cloud AI + Cloud VectorDB)
42,850 / 50,000
Monthly Token Limit
Management & Compliance

Watson Gateway:
Full cost control.

The Watson API Server is the heart of your hybrid AI infrastructure. For the first time, you combine the power of cloud services with the security of local control – including granular budget monitoring that protects your credit card.

Token-based cost control

Set strict monthly token limits per user or API key. A faulty script or an infinite loop in the cloud will be stopped immediately before any costs are incurred.

Hybrid RAG & Cloud VectorDBs

Use cloud vector databases (such as Pinecone or Qdrant) directly via the Watson server. All connections and keys are stored using AES-256 encryption.

Granular access control

Determine individually for each API user whether they are allowed to access local models, cloud providers, or specific internal knowledge bases.

Pro & Enterprise

True Headless Mode.
100% Automatable.

Forget about pseudo-hidden user interfaces that hog valuable memory in the background. EIDOSDynamics can run as a pure, resource-efficient background process (CLI). Perfect for Windows services, cron jobs, and MLOps pipelines.

  • Daemonized agents: Start workflows and the Watson API server entirely without a UI.
  • Scriptable fine-tuning: Automate LoRA training and GGUF conversions via the command line.
  • Nightly RAG updates: Have file folders automatically converted into vector collections overnight using a batch script.
Read CLI Documentation
admin@eidosdynamics-server: ~
$ EIDOSDynamics --build-dataset --input-dir "C:\Contracts" --qa-ratio 20
>>> Starting Headless Dataset Generation...
>>> Dataset successfully generated at: \output\training.jsonl
$ EIDOSDynamics --finetune --base-model "Llama-3-8B" --epochs 3
>>> Starting Headless LoRA Fine-Tuning...
Starting LoRA Finetuning on GPU 0...
Epoch 1/3: ━━━━━━━━━━━━━━━━━━━━━━━ 100%
>>> [SUCCESS] LoRA Fine-Tuning completed!
$ EIDOSDynamics --daemon --run-workflow "Legal_RAG_Nightly"
>>> Workflow Automation Daemon is running in Headless Mode.
>>> Executing Workflow Legal_RAG_Nightly...
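A transcript like the one above can be wrapped in a plain shell script for schedulers such as cron or Windows Task Scheduler. This is a sketch; the binary path and log location are examples, while the `--daemon --run-workflow` flags match the transcript:

```shell
#!/bin/sh
# Nightly RAG refresh wrapper -- invokes the documented headless workflow.
# EIDOS_BIN is an assumed install path; adjust to your system.
EIDOS_BIN="/usr/local/bin/EIDOSDynamics"
WORKFLOW="Legal_RAG_Nightly"
echo "Running workflow: $WORKFLOW"
# Uncomment on a machine with EIDOSDynamics installed:
# "$EIDOS_BIN" --daemon --run-workflow "$WORKFLOW" >> /var/log/eidos_nightly.log 2>&1
# Example crontab entry (runs at 02:30 every night):
# 30 2 * * * /usr/local/bin/eidos_nightly.sh
```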

Privacy by Architecture

The Cloud Risk

  • ✕ Data leaks via third-party APIs
  • ✕ Servers outside GDPR jurisdiction
  • ✕ Your secrets used for model training
  • ✕ Hidden monthly API costs

The EIDOSDynamics Solution

  • ✓ 100% Local Inference (LlamaSharp)
  • ✓ Zero Data Telemetry
  • ✓ Air-Gapped environment support
  • ✓ One-time activation, lifetime use
EIDOSDynamics Hybrid Architecture

Locally protected.
Globally connected (if you want).

With the new Secure Hybrid Engine, you no longer have to choose between data protection and AI power. Use local GGUF models for highly confidential files and add global cloud AI for general tasks – secured by our impenetrable HITL firewall.

Air-Gapped Default

On-premise GGUF Models

Our standard mode. Models like Llama-3 or Mistral run 100% in your hardware's RAM. No APIs, no internet, absolute data sovereignty.

  • Perfect for: contracts, patient records, balance sheets.
  • RAG search in the local SQLite vector database.
  • Guaranteed GDPR compliance.
Optional Multi-Cloud

Cloud AI Integration

Optionally, you can store your API keys for providers like OpenAI or Anthropic. At any time, you can select which model should handle the current task via a dropdown menu in the chat or workflow builder.

The HITL firewall intervenes

If a cloud AI in an autonomous workflow attempts to read local data (Active Directory, SQL, files) via our MCP tools, EIDOSDynamics immediately blocks the action. The execution is frozen and requires your manual approval in the Approval Dashboard (human-in-the-loop).

Free whitepaper

The AI Compliance Guide 2026

Learn why cloud-based LLMs (like ChatGPT) pose a liability risk for law firms and practices, and how to deploy autonomous AI agents in a GDPR-compliant and 100% local manner. Includes a practical example and checklist.

Download PDF Instant download (no registration required)
Engine LlamaSharp / llama.cpp
Optimization Metal / CUDA / DirectML
Standards GGUF / RAG-Ready
Compliance GDPR / DSGVO

Hardware Intelligence Guide

 Apple Silicon (M-Series)

EIDOSDynamics utilizes Apple Metal acceleration. Since GPU and CPU share the same memory (Unified Memory), RAM is the key to performance.

  • 8GB - 16GB (Entry Level) Perfect for Phi-3, Mistral or Llama-3 (8B) models. Ideal for fast chat and document scanning.
  • 32GB - 64GB (Professional) Run high-parameter models (30B+) with massive document context in the SQLite Vector DB.

⊞ Windows x64 Workstations

Native support for NVIDIA CUDA and Microsoft DirectML. VRAM is your most important asset.

  • 8GB VRAM (Entry Level) Great for fast 8B models. System RAM handles the Vector DB and UI seamlessly.
  • 16GB - 24GB VRAM (Professional) Ideal for complex Workflows, large context windows, and hosting the Watson API Server.

Edition Comparison

| Feature | Community | Personal | Professional | Enterprise |
| --- | --- | --- | --- | --- |
| Local Intelligence | Chat + RAG | Chat (MCP included) + RAG | Complete AI-Suite | LLM Trainer |
| Automation | ✕ | Basic Workflows (Files/Timers) | Agentic Workflows + Text-To-Workflow | as with Professional |
| Enterprise Integrations | ✕ | ✕ | Webscraper, AD/LDAP, E-Mail | as with Professional |
| Resilience & Security | ✕ | ✕ | Auto Recovery & Audit Logs | Self-Healing MCP Nodes |
| Watson Server (Team API) | ✕ | Basic API Access | Execution Dashboard & HITL | as with Professional |
| True Headless Mode (CLI) | ✕ | ✕ | Daemon & Automation | Complete (incl. MLOps) |
| Licensing (Hardware Bound) | Free | 99 € / year | 390 € / year | 1,499 € / year |

Native Performance

Built with .NET & Avalonia. No Chromium overhead, no Electron lag. Just pure native OS power.

Zero Dependency

No Python install. No `pip install` headaches. No virtual environments. Drag, drop, and work.

Air-Gap Ready

Fully functional without an internet connection. Your data remains on your physical hardware, forever.

Live: The Agentic Revolution

Granular Control with MCP Nodes

Seamlessly connect EIDOSDynamics with your local IDE, PDF libraries, and SQL databases. Use the Intent Router and Parallel Fork for high-speed, autonomous decision making. Crucially, you retain absolute control: assign specific permissions, monitor actions via Audit Logs, and require manual approval for critical tasks.

Native Integration

Seamlessly connect EIDOSDynamics with your local IDE, PDF libraries, and SQL databases without external orchestrators.

Granular Security

Dictate exactly which MCP capabilities are granted to each individual node in your workflow for maximum security.

Human-in-the-Loop (HITL)

A dedicated UI tab allows agents to pause workflows and request your explicit approval before executing critical tasks.

Revision-Safe Audit Logs

Every single MCP action is permanently recorded in a dedicated, tamper-proof SQLite database to ensure full compliance and traceability.

Enterprise Exclusive

Your data. Your own AI.

The Holy Grail of Enterprise AI: Train open-source models with your own legal or medical terminology. No dependency hell, no Python setup, no cloud GPUs. The EIDOSDynamics LLM Trainer makes native LoRA fine-tuning a 5-step process. 100% local.

1

Dataset Builder

Drag and drop your PDFs. The AI autonomously generates Q&A training data in .jsonl format (80/20 train/validation split).

2

SLM Classifier

Optional: Create lightning-fast intent classification models from your data for routing workflows.

3

LoRA Fine-Tuner

The compiled native engine performs the training (weight adjustments) directly on your hardware in a resource-efficient manner.

4

Model Fuser

With one click, the new knowledge (the LoRA adapter) is merged permanently into your chosen base model (Safetensors).

5

GGUF Converter

The trained model is quantized for maximum performance (e.g., Q8_0) and is immediately ready for use in chat or workflow.
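The five steps above map onto the headless CLI shown earlier. This sketch chains them in one script: `--build-dataset` and `--finetune` match the documented transcript, while `--fuse-model` and `--convert-gguf` are assumed flag names for steps 4 and 5 (check the CLI documentation for the real ones):

```shell
#!/bin/sh
# Sketch of the 5-step LoRA trainer pipeline as a single headless run.
# --build-dataset and --finetune are taken from the documented transcript;
# --fuse-model and --convert-gguf are ASSUMED names for steps 4 and 5.
set -e
INPUT_DIR="C:/Contracts"    # example source folder
BASE_MODEL="Llama-3-8B"
echo "Pipeline for base model: $BASE_MODEL"
# Uncomment on a machine with the Enterprise edition installed:
# EIDOSDynamics --build-dataset --input-dir "$INPUT_DIR" --qa-ratio 20
# EIDOSDynamics --finetune --base-model "$BASE_MODEL" --epochs 3
# EIDOSDynamics --fuse-model --base-model "$BASE_MODEL"     # assumed flag
# EIDOSDynamics --convert-gguf --quant Q8_0                 # assumed flag
```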

From Release 2026.2.1

True perpetual licenses. No SaaS requirement.

Buy, own, keep. All editions are hardware-bound (1 host = 1 license). Your perpetual license guarantees you permanent access to the version you purchased – with no hidden recurring costs.

Community

0€

Permanently free

  • ✓ Full local AI Chat
  • ✓ Optional Cloud AI Integration
  • ✓ Basic RAG (PDFs / Documents)
  • ✕ No Workflows
  • ✕ No Webserver API Access
macOS ~472 MB (Standalone Installer)
Windows ~2.9 GB (All-in-One Installer, with CUDA)
Recommended

Professional

390 €
Perpetual license for 1 host
Includes 1 year of updates & new features
  • ✓ All Community Features
  • ✓ True Headless Mode
  • ✓ Agentic Workflows
  • ✓ Text-To-Workflow
  • ✓ Approval Dashboard (HITL)
  • ✓ Audit-proof logs
  • ✓ Auto Recovery after crashes

Enterprise

1,499 €
Perpetual license for 1 host
Includes 1 year of updates & new features
Done-For-You Integration

You have the vision.
We are the AI architects.

Does your IT team lack the resources for LLM fine-tuning or building complex RAG vector databases? Our experts at EIDOSDynamics Consulting handle turnkey integration directly on your hardware.

Discover Consulting Services →
Hardware Sizing

Optimal architecture for Mac & CUDA.

RAG Integration

Connection of internal vector databases.

Workflow Dev

Custom MCP-Nodes & Agents.