How it works
Write workflows in Mermaid.
No drag-and-drop builder. No JSON walls. Just syntax that humans and LLMs understand natively.
```mermaid
flowchart TD
A{SSE:ANALYSIS_STARTED} *data=$analysisId* --> B
B(Transform:AssembleText) --> C
C{{Agent:ProcessText}} *model=claude-4.6 timeout=3m input=$text* --> D
D[(Fetch:GetUserData)] --> E
E{{Agent:MatchResults}} *model=@best input=$C.result,$D.data* --> F
F{SSE:ANALYSIS_COMPLETED}
```

This pipeline:
- Emits a real-time SSE event
- Assembles input data
- Runs an AI agent on the text
- Fetches user data from your backend
- Runs a second AI agent cross-referencing both outputs
- Signals completion via SSE
6 lines. Two AI agents. Real-time streaming. Cross-node data flow.
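The `SSE:ANALYSIS_STARTED` and `SSE:ANALYSIS_COMPLETED` nodes emit standard Server-Sent Events, so any SSE-capable client can consume them. A minimal sketch of the client side in Python, parsing the `text/event-stream` wire format by hand (the event names and `analysisId` field come from the diagram; the stream contents are illustrative, not the platform's actual payloads):

```python
def parse_sse(stream):
    """Parse a Server-Sent Events stream into (event, data) pairs.

    Follows the SSE wire format: 'event:' and 'data:' field lines,
    with a blank line terminating each event.
    """
    event, data = "message", []
    for raw in stream:
        line = raw.rstrip("\n")
        if line == "":
            if data:
                yield event, "\n".join(data)
            event, data = "message", []
        elif line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())

# Illustrative stream, shaped like the pipeline's entry and exit events
stream = [
    "event: ANALYSIS_STARTED\n",
    'data: {"analysisId": "a-123"}\n',
    "\n",
    "event: ANALYSIS_COMPLETED\n",
    'data: {"analysisId": "a-123"}\n',
    "\n",
]
events = list(parse_sse(stream))
```

In a real client you would read the lines from an HTTP response instead of a list; the parsing logic stays the same.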
Enterprise Scale
A robust engine for your operations
Intelligent orchestration that handles massive loads without breaking a sweat.
Business Apps (SDK / API)
↓
Gateway (REST · SSE · WebSocket · Auth · E2EE)
↓
ionun Core (Execution Engine · Context Memory · PostgreSQL)
↓
Workers (Go · AI · Data · Optim)
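The request path through these layers can be sketched as a toy dispatcher: the gateway authenticates, the core engine resolves each node to a worker pool, and a worker executes it. All names here (`WORKER_POOLS`, the pool labels, the request shape) are illustrative assumptions, not the platform's internals:

```python
# Illustrative only: a toy model of the layered request path.
WORKER_POOLS = {
    "Agent": "ai-workers",       # assumed mapping from node type
    "Fetch": "data-workers",     # to a worker pool; the real engine's
    "Transform": "go-workers",   # routing is not documented here
}

def gateway(request):
    """Gateway layer: authenticate, then hand off to the core engine."""
    if not request.get("token"):
        raise PermissionError("unauthenticated")
    return core_execute(request["node"])

def core_execute(node):
    """Core layer: resolve the node type and dispatch to a worker pool."""
    node_type = node.split(":", 1)[0]  # e.g. "Agent:ProcessText" -> "Agent"
    pool = WORKER_POOLS.get(node_type)
    if pool is None:
        raise ValueError(f"no worker pool for node type {node_type!r}")
    return {"node": node, "pool": pool}

result = gateway({"token": "t-1", "node": "Agent:ProcessText"})
```

The point of the layering is that each tier only knows its neighbor: apps talk to the gateway, the gateway to the core, the core to whichever worker pool matches the node.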
```mermaid
flowchart TD
A[(Fetch:UserContext)] *input=$userId* --> B
B{{Agent:Understand}} *model=@fastest input=$A.context,$message* --> C
C{Switch:intent} *mode=rules*
C --> |query| D{{Agent:Retrieve}} *model=claude-4.6*
C --> |action| E{{Agent:Execute}} *model=@best*
C --> |memory| F(Transform:UpdateMemory)
D --> G((Return:response))
E --> G
F --> G
```

What this gives enterprises:
- Full agent orchestration: multi-step reasoning, tool use, memory management
- Per-user memory: each user has isolated persistent context across sessions
- Absolute sovereignty: 100% European hosting and design
- Automatic failover: if a model provider degrades, traffic shifts in real time
- Infinite scaling: dynamically handles load spikes without manual intervention
- Compliance: E2EE on every payload, zero-knowledge architecture
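The `Switch:intent` node in the chat pipeline above runs in rules mode: the classified intent picks exactly one branch. A minimal sketch of that style of rule-based dispatch, using the three branch targets from the diagram (the function itself is a hypothetical stand-in for the engine's switch node):

```python
def route_intent(intent):
    """Rule-based switch: map a classified intent to the branch handling it.

    Mirrors the Switch:intent node's three branches:
    query -> Agent:Retrieve, action -> Agent:Execute,
    memory -> Transform:UpdateMemory.
    """
    branches = {
        "query": "Agent:Retrieve",
        "action": "Agent:Execute",
        "memory": "Transform:UpdateMemory",
    }
    target = branches.get(intent)
    if target is None:
        raise ValueError(f"unknown intent: {intent!r}")
    return target

# e.g. a message classified as an action goes to the Execute agent
next_node = route_intent("action")
```

Rules mode keeps the branch decision deterministic and auditable; the upstream `Agent:Understand` node supplies the intent label it switches on.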
One platform to run your entire AI and business infrastructure. Not five tools duct-taped together.