AI orchestration built to ship fast and scale hard.
Code AI workflows fast.
Run production AI without compromises.
All nodes run in Go, with Go and Rust streaming workers on Kubernetes. Client and user isolation is built in. Per-user memory stays persistent. Workflows can mutate live when performance shifts. Each node autoscales in isolation. From first workflow to enterprise load, one runtime.
flowchart TD
A[Progress:40:processing] --> B
B{{OpenAI:chat}} *model=claude-4.6* --> C
C[Progress:80:post_processing] --> D
D((Notify:correction)) *e2ee=auto*
Built different. By design.
Every feature is a structural advantage. Not a checkbox.
Mermaid DSL
Define entire AI pipelines in human-readable syntax. LLM-native, trivially mutable. An AI can generate, modify, and reason about workflows as easily as reading a sentence.
B{{Agent:ProcessText}} *model=@best*
Per-node Kubernetes scaling
Your 2ms transform and 5-second LLM inference don't share the same scaling policy. Each node gets its own HPA, its own resource profile. 1 pod for transforms. 8 for inference bursts.
Node A: 1 replica -> Node B: 8 replicas
Zero-knowledge encryption
ChaCha20-Poly1305 AEAD with one flag. The encryption key is never stored by ionun and is provided externally at runtime. Even if storage is compromised, user data stays unreadable.
D((Notify:result)) *e2ee=auto*
Per-user persistent memory
Every user gets dedicated fast memory: isolated, hot-cached, encrypted at rest. Workflows remember conversation history, preferences, accumulated context across sessions.
Redis-backed · zero-knowledge · auto-tiered
Adaptive workflows
When a node underperforms (2x latency, rising error rate), ionun can branch, swap, alert, or predict. Autonomous orchestration: workflows that heal and optimize themselves.
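The 2x-latency trigger can be sketched as a simple rule. This is an illustrative sketch, not ionun's implementation; the function and parameter names are hypothetical:

```python
from collections import deque

def should_failover(latencies, baseline, window=5, threshold=2.0):
    """Hypothetical degradation rule: trigger failover when the average
    of the last `window` latencies exceeds `threshold` x the node's baseline."""
    recent = list(latencies)[-window:]
    if not recent:
        return False
    return sum(recent) / len(recent) > threshold * baseline

# A node with a 100ms baseline that starts taking ~250ms per call:
history = deque([95, 110, 240, 260, 255, 250, 245], maxlen=50)
print(should_failover(history, baseline=100))  # True: recent average ~2.5x baseline
```

A real engine would combine this with error-rate tracking and hysteresis so a single slow call doesn't flap the route.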
model=@best -> auto-failover on degradation
Native security DSL
Credentials injected from secret stores at runtime. Multi-tenant isolation at the database level. Client auth via API key + short-lived JWT. Security in the syntax, not as an afterthought.
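The "short-lived JWT" part boils down to rejecting tokens whose expiry has passed. A minimal, generic HS256 sketch of that check, not ionun's actual auth scheme, with all names illustrative:

```python
import base64, hashlib, hmac, json, time

def b64url(data: bytes) -> bytes:
    # JWT uses unpadded base64url encoding.
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def make_jwt(secret: bytes, claims: dict) -> str:
    # Minimal HS256 JWT for illustration only.
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = header + b"." + payload
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return (signing_input + b"." + sig).decode()

def verify_jwt(secret: bytes, token: str) -> bool:
    head, payload, sig = token.split(".")
    signing_input = (head + "." + payload).encode()
    expected = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest()).decode()
    if not hmac.compare_digest(sig, expected):
        return False  # signature mismatch
    pad = payload + "=" * (-len(payload) % 4)
    claims = json.loads(base64.urlsafe_b64decode(pad))
    return claims.get("exp", 0) > time.time()  # short-lived: reject expired tokens

token = make_jwt(b"secret", {"sub": "client-1", "exp": time.time() + 300})
print(verify_jwt(b"secret", token))  # True
```

In production you would use a vetted JWT library rather than hand-rolling this, but the expiry check is the essence of "short-lived".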
e2ee=auto · secret_store=vault · jwt=true
Write workflows in Mermaid.
No drag-and-drop builder. No JSON walls. Just syntax that humans and LLMs understand natively.
flowchart TD
A{SSE:ANALYSIS_STARTED} *data=$analysisId* --> B
B(Transform:AssembleText) --> C
C{{Agent:ProcessText}} *model=claude-4.6 timeout=3m input=$text* --> D
D[(Fetch:GetUserData)] --> E
E{{Agent:MatchResults}} *model=@best input=$C.result,$D.data* --> F
F{SSE:ANALYSIS_COMPLETED}

This workflow:
- Emits a real-time SSE event
- Assembles input data
- Runs an AI agent on the text
- Fetches user data from your backend
- Runs a second AI agent cross-referencing both outputs
- Signals completion via SSE
AI products need memory,
not just execution.
ionun gives every user their own persistent memory: fast, isolated, hot-cached, and encrypted at rest. The encryption key is never stored by ionun.
When your user comes back tomorrow, ionun already knows them. This turns a stateless pipeline runner into a runtime for intelligent products.
- Isolated per client: each product has its own namespace
- Per-user fast state: hot Redis-backed memory per user
- Encrypted at rest: the key is never stored by ionun
- Zero-knowledge: the key is provided externally at runtime
- Auto-tiered: hot to cold storage automatically
- Context-aware: history, preferences, accumulated data
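Conceptually, the per-client and per-user isolation above amounts to namespacing every memory key. An illustrative sketch with an in-memory dict standing in for Redis; the key scheme is an assumption, not ionun's actual layout:

```python
store = {}  # stands in for Redis in this sketch

def memory_key(client_id: str, user_id: str, field: str) -> str:
    # Hypothetical key scheme: one namespace per client, then per user.
    return f"{client_id}:user:{user_id}:{field}"

def remember(client_id, user_id, field, value):
    store[memory_key(client_id, user_id, field)] = value

def recall(client_id, user_id, field):
    return store.get(memory_key(client_id, user_id, field))

remember("acme", "u42", "preferences", {"tone": "formal"})
print(recall("acme", "u42", "preferences"))   # {'tone': 'formal'}
print(recall("other", "u42", "preferences"))  # None: namespaces are isolated
```

Because every lookup carries the client and user prefix, one product's workflows can never read another's state.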
Scaling that matches reality.
Your AI pipeline has a 2ms data transform and a 5-second LLM inference in the same workflow. They don't need the same resources. They shouldn't share the same scaling policy.
ionun scales each node independently on Kubernetes with separate replicas, separate HPAs, separate resource profiles. No other orchestration engine does this.
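On Kubernetes, "separate HPAs per node" could look something like the following sketch. The Deployment names, replica counts, and utilization targets are illustrative, not ionun's generated manifests:

```yaml
# Two HPAs with different policies for two nodes of the same workflow,
# assuming each node runs as its own Deployment.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: node-transform-hpa        # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: node-transform
  minReplicas: 1
  maxReplicas: 2                  # cheap 2ms transform: barely needs to scale
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: node-inference-hpa        # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: node-inference
  minReplicas: 1
  maxReplicas: 8                  # 5-second LLM calls burst to 8 pods
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
```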
400+ nodes. 60+ categories.
Everything you need, ready to wire into any workflow.
Enterprise AI Infrastructure
7 microservices. Go backend. Svelte dashboard. PostgreSQL. Redis. Kubernetes.
flowchart TD
A[(Fetch:UserContext)] *input=$userId* --> B
B{{Agent:Understand}} *model=@fastest input=$A.context,$message* --> C
C{Switch:intent} *mode=rules*
C --> |query| D{{Agent:Retrieve}} *model=claude-4.6*
C --> |action| E{{Agent:Execute}} *model=@best*
C --> |memory| F(Transform:UpdateMemory)
D --> G((Return:response))
E --> G
F --> G

Roadmap
Solid foundation. Ambitious future.
Ready in minutes.
Execute workflows via REST API. Use named workflows or send raw Mermaid directly.
curl -X POST https://gateway.ionun.com/api/workflows/execute \
-H "Authorization: Bearer <token>" \
-H "Content-Type: application/json" \
-d '{
"workflow_name": "textcorrection",
"input": {
"text_content": "Fix ths sentance plz"
}
}'
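For raw Mermaid execution, the request body carries the workflow definition inline instead of a name. A hedged sketch of what that body might look like; the `mermaid` field name is an assumption, so check the API reference for the actual schema:

```json
{
  "mermaid": "flowchart TD\n  A(Transform:AssembleText) --> B\n  B{{Agent:ProcessText}} *model=@best*",
  "input": {
    "text_content": "Fix ths sentance plz"
  }
}
```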