Learn how to design, build, and optimize complex multi-agent workflows using MotteAF's visual interface and powerful orchestration engine.
Workflows in Motte are structured sequences of operations that coordinate multiple AI agents, tools, and data processing steps to accomplish complex tasks. Think of them as blueprints that define how different components work together to solve problems that require multiple steps or perspectives.
Nodes: Individual processing units that perform specific tasks, such as input handling, LLM agents, and memory queries.
Edges: Connections that define data flow and execution order between nodes; edges can also carry conditions for branching and retries.
Context: Shared state and variables that flow through the workflow, allowing nodes to access and modify data from previous steps.
Triggers: Events that initiate workflow execution, such as API calls, schedules, or external webhooks.
Input → Classifier → Processor → Validator → Output
The simplest pattern where each step depends on the previous one. Perfect for document processing, content generation, and data transformation pipelines.
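The sequential chain can be sketched in plain Python; the stage functions below (`classify`, `process`, `validate`) are hypothetical stand-ins for workflow nodes, not MotteAF APIs:

```python
# Minimal sketch of a sequential pipeline: each step consumes the
# previous step's output. All functions here are illustrative placeholders.
def classify(text: str) -> dict:
    return {"text": text, "category": "general"}  # stand-in for an LLM classifier

def process(item: dict) -> dict:
    item["processed"] = item["text"].strip().lower()  # stand-in transformation
    return item

def validate(item: dict) -> dict:
    item["valid"] = len(item["processed"]) > 0  # stand-in quality gate
    return item

def run_pipeline(text: str) -> dict:
    # Input -> Classifier -> Processor -> Validator -> Output
    result = classify(text)
    result = process(result)
    return validate(result)
```

Each node sees only its predecessor's output, which is what makes this pattern easy to reason about and debug.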
Input → Split → [Agent A, Agent B, Agent C] → Merge → Output
Execute multiple independent tasks simultaneously to reduce overall processing time and gather diverse perspectives.
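One way to express this fan-out/fan-in shape in plain Python is with `asyncio.gather`; the `agent` coroutine below is a hypothetical stand-in for a model call, not a MotteAF API:

```python
import asyncio

# Minimal sketch of parallel fan-out/fan-in: three independent agents run
# concurrently, then their outputs are merged into one list.
async def agent(name: str, text: str) -> str:
    await asyncio.sleep(0)  # placeholder for a real model or API call
    return f"{name}:{text}"

async def run_parallel(text: str) -> list[str]:
    # Input -> Split -> [Agent A, Agent B, Agent C] -> Merge -> Output
    results = await asyncio.gather(
        agent("A", text), agent("B", text), agent("C", text)
    )
    return list(results)  # gather preserves argument order
```

Because the three agents are independent, total latency is roughly that of the slowest agent rather than the sum of all three.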
Input → Decision Node → [Route A | Route B | Route C] → Output
Dynamic path selection based on input characteristics, user preferences, or business rules.
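A decision node amounts to a dispatch on some input attribute; the sketch below uses a plain dictionary of hypothetical route handlers (not a MotteAF API), with a default route as the fallback rule:

```python
# Minimal sketch of conditional routing: pick an execution path from an
# input attribute. All handlers are illustrative placeholders.
def route_technical(message: str) -> str:
    return "escalate to engineering"

def route_billing(message: str) -> str:
    return "forward to billing"

def route_general(message: str) -> str:
    return "send standard reply"

ROUTES = {
    "technical": route_technical,
    "billing": route_billing,
    "general": route_general,
}

def decide(category: str, message: str) -> str:
    # Input -> Decision Node -> [Route A | Route B | Route C] -> Output
    handler = ROUTES.get(category, route_general)  # unknown categories fall back
    return handler(message)
```

Keeping a default route means a misclassified input still produces a sensible output instead of an error.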
Input → Processor → Validator → [Continue | Retry] → Output
Iterative refinement with quality checks and retry logic for improved results.
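The loop pattern can be sketched as a bounded retry around a quality check; `generate` and `score` below are hypothetical stand-ins (in the JSON example later in this page, the equivalent check is an LLM scoring node with a retry edge):

```python
# Minimal sketch of a loop workflow: regenerate until the quality check
# passes or attempts run out. Both functions are illustrative placeholders.
def generate(text: str, attempt: int) -> str:
    return text * (attempt + 1)  # pretend each retry produces a longer draft

def score(draft: str) -> int:
    return min(10, len(draft))  # stand-in for an LLM-based quality rating

def run_with_retries(text: str, threshold: int = 7, max_attempts: int = 3) -> str:
    # Input -> Processor -> Validator -> [Continue | Retry] -> Output
    draft = ""
    for attempt in range(max_attempts):
        draft = generate(text, attempt)
        if score(draft) >= threshold:
            break  # Continue: quality check passed
    return draft  # best effort after max_attempts
```

The attempt cap is important: without it, a check that never passes would loop forever and burn tokens.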
1. Define the objective: Clearly articulate what you want to accomplish and break it down into discrete steps that can be automated.
2. Identify the components: Determine which AI models, tools, and data sources you'll need, and which external APIs or services are required.
3. Design the flow: Map out the sequence of operations, decision points, and data transformations using MotteAF's visual designer.
4. Test and iterate: Start with simple test cases, monitor performance, and gradually add complexity while optimizing for reliability and efficiency.
Always include error handling nodes and fallback paths. Design for graceful degradation when components fail.
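The fallback advice above can be sketched as a try/except around the primary node; `primary_model` and `fallback_model` are hypothetical stand-ins, and the caught exception types are illustrative:

```python
# Minimal sketch of graceful degradation: if the primary node fails,
# fall back instead of aborting the whole workflow.
def primary_model(text: str) -> str:
    raise TimeoutError("model unavailable")  # simulate a component failure

def fallback_model(text: str) -> str:
    return f"[fallback] {text}"  # cheaper or cached alternative

def resilient_step(text: str) -> str:
    try:
        return primary_model(text)
    except (TimeoutError, ConnectionError):
        return fallback_model(text)  # graceful degradation path
```

In a real workflow you would also log the failure so the degraded path is visible in monitoring rather than silent.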
Add monitoring nodes at key points to track performance, costs, and quality metrics throughout your workflow.
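A monitoring hook can be as simple as a decorator that records per-node latency; everything below is an illustrative sketch, not a MotteAF API:

```python
import time
from functools import wraps

# Minimal sketch of a monitoring node: wrap any step to record how long
# each call takes, keyed by a node name.
METRICS: dict[str, list[float]] = {}

def monitored(name: str):
    """Decorator that appends each call's duration (seconds) to METRICS[name]."""
    def wrap(fn):
        @wraps(fn)
        def inner(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                METRICS.setdefault(name, []).append(time.perf_counter() - start)
        return inner
    return wrap

@monitored("classifier")
def classify(text: str) -> str:
    return "general"  # hypothetical node body
```

The same wrapper shape extends naturally to counting tokens or estimating cost per node.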
Create reusable sub-workflows for common operations. This makes maintenance easier and promotes consistency.
Use parallel processing where possible, implement caching for expensive operations, and optimize API calls to reduce latency.
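For the caching half of that advice, Python's `functools.lru_cache` is one simple option; `expensive_lookup` is a hypothetical stand-in for a costly API or model call:

```python
from functools import lru_cache

# Minimal sketch of caching an expensive operation so repeated inputs
# skip the costly call. CALLS tracks how many real calls were made.
CALLS = {"count": 0}

@lru_cache(maxsize=1024)
def expensive_lookup(query: str) -> str:
    CALLS["count"] += 1  # only incremented on a cache miss
    return query.upper()  # stand-in for an API or model call
```

A bounded `maxsize` keeps memory predictable; for LLM calls, cache keys usually need to include the prompt and any parameters that affect the output.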
```json
{
  "name": "Customer Support Automation",
  "description": "Intelligent ticket routing and response generation",
  "nodes": [
    {
      "id": "intake",
      "type": "input",
      "config": {
        "schema": {
          "message": "string",
          "customer_id": "string",
          "priority": "enum[low,medium,high,urgent]"
        }
      }
    },
    {
      "id": "classifier",
      "type": "llm_agent",
      "config": {
        "model": "gpt-4",
        "prompt": "Classify this support request: {{intake.message}}",
        "output_schema": {
          "category": "enum[technical,billing,general]",
          "urgency": "enum[low,medium,high]",
          "sentiment": "enum[positive,neutral,negative]"
        }
      }
    },
    {
      "id": "memory_search",
      "type": "memory_query",
      "config": {
        "query": "{{intake.message}}",
        "limit": 5,
        "threshold": 0.7
      }
    },
    {
      "id": "response_generator",
      "type": "llm_agent",
      "config": {
        "model": "gpt-4",
        "prompt": "Generate a helpful response based on:\nCustomer message: {{intake.message}}\nCategory: {{classifier.category}}\nRelevant knowledge: {{memory_search.results}}",
        "temperature": 0.7
      }
    },
    {
      "id": "quality_check",
      "type": "llm_agent",
      "config": {
        "model": "gpt-3.5-turbo",
        "prompt": "Rate this response quality (1-10): {{response_generator.output}}",
        "output_schema": {
          "score": "number",
          "feedback": "string"
        }
      }
    }
  ],
  "edges": [
    { "from": "intake", "to": "classifier" },
    { "from": "intake", "to": "memory_search" },
    { "from": "classifier", "to": "response_generator" },
    { "from": "memory_search", "to": "response_generator" },
    { "from": "response_generator", "to": "quality_check" },
    {
      "from": "quality_check",
      "to": "response_generator",
      "condition": "quality_check.score < 7",
      "type": "retry"
    }
  ]
}
```