Mastra gives you the building blocks for AI agents: tools, structured output, multi-agent workflows. But frameworks don't come with a place to run. Agentuity provides the deployment runtime, persistent state, and observability so you can focus on agent logic instead of infrastructure. Define your agents in Mastra, deploy them on Agentuity.
## The Integration Pattern
Wrap a Mastra Agent in Agentuity's createAgent(). Mastra runs the LLM calls and tool orchestration. Agentuity provides schemas, thread-isolated state, logging via ctx.logger, and deployment.
Create the Mastra agent with your model and instructions:
```ts
import { Agent } from '@mastra/core/agent';

// Mastra handles the agent logic and LLM interaction
const chatAgent = new Agent({
  id: 'chat-agent',
  name: 'Chat Agent',
  instructions: 'You are a helpful assistant with memory.',
  model: 'openai/gpt-5.4',
});
```

Then wrap it with Agentuity's createAgent() for deployment, state management, and observability:
```ts
import { createAgent } from '@agentuity/runtime';
import { s } from '@agentuity/schema';

// Agentuity handles deployment, schemas, and state
export default createAgent('chat', {
  schema: {
    input: s.object({ message: s.string() }),
    output: s.object({ response: s.string() }),
  },
  handler: async (ctx, { message }) => {
    // Load history from thread state, isolated per conversation
    const history =
      (await ctx.thread.state.get<{ role: string; content: string }[]>('messages')) ?? [];
    const result = await chatAgent.generate([
      ...history,
      { role: 'user', content: message },
    ]);
    // Persist with a 20-message sliding window
    await ctx.thread.state.push('messages', { role: 'user', content: message }, 20);
    await ctx.thread.state.push('messages', { role: 'assistant', content: result.text }, 20);
    return { response: result.text };
  },
});
```

Mastra handles the LLM interaction and tool orchestration. Agentuity provides per-conversation state via ctx.thread.state (with a sliding window to cap history size), structured logging, and deployment.
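The sliding-window behavior of the state push can be modeled in plain TypeScript. A minimal sketch, where windowedPush is a hypothetical stand-in for ctx.thread.state.push, not the Agentuity API:

```typescript
// Model of push-with-cap semantics: append an item, then keep only the
// most recent `max` entries. Hypothetical stand-in for ctx.thread.state.push.
function windowedPush<T>(list: T[], item: T, max: number): T[] {
  const next = [...list, item];
  return next.length > max ? next.slice(next.length - max) : next;
}

// Simulate 25 turns with a 20-message window
let messages: string[] = [];
for (let i = 1; i <= 25; i++) {
  messages = windowedPush(messages, `msg-${i}`, 20);
}
console.log(messages.length); // 20
console.log(messages[0]);     // "msg-6": the 5 oldest turns were dropped
```

The cap keeps prompt size bounded: old turns fall off the front as new ones arrive, so long-running conversations never grow the context without limit.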
When deployed on Agentuity, Mastra's openai/gpt-5.4 model string routes through the AI Gateway automatically. Set OPENAI_BASE_URL to point at the gateway, and Mastra uses it without code changes. See the gateway bridge section below.
## Tool Calling
Mastra's createTool() defines typed tools with Zod schemas. The agent calls them automatically based on the user's message. Agentuity wraps the agent for deployment and schema validation.
Define a tool with createTool() and attach it to a Mastra agent:
```ts
import { createTool } from '@mastra/core/tools';
import { Agent } from '@mastra/core/agent';
import { z } from 'zod';

// Zod schema for tool input, used by the LLM to understand parameters
const weatherTool = createTool({
  id: 'get-weather',
  description: 'Fetches current weather for a location',
  inputSchema: z.object({
    location: z.string().describe('City or location name'),
  }),
  execute: async ({ location }: { location: string }) => {
    const geo = await fetch(`https://geocoding-api.open-meteo.com/v1/search?name=${encodeURIComponent(location)}&count=1`);
    const { results } = await geo.json();
    const { latitude, longitude } = results[0];
    const weather = await fetch(`https://api.open-meteo.com/v1/forecast?latitude=${latitude}&longitude=${longitude}&current=temperature_2m`);
    const data = await weather.json();
    return `${location}: ${data.current.temperature_2m}°C`;
  },
});

const weatherAgent = new Agent({
  id: 'weather-agent',
  instructions: 'Use the get-weather tool when users ask about weather.',
  model: 'openai/gpt-5.4',
  tools: { weatherTool }, // Mastra handles the function calling loop
});
```

Then create the Agentuity handler that calls the agent and returns the result:
```ts
import { createAgent } from '@agentuity/runtime';
import { s } from '@agentuity/schema';

export default createAgent('weather', {
  schema: {
    input: s.object({ message: s.string() }),
    output: s.object({ response: s.string(), tokens: s.number() }),
  },
  handler: async (ctx, { message }) => {
    ctx.logger.info('Weather request', { message });
    const result = await weatherAgent.generate(message);
    const tokens = (result.usage?.inputTokens ?? 0) + (result.usage?.outputTokens ?? 0);
    return { response: result.text, tokens };
  },
});
```

- Mastra tools use createTool() with Zod schemas for parameter validation
- result.text contains the LLM's final response after all tool calls complete
- result.usage?.inputTokens and outputTokens track token consumption per request
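The function-calling loop that Mastra runs between the model and your tools can be sketched in isolation. This is a simplified model under stated assumptions, not Mastra internals; the model and tool here are stubs:

```typescript
type ModelReply = { toolCall?: { name: string; args: string }; text?: string };

// Stub tool registry; real Mastra tools carry Zod schemas and descriptions.
const tools: Record<string, (args: string) => Promise<string>> = {
  'get-weather': async (city) => `${city}: 18°C`,
};

// Simplified function-calling loop: ask the model, execute any requested
// tool, feed the result back, repeat until the model answers in plain text.
async function runAgent(
  model: (transcript: string[]) => Promise<ModelReply>,
  userMessage: string,
): Promise<string> {
  const transcript = [`user: ${userMessage}`];
  for (let step = 0; step < 5; step++) { // cap iterations defensively
    const reply = await model(transcript);
    if (reply.text) return reply.text;   // final answer, stop looping
    if (reply.toolCall) {
      const result = await tools[reply.toolCall.name](reply.toolCall.args);
      transcript.push(`tool: ${result}`); // feed tool output back to the model
    }
  }
  throw new Error('too many tool iterations');
}

// Scripted stub model: first requests the tool, then answers from its output.
const stubModel = async (t: string[]): Promise<ModelReply> =>
  t.some((line) => line.startsWith('tool:'))
    ? { text: t[t.length - 1].replace('tool: ', '') }
    : { toolCall: { name: 'get-weather', args: 'Lisbon' } };

runAgent(stubModel, 'Weather in Lisbon?').then(console.log); // "Lisbon: 18°C"
```

This is the loop hidden behind weatherAgent.generate(message): your handler makes one call, and Mastra iterates with the model until tool calls are resolved.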
## Structured Output
When you need the LLM to return data in a specific shape, use Mastra's structuredOutput with a Zod schema. The LLM response is validated and parsed before your code sees it.
Define the Zod schema that describes the expected output shape:
```ts
import { Agent } from '@mastra/core/agent';
import { z } from 'zod';

// Zod schema: validates the LLM's structured response
const DayPlanSchema = z.object({
  plan: z.array(z.object({
    name: z.string().describe('Time block name'),
    activities: z.array(z.object({
      name: z.string(),
      startTime: z.string().describe('HH:MM format'),
      endTime: z.string().describe('HH:MM format'),
      description: z.string(),
      priority: z.enum(['high', 'medium', 'low']),
    })),
  })),
  summary: z.string(),
});

const plannerAgent = new Agent({
  id: 'day-planner',
  instructions: 'Create structured daily plans from descriptions.',
  model: 'openai/gpt-5.4',
});
```

Then pass the schema to generate() via { structuredOutput: { schema } } and access the parsed result via result.object:
```ts
import { createAgent } from '@agentuity/runtime';
import { s } from '@agentuity/schema';

export default createAgent('day-planner', {
  schema: {
    input: s.object({ prompt: s.string() }),
    output: s.object({ plan: s.unknown(), summary: s.string() }),
  },
  handler: async (ctx, { prompt }) => {
    // structuredOutput returns a typed, validated object
    const result = await plannerAgent.generate(prompt, {
      structuredOutput: { schema: DayPlanSchema },
    });
    const plan = result.object; // typed as z.infer<typeof DayPlanSchema>
    ctx.logger.info('Plan generated', {
      blocks: plan?.plan.length,
      activities: plan?.plan.reduce((sum, b) => sum + b.activities.length, 0),
    });
    return { plan: plan?.plan, summary: plan?.summary ?? '' };
  },
});
```

Access the parsed output via result.object instead of result.text.
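To make the shape concrete, here is a hypothetical parsed plan object matching DayPlanSchema (the sample data is illustrative, not produced by the model), with the same counting logic the logger call uses:

```typescript
// Hypothetical parsed result.object, i.e. a z.infer<typeof DayPlanSchema> value
const plan = {
  plan: [
    {
      name: 'Morning',
      activities: [
        { name: 'Standup', startTime: '09:00', endTime: '09:15', description: 'Team sync', priority: 'high' as const },
        { name: 'Deep work', startTime: '09:30', endTime: '11:30', description: 'Feature work', priority: 'high' as const },
      ],
    },
    {
      name: 'Afternoon',
      activities: [
        { name: 'Review PRs', startTime: '14:00', endTime: '15:00', description: 'Code review', priority: 'medium' as const },
      ],
    },
  ],
  summary: 'A focused day with two deep-work blocks.',
};

// Count blocks and total activities across all blocks
const blocks = plan.plan.length;
const activities = plan.plan.reduce((sum, b) => sum + b.activities.length, 0);
console.log(blocks, activities); // 2 3
```

Because the schema validated the response before your handler sees it, nested fields like startTime and priority can be traversed without defensive checks.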
Use Zod for Mastra's structuredOutput (LLM validation) and @agentuity/schema for Agentuity's I/O layer (API validation). They serve different purposes: Zod tells the LLM what shape to return, while s validates HTTP request/response payloads.
## AI Gateway Bridge
Mastra uses standard OpenAI-compatible environment variables. To route LLM calls through the AI Gateway when deployed, add a gateway bridge file:
```ts
// Route Mastra's OpenAI calls through the Agentuity AI Gateway
if (!process.env.OPENAI_API_KEY && process.env.AGENTUITY_SDK_KEY) {
  const gw = process.env.AGENTUITY_AIGATEWAY_URL ?? 'https://catalyst.agentuity.cloud';
  process.env.OPENAI_API_KEY = process.env.AGENTUITY_SDK_KEY;
  process.env.OPENAI_BASE_URL = `${gw}/gateway/openai`;
}
```

Import this file at the top of your agent modules:

```ts
import '../lib/gateway';
```

When running locally with your own OpenAI key, the bridge is skipped. When deployed on Agentuity, LLM calls route through the gateway for unified billing and observability.
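The bridge's branching can be expressed as a pure function over the environment, which makes both scenarios easy to check. bridgeEnv is an illustrative helper, not part of either SDK:

```typescript
type Env = Record<string, string | undefined>;

// Mirror of the bridge logic: rewrite the OpenAI vars only when no local
// key is set and an Agentuity SDK key is available (i.e., when deployed).
function bridgeEnv(env: Env): Env {
  if (!env.OPENAI_API_KEY && env.AGENTUITY_SDK_KEY) {
    const gw = env.AGENTUITY_AIGATEWAY_URL ?? 'https://catalyst.agentuity.cloud';
    return {
      ...env,
      OPENAI_API_KEY: env.AGENTUITY_SDK_KEY,
      OPENAI_BASE_URL: `${gw}/gateway/openai`,
    };
  }
  return env; // local dev with your own key: untouched
}

// Local: your own key wins, no gateway rewrite
const local = bridgeEnv({ OPENAI_API_KEY: 'sk-local' });
console.log(local.OPENAI_BASE_URL); // undefined

// Deployed: the SDK key becomes the OpenAI key, base URL points at the gateway
const deployed = bridgeEnv({ AGENTUITY_SDK_KEY: 'agentuity-key' });
console.log(deployed.OPENAI_BASE_URL); // "https://catalyst.agentuity.cloud/gateway/openai"
```

Keeping the check on OPENAI_API_KEY first means an explicitly set local key always takes precedence, so the same code runs unchanged in both environments.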
## Full Examples
Each example is a complete project with agent code, React frontend, API routes, and evals:
| Example | Pattern | Source |
|---|---|---|
| Agent Memory | Conversation history with sliding window | agent-memory |
| Using Tools | Tool calling with real APIs | using-tools |
| Structured Output | Type-safe LLM responses with Zod | structured-output |
| Agent Approval | Human-in-the-loop tool approval | agent-approval |
| Network Agent | Multi-agent routing and delegation | network-agent |
| Network Approval | Approval flows in multi-agent networks | network-approval |
## Next Steps
- State Management: All state scopes (request, thread, global)
- AI Gateway: Provider configuration and supported models
- Evaluations: Test and validate agent outputs
- Chat with History: Same pattern using the Vercel AI SDK directly