The OpenAI Agents SDK handles tool calling, handoffs, and structured output. Agentuity gives those agents a place to run: deployment, persistent state, observability, and cloud services like storage and queues. The SDK manages the agent loop; Agentuity manages everything around it.
Tool Calling Agent
Define tools with the OpenAI Agents SDK tool() function and run them inside an Agentuity handler. The SDK manages the ReAct loop automatically.
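The loop `run()` performs can be pictured as: call the model, execute any tool it requests, feed the result back, and repeat until the model produces a final answer. A minimal sketch in plain TypeScript — the model and tool registry here are illustrative stand-ins, not SDK APIs:

```typescript
type ToolCall = { name: string; args: Record<string, unknown> };
type ModelTurn = { toolCall?: ToolCall; finalOutput?: string };

// Stand-in tool registry (the SDK builds this from your tool() definitions)
const tools: Record<string, (args: any) => Promise<string>> = {
  search: async ({ query }) => `Results for: ${query}`,
};

// Fake model for illustration: requests one tool call, then answers
let step = 0;
async function callModel(history: string[]): Promise<ModelTurn> {
  return step++ === 0
    ? { toolCall: { name: 'search', args: { query: 'agents' } } }
    : { finalOutput: `Answer based on: ${history[history.length - 1]}` };
}

async function reactLoop(input: string): Promise<string> {
  const history = [input];
  for (let i = 0; i < 10; i++) {                       // cap the loop
    const turn = await callModel(history);
    if (turn.finalOutput) return turn.finalOutput;     // model is done
    const result = await tools[turn.toolCall!.name](turn.toolCall!.args);
    history.push(result);                              // feed tool result back
  }
  throw new Error('max turns exceeded');
}
```

With the real SDK, all of this is internal to `run()`; you only define the tools.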
Define your tools using tool() with Zod parameter schemas:
```typescript
import { Agent, tool, setTracingDisabled } from '@openai/agents';
import { z } from 'zod';

// Agentuity provides its own observability
setTracingDisabled(true);

const search = tool({
  name: 'search',
  description: 'Search for information on any topic',
  parameters: z.object({
    query: z.string().describe('The search query'),
  }),
  execute: async ({ query }) => {
    return `Results for: ${query}`;
  },
});

const assistant = new Agent({
  name: 'Research Assistant',
  instructions: 'You are a helpful assistant. Be concise.',
  model: 'gpt-5.4',
  tools: [search],
});
```

Create an OpenAI agent and wrap it with `createAgent()`:
```typescript
import { createAgent } from '@agentuity/runtime';
import { s } from '@agentuity/schema';
import { run } from '@openai/agents';

export default createAgent('tool-calling', {
  description: 'OpenAI Agents SDK with function tools',
  schema: {
    input: s.object({ message: s.string() }),
    output: s.object({ response: s.string() }),
  },
  handler: async (ctx, { message }) => {
    ctx.logger.info('Running OpenAI agent', { message });
    // run() executes the full ReAct loop
    const result = await run(assistant, message);
    return {
      response: typeof result.finalOutput === 'string'
        ? result.finalOutput
        : 'No response generated',
    };
  },
});
```

- The OpenAI Agents SDK uses `parameters` for tool schemas, while LangChain uses `schema`
- `run(agent, input)` executes the full ReAct loop automatically, including all tool calls
- `result.finalOutput` contains the agent's final response; `result.newItems` contains the full execution trace
Call setTracingDisabled(true) when running on Agentuity. Agentuity's built-in observability captures traces, logs, and metrics automatically.
Agent Handoffs
Use the SDK's handoffs array and handoff() function to route requests between specialist agents. A triage agent decides which specialist handles each request.
```typescript
import { Agent, run, handoff, setTracingDisabled } from '@openai/agents';
import { z } from 'zod';

// ... tool definitions above (lookupInvoice, processRefund)

setTracingDisabled(true);

// Specialist agents with focused instructions
const billingAgent = new Agent({
  name: 'Billing Agent',
  instructions: 'Help with invoice lookups and payment status.',
  model: 'gpt-5.4',
  tools: [lookupInvoice],
});

const refundAgent = new Agent({
  name: 'Refund Agent',
  instructions: 'Process refund requests immediately.',
  model: 'gpt-5.4',
  tools: [processRefund],
});

// Typed escalation data for the refund handoff
const EscalationData = z.object({
  reason: z.string().describe('Why this is being escalated'),
});

// Triage agent routes to specialists via handoffs
const triageAgent = Agent.create({
  name: 'Triage Agent',
  instructions: `Route requests to the right specialist:
- Billing questions → Billing Agent
- Refund requests → Refund Agent (use escalate_to_refund)`,
  model: 'gpt-5.4',
  handoffs: [
    billingAgent, // basic handoff
    handoff(refundAgent, {
      onHandoff: (_ctx, input) => {
        // onHandoff runs at module scope, not inside the agent handler —
        // ctx.logger is not in scope here, so console.log is correct
        console.log('Refund escalation:', input?.reason);
      },
      inputType: EscalationData,
      toolNameOverride: 'escalate_to_refund',
    }),
  ],
});
```

Track which specialist handled the request using `result.lastAgent`:
```typescript
const result = await run(triageAgent, message);
const handledBy = result.lastAgent?.name ?? 'Triage Agent';
ctx.logger.info('Request routed', { handledBy });
```

The `handoff()` wrapper adds typed escalation data and a callback. After the run completes, `result.lastAgent.name` tells you which agent handled the request.
Use Agent.create() instead of new Agent() for the triage agent. This gives TypeScript full type inference across the handoff chain, including typed inputType schemas.
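Under the hood, each entry in `handoffs` is surfaced to the triage model as a transfer tool it can call. A rough sketch of the naming, assuming the SDK's `transfer_to_<agent_name>` default (the exact derivation is the SDK's, not reproduced here):

```typescript
// Illustrative only — not the SDK's internals.
type Specialist = { name: string };

function handoffToolName(agent: Specialist, override?: string): string {
  // Mirrors the assumed transfer_to_<agent_name> convention;
  // toolNameOverride (as with escalate_to_refund above) replaces it.
  return override ?? `transfer_to_${agent.name.toLowerCase().replace(/\s+/g, '_')}`;
}

console.log(handoffToolName({ name: 'Billing Agent' }));
// → transfer_to_billing_agent
console.log(handoffToolName({ name: 'Refund Agent' }, 'escalate_to_refund'));
// → escalate_to_refund
```

This is why the triage instructions can reference `escalate_to_refund` by name: the override controls the tool name the model sees.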
Structured Context
Use RunContext<T> to pass typed data to tools at runtime. The context is available inside tool execute functions but is never sent to the LLM. Pair it with outputType to get structured JSON output.
Define the context interface and a tool that reads from it:
```typescript
import { tool } from '@openai/agents';
import type { RunContext } from '@openai/agents';
import { z } from 'zod';

// Context type: available to tools, never sent to the LLM
interface UserInfo {
  name: string;
  uid: number;
  role: string;
}

const lookupContact = tool({
  name: 'lookup_contact',
  description: 'Look up a contact by name',
  parameters: z.object({ name: z.string() }),
  execute: async ({ name }, ctx?: RunContext<UserInfo>) => {
    // ctx.context holds typed data from the run() call
    const requester = ctx?.context.name ?? 'unknown';
    return `[Looked up by ${requester}] ${name}: alice@acme.com`;
  },
});
```

Create the agent with `outputType` and pass context at runtime via `run()`:
```typescript
import { Agent, run } from '@openai/agents';
import { z } from 'zod';

// Structured output schema: the LLM must return this exact shape
const ContactOutput = z.object({
  name: z.string(),
  email: z.string(),
  company: z.string(),
  summary: z.string(),
});

const assistant = new Agent<UserInfo, typeof ContactOutput>({
  name: 'Contact Finder',
  instructions: 'Look up contacts and return structured data.',
  model: 'gpt-5.4',
  tools: [lookupContact],
  outputType: ContactOutput,
});

// Pass context at runtime: tools receive it, LLM does not
const result = await run(assistant, 'Find Alice', {
  context: { name: 'Demo User', uid: 42, role: 'admin' },
});
// result.finalOutput is typed as z.infer<typeof ContactOutput>
```

- Context is passed at runtime via `run(agent, input, { context: data })` and is never sent to the LLM
- Tools receive context as the second parameter: `execute(args, ctx?: RunContext<T>)`
- `outputType` forces the agent to return structured JSON matching the Zod schema; `result.finalOutput` is fully typed
RunContext is for runtime data that tools need (user identity, permissions, session state). Use instructions for LLM behavior guidance. They serve different purposes: context stays server-side, instructions go to the model.
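Because context stays server-side, it is also a natural place for permission gating. A sketch in plain TypeScript — no SDK; `deleteContactExecute` is a hypothetical tool body following the `execute(args, ctx)` shape above:

```typescript
interface UserInfo {
  name: string;
  uid: number;
  role: string;
}

// Stand-in for RunContext<UserInfo>
type Ctx = { context: UserInfo };

async function deleteContactExecute(
  { name }: { name: string },
  ctx?: Ctx,
): Promise<string> {
  // The role lives in server-side context, never in the prompt,
  // so the model cannot talk its way past this check.
  if (ctx?.context.role !== 'admin') return 'Permission denied';
  return `Deleted ${name}`;
}
```

The same body dropped into a `tool({ ... execute })` definition gives you a tool the model can call but cannot authorize itself to use.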
Full Examples
Explore complete working examples for each pattern:
- Tool Calling: function tools, model config, ReAct loop
- Agent Handoffs: triage routing, typed escalation, `lastAgent` tracking
- Structured Context: `RunContext<T>` for typed tool context, `outputType` for structured JSON
- Streaming Events: `stream: true` with real-time event timeline
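For the streaming pattern, the consumption loop looks roughly like this; the event names and shapes below are simplified stand-ins for the SDK's real stream events, and `fakeStream` simulates what `run(agent, input, { stream: true })` would yield:

```typescript
type StreamEvent =
  | { type: 'tool_called'; name: string }
  | { type: 'output_text_delta'; delta: string }
  | { type: 'done' };

// Simulated stream standing in for the SDK's streamed run result
async function* fakeStream(): AsyncGenerator<StreamEvent> {
  yield { type: 'tool_called', name: 'search' };
  yield { type: 'output_text_delta', delta: 'Hello' };
  yield { type: 'output_text_delta', delta: ' world' };
  yield { type: 'done' };
}

async function collectTimeline(stream: AsyncIterable<StreamEvent>) {
  const timeline: string[] = [];
  let text = '';
  for await (const event of stream) {
    timeline.push(event.type);                         // event timeline
    if (event.type === 'output_text_delta') text += event.delta;
  }
  return { timeline, text };
}
```

The same `for await` loop works against a real streamed run result, with the timeline feeding whatever UI or log you render events into.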
Next Steps
- Creating Agents: Agentuity agent patterns and schemas
- AI Gateway: Managed model credentials across providers
- Tracing: Observability that replaces OpenAI's built-in tracing