This guide walks you through building a chat agent from scratch using OpenAI.
When you create a project with `agentuity project create`, you get a translation agent demonstrating:
- AI Gateway: OpenAI SDK routed through Agentuity (unified billing, no separate API keys)
- Thread state: Persistent translation history across requests
- Structured logging: Observability via `ctx.logger`
- Tailwind CSS: Pre-configured styling
- Workbench: Local testing UI at `/workbench`
## 1. Create the Agent
Create `src/agent/chat/agent.ts`:

```typescript
import { createAgent } from '@agentuity/runtime';
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { s } from '@agentuity/schema';

const agent = createAgent('Chat', {
  schema: {
    input: s.object({ message: s.string() }),
    output: s.object({ response: s.string() }),
  },
  handler: async (ctx, input) => {
    ctx.logger.info('Received message', { message: input.message });

    const { text } = await generateText({
      model: openai('gpt-5.4-nano'),
      prompt: input.message,
    });

    return { response: text };
  },
});

export default agent;
```

## 2. Register the Agent
Add the agent to the registry in `src/agent/index.ts`:

```typescript
import chat from './chat/agent';

export default [chat];
```

## 3. Add a Route
Create `src/api/index.ts`:

```typescript
import { Hono } from 'hono';
import type { Env } from '@agentuity/runtime';
import chat from '@agent/chat/agent';

const api = new Hono<Env>()
  .post('/chat', chat.validator(), async (c) => {
    const data = c.req.valid('json');
    return c.json(await chat.run(data));
  });

export type ApiRouter = typeof api;
export default api;
```

Wire the route and your agent into the project-root `app.ts`:
```typescript
import { createApp } from '@agentuity/runtime';
import api from './src/api/index';
import agents from './src/agent';

export default await createApp({
  router: { path: '/api', router: api },
  agents,
});
```

## 4. Run Your Agent
Start the dev server:

```bash
agentuity dev
# or: bun run dev
```

Test your agent via `curl`:

```bash
curl -X POST http://localhost:3500/api/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "What is the capital of France?"}'
```

Response:

```json
{
  "response": "The capital of France is Paris."
}
```

## 5. Test Locally with Workbench
Workbench is a built-in UI for testing agents directly, without going through your frontend. The default template comes with Workbench pre-configured in `app.ts`:

```typescript
export default await createApp({
  router: { path: '/api', router: api },
  agents,
  workbench: { route: '/workbench' },
});
```

Open http://localhost:3500/workbench to test your agents with raw JSON input. See Testing with Workbench for adding test prompts and configuration options.
## 6. Add a Frontend
Add a React UI in `src/web/` to call your agent. Use Hono's `hc()` client for type-safe API calls:

```tsx
import { hc } from 'hono/client';
import type { ApiRouter } from '../api';
import { useState } from 'react';

const client = hc<ApiRouter>('/api');

export function App() {
  const [message, setMessage] = useState('');
  const [response, setResponse] = useState('');
  const [isLoading, setIsLoading] = useState(false);

  const handleSend = async () => {
    setIsLoading(true);
    try {
      const res = await client.chat.$post({ json: { message } });
      const data = await res.json();
      setResponse(data.response);
    } finally {
      setIsLoading(false);
    }
  };

  return (
    <div>
      <input
        value={message}
        onChange={(e) => setMessage(e.target.value)}
        placeholder="Ask something..."
        disabled={isLoading}
      />
      <button onClick={handleSend} disabled={isLoading}>
        {isLoading ? 'Thinking...' : 'Send'}
      </button>
      {response && <p>{response}</p>}
    </div>
  );
}
```

The `hc()` client infers types from your router, so `data.response` is fully typed.
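The component above parses the body unconditionally; in practice you may also want to check the HTTP status before reading it. Here is a minimal sketch of that pattern — the `readChatResponse` helper and its argument shape are illustrative (a subset of fetch's `Response`), not part of Hono or Agentuity:

```typescript
type ChatResponse = { response: string };

// Hypothetical helper: reject on non-OK status so HTTP errors surface in the
// UI instead of failing inside res.json() or rendering an empty response.
export async function readChatResponse(res: {
  ok: boolean;
  status: number;
  json(): Promise<unknown>;
}): Promise<ChatResponse> {
  if (!res.ok) {
    throw new Error(`chat request failed with status ${res.status}`);
  }
  // The route returns the agent's output schema, so this cast mirrors
  // what hc() infers for you.
  return (await res.json()) as ChatResponse;
}
```

In `handleSend`, you would wrap the call as `readChatResponse(await client.chat.$post({ json: { message } }))` and show the thrown error in state, rather than leaving the UI blank on a failed request.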
## 7. Add Streaming
For real-time responses, return a stream instead:

```typescript
import { createAgent } from '@agentuity/runtime';
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { s } from '@agentuity/schema';

const agent = createAgent('Chat', {
  schema: {
    input: s.object({ message: s.string() }),
    stream: true,
  },
  handler: async (ctx, input) => {
    const { textStream } = streamText({
      model: openai('gpt-5.4-nano'),
      prompt: input.message,
    });

    return textStream;
  },
});

export default agent;
```

Update the route to use the `stream()` handler:
```typescript
import { Hono } from 'hono';
import type { Env } from '@agentuity/runtime';
import { stream } from '@agentuity/runtime';
import chat from '@agent/chat/agent';

const api = new Hono<Env>()
  .post('/chat', chat.validator(), stream(async (c) => {
    const data = c.req.valid('json');
    return chat.run(data);
  }));

export type ApiRouter = typeof api;
export default api;
```

## 8. Deploy
When you're ready, deploy to Agentuity:

```bash
agentuity deploy
# or: bun run deploy
```

Your agent is now live with a public URL. View deployments, logs, and more in the Web App.
After your first deployment, the App populates with:
- Agents: Your deployed agents with their endpoints
- Routes: Registered HTTP, cron, and other routes
- Sessions: Request history and logs as traffic flows
- Deployments: Version history with rollback options
## What's Next?
Install the OpenCode plugin for AI-assisted agent development. Get help writing agents, debugging, and deploying directly from your editor:

```bash
agentuity ai opencode install
```

Learn the concepts:
- Understanding How Agents Work: Tools, loops, and autonomous behavior
Build something more:
- Build a multi-agent system: Routing, RAG, workflows
- Persist data: Use thread and session state
- RPC Client: Type-safe `hc()` patterns for browser and server code
Understand the platform:
- Project Structure: Agents, routes, and frontend
- App Configuration: Configure your project
- Local Development: Dev server, hot reload, and debugging