TemperStack
Intermediate · 12 min read · Updated Mar 18, 2026

How to implement structured AI outputs on Vercel

Quick Answer

Implement structured AI outputs on Vercel by creating API routes with schema validation, using Vercel's Edge Runtime for optimal performance, and configuring proper response formatting. Deploy using Vercel CLI or GitHub integration for seamless CI/CD.

Prerequisites

  1. Basic knowledge of Next.js or React
  2. Vercel account and CLI installed
  3. Understanding of JSON schemas
  4. Experience with API routes
Step 1: Set up your Next.js project with Vercel configuration

Create a new Next.js project or navigate to your existing one. Install the required dependencies with npm install zod openai. To run a route on Vercel's Edge Runtime, opt in from the route file itself rather than vercel.json (the functions.runtime field in vercel.json is for Node.js versions and community runtimes, not the Edge Runtime):
// pages/api/ai/generate.js (Pages Router)
export const config = { runtime: 'edge' };
Or, with the App Router:
// app/api/ai/generate/route.js
export const runtime = 'edge';
This runs your AI endpoints on Vercel's Edge Runtime for better performance.
Tip
Use Edge Runtime for AI functions as it provides faster cold starts and better global distribution.
Step 2: Create structured output schemas using Zod

Create a schemas directory and define your output structures. For example, create schemas/aiResponse.js:
import { z } from 'zod';

export const ProductSchema = z.object({
  name: z.string(),
  description: z.string(),
  price: z.number().positive(),
  category: z.enum(['electronics', 'clothing', 'books']),
  tags: z.array(z.string()).optional()
});

export const AIResponseSchema = z.object({
  success: z.boolean(),
  data: ProductSchema,
  timestamp: z.string()
});
This creates type-safe schemas for validating AI outputs.
Tip
Define multiple schemas for different use cases to maintain flexibility while ensuring structure.
Step 3: Build the AI API endpoint with structured validation

Create pages/api/ai/generate.js (or app/api/ai/generate/route.js for App Router):
import OpenAI from 'openai';
import { AIResponseSchema } from '../../../schemas/aiResponse';

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY
});

export default async function handler(req, res) {
  if (req.method !== 'POST') {
    return res.status(405).json({ success: false, error: 'Method not allowed' });
  }
  try {
    const completion = await openai.chat.completions.create({
      // JSON mode requires a model that supports response_format,
      // e.g. gpt-4o (plain gpt-4 does not)
      model: 'gpt-4o',
      messages: [{
        role: 'system',
        content: 'Return only valid JSON matching the product schema'
      }, {
        role: 'user',
        content: req.body.prompt
      }],
      response_format: { type: 'json_object' }
    });

    const parsed = JSON.parse(completion.choices[0].message.content);
    const validated = AIResponseSchema.parse({
      success: true,
      data: parsed,
      timestamp: new Date().toISOString()
    });

    res.status(200).json(validated);
  } catch (error) {
    res.status(500).json({ success: false, error: error.message });
  }
}
Note that this handler uses the Node.js runtime's (req, res) signature. If you opt the route into the Edge Runtime, rewrite it against the Web Request/Response API instead.
Tip
Always validate AI responses before sending them to clients to ensure data consistency.
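The parse-and-validate flow in the handler can be isolated and exercised without calling OpenAI at all. A minimal sketch with the model call stubbed out (safeParseJson and buildResponse are illustrative names, not part of the tutorial's code):

```javascript
// Mirrors the handler's try/catch around JSON.parse: return a result
// object instead of throwing, so the caller decides how to respond.
function safeParseJson(text) {
  try {
    return { success: true, data: JSON.parse(text) };
  } catch (error) {
    return { success: false, error: error.message };
  }
}

// Builds the same envelope the handler sends: an HTTP status plus a body.
function buildResponse(modelOutput) {
  const parsed = safeParseJson(modelOutput);
  if (!parsed.success) {
    return { status: 500, body: { success: false, error: 'Model returned invalid JSON' } };
  }
  return {
    status: 200,
    body: { success: true, data: parsed.data, timestamp: new Date().toISOString() },
  };
}
```

Factoring the flow out like this makes the failure path unit-testable: feed it malformed strings and assert you get a 500 envelope rather than a crash.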
Step 4: Configure environment variables in Vercel

In your Vercel dashboard, navigate to your project and click Settings. Select Environment Variables from the sidebar. Add your required variables:
  • OPENAI_API_KEY - Your OpenAI API key
You don't need to set NODE_ENV yourself; Vercel sets it automatically for production builds. For local development, create a .env.local file with the same variables, or pull them down with the Vercel CLI: vercel env pull .env.local
Tip
Use different API keys for development and production environments for better security and cost control.
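A missing key tends to surface as an opaque mid-request failure, so it can help to fail fast at startup instead. A minimal sketch (the checkEnv helper and its location are an assumption, not part of the tutorial's code):

```javascript
// lib/checkEnv.js (hypothetical location) - throw at startup if a
// required environment variable is missing, instead of mid-request.
const REQUIRED = ['OPENAI_API_KEY'];

export function checkEnv(env = process.env) {
  const missing = REQUIRED.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(', ')}`);
  }
  return true;
}
```

Calling it once at module load in your API route makes a misconfigured deployment fail loudly in the function logs rather than returning confusing 500s.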
Step 5: Implement client-side integration with error handling

Create a client component to consume your structured AI API:
'use client';

import { useState } from 'react';

export default function AIGenerator() {
  const [result, setResult] = useState(null);
  const [error, setError] = useState(null);
  const [loading, setLoading] = useState(false);

  const generateContent = async (prompt) => {
    setLoading(true);
    setError(null);
    try {
      const response = await fetch('/api/ai/generate', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ prompt })
      });

      if (!response.ok) throw new Error('Generation failed');

      const data = await response.json();
      setResult(data);
    } catch (err) {
      setError(err.message);
      console.error('AI generation error:', err);
    } finally {
      setLoading(false);
    }
  };

  return (
    <div>
      {/* Your UI components, e.g. a prompt input that calls generateContent */}
    </div>
  );
}
Tip
Implement proper loading states and error boundaries for better user experience.
Step 6: Deploy and test your structured AI endpoints

Deploy your application using vercel --prod or push to your connected GitHub repository. Test your endpoints using the Vercel Functions tab in your dashboard. Navigate to Functions > View Function Logs to monitor performance and errors. Use tools like Postman or curl to test your API endpoints:
curl -X POST https://your-app.vercel.app/api/ai/generate \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Generate a product for electronics category"}'
Tip
Use Vercel's preview deployments to test changes before pushing to production.
Step 7: Optimize performance and implement caching

Add response caching to your API routes for repeated queries. Implement edge caching by adding headers:
res.setHeader('Cache-Control', 'public, s-maxage=300, stale-while-revalidate=600');
Consider using Vercel KV for caching AI responses:
import { kv } from '@vercel/kv';

// hash() stands for any deterministic hash of the prompt string
const cacheKey = `ai-response-${hash(prompt)}`;
const cached = await kv.get(cacheKey);

if (cached) {
  return res.json(cached);
}

// Generate a new response, then cache it for an hour
await kv.setex(cacheKey, 3600, response);
Monitor usage in the Vercel dashboard under Analytics.
Tip
Cache frequently requested AI outputs to reduce API costs and improve response times.
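The hash(prompt) call in the snippet above is a placeholder; one simple way to implement it on the Node.js runtime is a SHA-256 digest from node:crypto (cacheKey here is an illustrative name):

```javascript
import { createHash } from 'node:crypto';

// Deterministic cache key: identical prompts always map to the same key,
// so repeated requests hit the cache instead of the OpenAI API.
function cacheKey(prompt) {
  const digest = createHash('sha256').update(prompt).digest('hex');
  return `ai-response-${digest}`;
}
```

Note that node:crypto is not available on the Edge Runtime; there you would use the Web Crypto API (crypto.subtle.digest) instead.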

Troubleshooting

Edge Runtime compatibility issues with certain packages
Check whether your dependencies support the Edge Runtime. Replace incompatible packages, or fall back to the default Node.js runtime by removing the Edge Runtime opt-in from the affected route. The Node.js runtime is the better fit for complex operations that need full Node APIs.
Function timeout errors during AI generation
Increase function timeout in vercel.json:
"functions": { "pages/api/ai/*.js": { "maxDuration": 30 } }
Consider implementing streaming responses or breaking complex operations into smaller chunks.
Schema validation failures with AI responses
Add more specific prompting to ensure AI follows your schema. Implement fallback parsing with z.safeParse() and provide clear error messages. Use few-shot examples in your AI prompts to improve output consistency.
Environment variables not accessible in Edge Runtime
Ensure environment variables are properly configured in Vercel dashboard and available at build time. Some variables may need to be prefixed with NEXT_PUBLIC_ for client-side access, but never expose API keys this way.

Ready to get started with Vercel?

Put this tutorial into practice. Visit Vercel and follow the steps above.
