
Vercel AI SDK + humanhours

Pattern, not a package. This page documents how to wire the JavaScript SDK around Vercel AI SDK calls. The JS SDK itself is in preview (see /docs/sdks/javascript).

The cleanest way to track Vercel AI SDK calls is via withTask, since each generateText / streamText / generateObject call is one "agent task" by definition. No middleware needed.

import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { Humanhours } from "@humanhours/sdk";

const hh = new Humanhours({ apiKey: process.env.HUMANHOURS_API_KEY! });

async function summarizeMeeting(transcript: string): Promise<string> {
  return hh.withTask(
    {
      agent_id: "meeting-summariser",
      task_type: "meeting_summary_30min",
    },
    async () => {
      const res = await generateText({
        model: openai("gpt-5"),
        prompt: `Summarise this meeting in 5 bullets:\n\n${transcript}`,
      });
      return res.text;
    },
  );
}

Capturing token cost

If you want cost-saved-minus-agent-cost (Level 3 ROI), pass agent_cost_eur derived from the model's pricing:

const metaCarrier: Record<string, unknown> = {};

const result = await hh.withTask(
  { agent_id, task_type, metadata: { model: "gpt-5" } },
  async () => {
    const res = await generateText({ /* model + prompt, as in the first example */ });
    // Example rates for illustration ($0.005/1k input, $0.015/1k output) --
    // check current OpenAI pricing before relying on these numbers.
    const costUsd =
      (res.usage.promptTokens / 1000) * 0.005 +
      (res.usage.completionTokens / 1000) * 0.015;
    // Pass values out through the closure so they can be attached to the
    // task record after withTask resolves; convert USD to EUR before
    // reporting agent_cost_eur.
    Object.assign(metaCarrier, { agent_cost_eur: costUsd, ...res.usage });
    return res.text;
  },
);
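The per-call arithmetic above can be factored into a small pure helper, which is also easy to unit-test. This is a sketch: `CostRates`, `TokenUsage`, and `callCostUsd` are names introduced here, not part of the humanhours SDK, and the rates are placeholders.

```typescript
// Illustrative per-1k-token rates in USD -- substitute current model pricing.
interface CostRates {
  inputPer1k: number;
  outputPer1k: number;
}

// Mirrors the usage shape returned by generateText in the examples above.
interface TokenUsage {
  promptTokens: number;
  completionTokens: number;
}

// Pure cost calculation: (tokens / 1000) * rate, summed over input and output.
function callCostUsd(usage: TokenUsage, rates: CostRates): number {
  return (
    (usage.promptTokens / 1000) * rates.inputPer1k +
    (usage.completionTokens / 1000) * rates.outputPer1k
  );
}
```

For example, `callCostUsd({ promptTokens: 2000, completionTokens: 500 }, { inputPer1k: 0.005, outputPer1k: 0.015 })` comes to roughly 0.0175 USD.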

For one-shot scripts, use withTask directly. For long-lived agents that fire many calls, route them through your own wrapper that keeps a running agent_id and a per-call task_type map.

Found a typo or want to suggest an edit? Email support@triadagency.ai.