Read your dashboard
This is the page for the person who logs into HumanHours but is not the developer who set it up: the owner who wants to validate ROI for the board, the CFO who got a share link, the content lead who wants to know what their writers actually offloaded. It maps every number on the dashboard back to plain English.
If you set up the integration yourself and want the technical detail, the Concepts page is the better starting point.
The four numbers at the top
When you open /overview, four numbers dominate the top of the screen:
TIME SAVED COST SAVED FTE EQUIVALENT EVENTS
52h €2,470 0.30 16
TIME SAVED
The wall-clock hours of human work the agents replaced over the selected period. If your support agent classified 100 emails and a human takes 4 minutes per classification, that is 400 minutes = 6h 40m TIME SAVED.
This number is computed per event from the task_type baseline minus any actual agent_duration_seconds you sent. We never inflate it; if you suspect it is too high, look at the baseline on /task-types and adjust.
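The per-event math described above can be sketched in a few lines. This is illustrative, not the exact server-side implementation; the field names (baseline minutes, agent_duration_seconds) mirror the terms used in these docs.

```python
# Sketch of the per-event TIME SAVED computation: baseline human time
# minus the agent's actual runtime, never inflated (floored at zero).

def time_saved_hours(baseline_minutes: float,
                     agent_duration_seconds: float = 0.0) -> float:
    agent_minutes = agent_duration_seconds / 60
    saved_minutes = max(baseline_minutes - agent_minutes, 0)
    return saved_minutes / 60

# 100 classified emails at the 4-minute baseline, no agent duration sent:
total = sum(time_saved_hours(4) for _ in range(100))
print(round(total, 2))  # 6.67 hours, i.e. 6h 40m
```

Note the floor at zero: an event where the agent was slower than the baseline contributes nothing, rather than a negative saving.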
COST SAVED
TIME SAVED multiplied by the workspace's hourly rate. Each event was stamped with the rate at the moment it landed; changing the rate later only affects future events. Historical cost numbers stay credible to the CFO because they cannot be retroactively pumped.
The currency depends on the workspace's default_currency setting. EUR by default; switch on /settings/workspace if you bill clients in another currency.
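The rate-stamping behaviour is easiest to see as data. A minimal sketch, with illustrative field names, of why raising the hourly rate later leaves historical COST SAVED untouched:

```python
# Each event stores the hourly rate in force when it landed; COST SAVED
# sums over those stamped rates, not the workspace's current rate.
from dataclasses import dataclass

@dataclass
class Event:
    hours_saved: float
    hourly_rate: float  # stamped at ingest time, immutable afterwards

events = [Event(2.0, 45.0), Event(1.5, 45.0)]
current_rate = 60.0  # rate raised later; only future events will use it

cost_saved = sum(e.hours_saved * e.hourly_rate for e in events)
print(cost_saved)  # 157.5: historical events keep their stamped 45.0 rate
```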
FTE EQUIVALENT
How many full-time-employee months the saved hours add up to. Formula: hours_saved / (22 working days × 8 hours per day × period in months), i.e. 176 hours per FTE-month.
A FTE-equivalent of 0.30 over 30 days means "the agent did the work of 0.30 full-time employees this month". When a CFO asks "what is this AI worth in headcount terms", this is the answer. Below 0.5 means the agent is a productivity helper; above 1.0 means it is replacing real headcount work.
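The formula above, spelled out as code, reproduces the 0.30 headline number from the example dashboard:

```python
# FTE EQUIVALENT: saved hours divided by 22 working days x 8 hours
# per day (176 hours) per month in the period.

def fte_equivalent(hours_saved: float, period_months: float) -> float:
    return hours_saved / (22 * 8 * period_months)

print(round(fte_equivalent(52, 1), 2))  # 0.3 for 52h over one month
```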
EVENTS
How many tracked tasks the agent ran in the period. One event per task. If you see this number lower than expected, check that the agent's track call is actually firing on every run (the most common cause is wiring it inside a conditional branch).
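The conditional-branch bug is worth seeing concretely. This is a hypothetical sketch: `track` stands in for whatever your HumanHours client call is actually named, and the handler shape is invented for illustration.

```python
# BUG pattern: track only fires on one branch, so EVENTS undercounts.
def handle_email_buggy(email, classify, track):
    label = classify(email)
    if label == "urgent":
        track(task_type="email_classification")  # skipped for most emails
    return label

# Fix: the track call fires unconditionally, once per run.
def handle_email_fixed(email, classify, track):
    label = classify(email)
    track(task_type="email_classification")  # fires on every run
    return label
```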
The bar below the EVENTS card breaks down success, failure, and needs_review outcomes. Healthy customer-support agents land 60-70% success and 25-35% needs_review; below 50% success suggests the AI is not actually saving the team time after corrections.
The chart in the middle
Time-saved daily over the selected period. Use the period switcher (top right: 7D / 30D / MTD / LAST MONTH / 90D / YTD / ALL) to zoom in or out. The subtitle above the chart shows the total for the period plus the FTE-equivalent over that exact window.
The two panels below the chart
TOP AGENTS ranks the agents that contributed the most hours in the period. If you have one agent doing 80% of the work, this is normal early; once you have multiple agents the ratio levels out. Click "All agents →" to drill into individual agents.
TOP TASK TYPES ranks the work categories. Useful for the conversation "where is our AI actually saving time?" If 90% is email_classification and 5% is customer_email_draft, the team is using the AI for the simple step but still writing replies by hand — there is room to expand AI scope.
The recent events feed
A live ticker of the last 8 events. Each row shows when, which agent, which task type, what the outcome was, and how much time it saved. Realtime: when an agent fires, the new row slides in within ~2 seconds. Useful for "is the integration actually working right now?" sanity checks.
Other pages worth knowing
- /agents: every agent in the workspace, ranked by recent activity. Click one to see its detail (lifetime events, daily chart, recent events list).
- /task-types: the baseline minutes per task type. Override here if your team is faster or slower than our defaults; the change applies to future events only.
- /reports: drill-down views (TIME SAVED breakdown, COST SAVED breakdown, agent leaderboard, usage). This is where you generate the share-with-CFO public link from.
- /reports/email: configure a weekly + monthly digest to your inbox (or a non-HumanHours email like cfo@company.com). The monthly version is finance-toned and pairs with the share-with-CFO link.
- /billing: plan + usage + invoice details. On Agency, this lives on the anchor workspace; on a child workspace, this page redirects you to the anchor.
What the share-with-CFO link shows
When someone you sent a share link to opens it, they see a stripped-down version of /reports/time-saved with the same numbers as the dashboard but no edit controls, no settings, and no other workspaces. They never see your other clients (relevant if you are an agency hosting multiple clients in one tree).
The link expires after the period you set when you created it. They cannot drill into events, cannot export, cannot change the period. It is a one-way window onto the headline numbers.
Numbers that should make you double-check
A few patterns that mean the integration is misconfigured rather than the agent doing real work:
- TIME SAVED jumps by exactly 0.067h per event in customer support: someone is sending the default email_classification baseline (4 min) for everything, and probably forgot to wire individual task_type values.
- EVENTS climb 100x in a day: a flow is in a retry loop or a webhook is fanning out. Check the rate-limit hits.
- EVENTS at zero on /overview: the integration is wired against a different workspace. Check that the API key the integration uses (/api-keys) is for THIS workspace, not a sibling.
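The flat-0.067h pattern is easy to screen for if you export per-event savings. A quick sketch, assuming you have a list of hours-saved values; the data shape is illustrative:

```python
# If every event in a sample saves the same ~4-minute default, the
# integration probably isn't sending individual task_type values.

DEFAULT_BASELINE_HOURS = 4 / 60  # ~0.067h, the email_classification default

def looks_like_default_baseline(hours_saved_per_event, tolerance=0.001):
    return all(abs(h - DEFAULT_BASELINE_HOURS) < tolerance
               for h in hours_saved_per_event)

print(looks_like_default_baseline([0.067, 0.067, 0.067]))  # True: suspicious
print(looks_like_default_baseline([0.25, 0.067, 1.5]))     # False: varied
```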
What to read next
- Track LinkedIn outreach — set up the agent that produces these numbers in the first place.
- Concepts — what an "event", "task type", "agent", "outcome" formally means.
- Pricing — the link between event volume and your monthly bill.