Audio-first forms

Turn any form into a live audio interview.

Talkform asks questions out loud, fills structured variables directly from the conversation, and exports product-friendly JSON for apps, workflows, and AI agents. Bring over an existing Typeform, Google Form, Jotform, HubSpot form, or any public form URL and turn it into an editable live interview.

Move faster from what you already have

Import a public form URL, review the extracted draft, and launch the audio version without rebuilding from scratch.

Built for agents

MCP tools, a CLI, JSON schemas, and docs that explain exactly how to configure and consume Talkform.

Old way / New way

Most teams already have a form. The win is importing that starting point, tightening the copy, and giving people a conversational path that feels easier to finish.

Old way

A static form demands fresh attention on every screen.

Steps: open the page, read, scan, type, submit
Time to finish: longer, and easier to abandon midway
Completion likelihood: lower when the form feels like work

New way

Talkform carries the interview and writes the fields for the respondent.

Steps: open, answer out loud, review the captured draft
Time to finish: shorter, because the host keeps momentum
Completion likelihood: higher when the flow feels guided instead of manual

How it works

Products keep the schema they need. Talkform owns the interview, structured extraction, and export surface.

1. Define the fields

Describe the required variables, prompt copy, options, and validation in `AudioformConfig`.
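The exact shape of `AudioformConfig` is defined by the published schema, not shown here, so the sketch below is only illustrative: a plausible config for the customer-intake example later on this page, with hypothetical property names (`fields`, `prompt`, `type`, `options`).

```typescript
// Hypothetical AudioformConfig shape for illustration; the real schema is
// published at /schemas/audioform-config.json on the hosted app.
interface FieldConfig {
  id: string;              // key that appears under `fields` in the export
  prompt: string;          // what the host asks out loud
  required: boolean;
  type: "string" | "number" | "multi_select";
  options?: string[];      // choices for multi_select fields
}

const customerIntake: { formId: string; fields: FieldConfig[] } = {
  formId: "customer-intake",
  fields: [
    { id: "fullName", prompt: "What should I call you?", required: true, type: "string" },
    { id: "role", prompt: "What's your role?", required: true, type: "string" },
    {
      id: "goal",
      prompt: "What are you hoping to get out of this?",
      required: true,
      type: "multi_select",
      options: ["upskill_current_job", "ship_ai_projects"],
    },
    { id: "aiComfort", prompt: "How comfortable are you with AI, 1 to 5?", required: true, type: "number" },
    { id: "teamContext", prompt: "Tell me about your team.", required: true, type: "string" },
  ],
};

console.log(customerIntake.fields.filter((f) => f.required).length); // 5 required fields
```

The five field ids match the `fields` object in the example export further down, which is why the completion block there reports 5 of 5 captured.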

2. Run the audio intake

Talkform asks one question at a time over live audio and updates the structured form directly.

3. Export the result

Download JSON or markdown exports, or pull the same data through the HTTP API, CLI, or MCP server.

The product surface

Your answers on the left (Transcript), the live question flow in the middle (Prompt canvas), and captured form answers on the right (Form answers).

Example prompt from the canvas: "Lock the learner identity. Ask for the person's name first, confirm it, and keep the interview moving one question at a time." Captured fields: Name, Role, Goals, AI comfort.

Integrations

Use Talkform from a product UI, a backend, the terminal, or an agent runtime.

React

Embed the widget

Use `@talkform/react` to drop the full audio form experience into a React product.

HTTP API

Create and export sessions

Bootstrap sessions, validate configs, and pull exports over simple JSON endpoints.
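The endpoint paths below are assumptions for illustration (check the Talkform API reference for the real routes); the sketch only shows the general shape of the flow: create a session from a config, then pull the export as JSON.

```typescript
// Hypothetical base URL and endpoint paths; not the real Talkform routes.
const BASE = "https://example-talkform-host.invalid";

function exportUrl(base: string, sessionId: string): string {
  // Export path is an assumption for illustration.
  return `${base}/api/sessions/${encodeURIComponent(sessionId)}/export`;
}

async function createAndExport(config: unknown): Promise<unknown> {
  // Bootstrap a session from a form config...
  const created = await fetch(`${BASE}/api/sessions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(config),
  });
  const { sessionId } = (await created.json()) as { sessionId: string };

  // ...then pull the export once the intake has run.
  const res = await fetch(exportUrl(BASE, sessionId));
  return res.json(); // AudioformSessionResult-shaped JSON
}

console.log(exportUrl(BASE, "session_3e2z1f0c"));
```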

CLI

Scaffold and validate

Generate starter configs, run the demo locally, and export results from the command line.

MCP

Make it agent-usable

Expose templates, schemas, validation, session creation, and exports to coding agents.

Agent quickstart

Agents can discover Talkform from `llms.txt`, create configs, validate them, and consume session results.

CLI

`audioform validate ./customer-intake.json`

Validate a form definition before any product wiring happens.

MCP

`audioform.create_session`

Launch a browser-driven intake session, then fetch the results back through MCP or HTTP export endpoints.

JSON

Canonical export

The same `AudioformSessionResult` shape is used by the UI, HTTP API, CLI, and MCP server.

Canonical output

Talkform exports one stable result schema so downstream products can adapt it into plans, CRM records, or onboarding flows.

AudioformSessionResult

The hosted app also publishes the JSON schemas at `/schemas/audioform-config.json` and `/schemas/audioform-session-result.json`.

{
  "schemaVersion": "1.0",
  "formId": "customer-intake",
  "sessionId": "session_3e2z1f0c",
  "status": "completed",
  "completion": {
    "required": 5,
    "captured": 5,
    "percent": 100,
    "missingFieldIds": []
  },
  "currentPrompt": null,
  "fields": {
    "fullName": "Avery Stone",
    "role": "Product Lead",
    "goal": [
      "upskill_current_job",
      "ship_ai_projects"
    ],
    "aiComfort": 4,
    "teamContext": "Leading a small product team at a B2B SaaS startup."
  },
  "transcript": [
    {
      "id": "t_1",
      "speaker": "assistant",
      "text": "What should I call you?",
      "timestamp": 1
    },
    {
      "id": "t_2",
      "speaker": "user",
      "text": "Avery Stone.",
      "timestamp": 2
    }
  ],
  "summary": "Avery leads product at a SaaS startup and wants to ship AI projects in their current role.",
  "metadata": {
    "model": "gpt-realtime",
    "voice": "marin",
    "startedAt": "2026-03-10T12:00:00.000Z"
  }
}
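As a sketch of that adaptation step, here is a hypothetical mapper that flattens the export above into a CRM-style record. The target field names (`name`, `title`, `interests`, `notes`) are made up for illustration; only the source shape comes from the export.

```typescript
// Minimal slice of AudioformSessionResult needed for this mapping.
interface SessionResult {
  formId: string;
  status: "in_progress" | "completed" | "abandoned";
  fields: Record<string, unknown>;
  summary: string;
}

// Hypothetical CRM-style target shape, keyed off the customer-intake fields.
function toCrmRecord(result: SessionResult) {
  if (result.status !== "completed") {
    throw new Error(`session not finished: ${result.status}`);
  }
  return {
    name: String(result.fields["fullName"] ?? ""),
    title: String(result.fields["role"] ?? ""),
    interests: (result.fields["goal"] as string[] | undefined) ?? [],
    notes: result.summary,
  };
}

const record = toCrmRecord({
  formId: "customer-intake",
  status: "completed",
  fields: {
    fullName: "Avery Stone",
    role: "Product Lead",
    goal: ["upskill_current_job", "ship_ai_projects"],
    aiComfort: 4,
  },
  summary: "Avery leads product at a SaaS startup.",
});

console.log(record.name); // "Avery Stone"
```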

Schema availability

The full JSON Schema for `AudioformSessionResult`:

{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "title": "AudioformSessionResult",
  "type": "object",
  "required": [
    "schemaVersion",
    "formId",
    "sessionId",
    "status",
    "completion",
    "fields",
    "transcript",
    "summary",
    "metadata"
  ],
  "properties": {
    "schemaVersion": {
      "type": "string",
      "const": "1.0"
    },
    "formId": {
      "type": "string"
    },
    "sessionId": {
      "type": "string"
    },
    "status": {
      "type": "string",
      "enum": [
        "in_progress",
        "completed",
        "abandoned"
      ]
    },
    "completion": {
      "type": "object",
      "required": [
        "required",
        "captured",
        "percent",
        "missingFieldIds"
      ],
      "properties": {
        "required": {
          "type": "number"
        },
        "captured": {
          "type": "number"
        },
        "percent": {
          "type": "number"
        },
        "missingFieldIds": {
          "type": "array",
          "items": {
            "type": "string"
          }
        }
      }
    },
    "currentPrompt": {
      "anyOf": [
        {
          "type": "null"
        },
        {
          "type": "object",
          "required": [
            "fieldId",
            "title",
            "detail"
          ],
          "properties": {
            "fieldId": {
              "type": "string"
            },
            "title": {
              "type": "string"
            },
            "detail": {
              "type": "string"
            }
          }
        }
      ]
    },
    "fields": {
      "type": "object",
      "additionalProperties": true
    },
    "transcript": {
      "type": "array",
      "items": {
        "type": "object",
        "required": [
          "id",
          "speaker",
          "text",
          "timestamp"
        ],
        "properties": {
          "id": {
            "type": "string"
          },
          "speaker": {
            "type": "string",
            "enum": [
              "assistant",
              "user",
              "system"
            ]
          },
          "text": {
            "type": "string"
          },
          "timestamp": {
            "type": "number"
          }
        }
      }
    },
    "summary": {
      "type": "string"
    },
    "metadata": {
      "type": "object",
      "required": [
        "model",
        "voice",
        "startedAt"
      ],
      "properties": {
        "model": {
          "type": "string"
        },
        "voice": {
          "type": "string"
        },
        "startedAt": {
          "type": "string"
        },
        "completedAt": {
          "type": "string"
        }
      }
    }
  }
}
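Full validation should run the published schema through a real JSON Schema validator such as Ajv. As a dependency-free sketch, the check below covers only the top-level required keys listed in the schema above.

```typescript
// Top-level required keys, taken from the AudioformSessionResult schema.
const REQUIRED_KEYS = [
  "schemaVersion", "formId", "sessionId", "status",
  "completion", "fields", "transcript", "summary", "metadata",
] as const;

// Returns the required top-level keys missing from a candidate export.
function missingKeys(candidate: Record<string, unknown>): string[] {
  return REQUIRED_KEYS.filter((k) => !(k in candidate));
}

const partial = { schemaVersion: "1.0", formId: "customer-intake" };
console.log(missingKeys(partial));
// → sessionId, status, completion, fields, transcript, summary, metadata
```

This catches only missing keys, not wrong types or bad enum values; for those, compile `/schemas/audioform-session-result.json` with a draft 2020-12 validator.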