Get started with the OpenAI Responses API
The OpenAI Responses API allows you to build an AI agent with OpenAI's models.
Code demo available
This guide includes a code demo to help you get started. See the GitHub repository.
Client-side setup
First, install the AI Agent extension.
npm install @tiptap-pro/extension-ai-agent

Then, import the extension and configure it with the AiAgentProvider class.
import { Editor } from '@tiptap/core'
import StarterKit from '@tiptap/starter-kit'
import AiAgent, { AiAgentProvider } from '@tiptap-pro/extension-ai-agent'
const provider = new AiAgentProvider()
const editor = new Editor({
  extensions: [
    StarterKit,
    AiAgent.configure({
      provider,
    }),
  ],
})

Inside the AiAgentProvider, define a resolver function that calls your backend.
import AiAgent, { AiAgentProvider } from '@tiptap-pro/extension-ai-agent'
const provider = new AiAgentProvider({
  // The `chatMessages` property contains the chat messages of the conversation
  resolver: async ({ chatMessages, schemaAwarenessData }) => {
    // Call the API endpoint of your backend
    const response = await fetch('/api-endpoint', {
      method: 'POST',
      body: JSON.stringify({ chatMessages, schemaAwarenessData }),
    })
    return await response.json()
  },
})

In the next section, we'll see how to implement the API endpoint that returns the response in the correct format.
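If you want request failures to surface as rejections instead of JSON parsing errors, you can add a basic status check to the resolver. The sketch below keeps the same /api-endpoint route as above; the Content-Type header and error message are illustrative, not required by the extension.

import { AiAgentProvider } from '@tiptap-pro/extension-ai-agent'
const provider = new AiAgentProvider({
  resolver: async ({ chatMessages, schemaAwarenessData }) => {
    const response = await fetch('/api-endpoint', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ chatMessages, schemaAwarenessData }),
    })
    // Fail loudly if the backend returns an error status
    if (!response.ok) {
      throw new Error(`AI Agent request failed with status ${response.status}`)
    }
    return await response.json()
  },
})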
Server-side setup
First, install the AI Agent extension and OpenAI server libraries.
npm install @tiptap-pro/extension-ai-agent-server openai

Get the chat messages and schema awareness data from the request parameters.
// Code inside your API endpoint. The exact code depends on your backend framework
const { chatMessages, schemaAwarenessData } = request

Then, inside your API endpoint, create an AiAgentToolkit instance. It lets you configure the tools that will be available to the AI model.
import { AiAgentToolkit, openaiResponsesAdapter } from '@tiptap-pro/extension-ai-agent-server'
const toolkit = new AiAgentToolkit({
  adapter: openaiResponsesAdapter,
  schemaAwarenessData,
})

Also define a ChatMessagesFormatter instance. It lets you convert the chat messages to the format expected by the OpenAI Responses API.
import {
  AiAgentToolkit,
  openaiResponsesAdapter,
  ChatMessagesFormatter,
} from '@tiptap-pro/extension-ai-agent-server'
const formatter = new ChatMessagesFormatter({
  initialMessages: chatMessages,
  adapter: openaiResponsesAdapter,
})

After creating the toolkit and the formatter, send the request to the OpenAI Responses API.
import {
  AiAgentToolkit,
  openaiResponsesAdapter,
  ChatMessagesFormatter,
} from '@tiptap-pro/extension-ai-agent-server'
import OpenAI from 'openai'
const { chatMessages, schemaAwarenessData } = request
const toolkit = new AiAgentToolkit({
  adapter: openaiResponsesAdapter,
  schemaAwarenessData,
})
const formatter = new ChatMessagesFormatter({
  // Get the chat messages from the request body
  initialMessages: chatMessages,
  adapter: openaiResponsesAdapter,
})
// Initialize the OpenAI client
const openai = new OpenAI()
// Call the OpenAI Responses API
const response = await openai.responses.create({
  model: 'gpt-4.1',
  input: [
    {
      role: 'developer',
      content: `
<Your system prompt>
${toolkit.getSystemPrompt()}
`,
    },
    ...formatter.format(),
  ],
  // Provide the tools that the AI model can call
  tools: toolkit.format(),
})

At the end of the system prompt, include the prompt generated by the AiAgentToolkit instance by calling toolkit.getSystemPrompt(). It contains instructions on how to use the tools.
To write the system prompt, see the system prompt guide. It includes an example system prompt that you can use as a starting point.
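For example, you can assemble the developer message like this. This is only a sketch: the instruction text is a placeholder, and you should replace it with your own prompt or the example from the system prompt guide.

// Assemble the developer message: your own instructions first,
// followed by the tool instructions generated by the toolkit.
// The instruction text below is a placeholder, not the recommended prompt.
const systemPrompt = `
You are an AI agent that edits rich text documents based on the user's instructions.

${toolkit.getSystemPrompt()}
`

Pass systemPrompt as the content of the developer message in the request shown above.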
Finally, use the formatter to convert the response to the format expected by the AI Agent extension.
formatter.addAiResponse(response)
const resolverResponse = formatter.getResolverResponse()

The value returned from formatter.getResolverResponse() should be the response of your API endpoint, and it becomes the return value of the resolver function on the client side.
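To put it all together, here is a minimal sketch of a complete endpoint. It assumes an Express server, which is not required by the extension; the /api-endpoint route matches the client-side resolver above, and the request and response handling will differ in other frameworks.

// Hypothetical Express endpoint; adapt the request/response handling to your framework
import express from 'express'
import OpenAI from 'openai'
import {
  AiAgentToolkit,
  openaiResponsesAdapter,
  ChatMessagesFormatter,
} from '@tiptap-pro/extension-ai-agent-server'

const app = express()
app.use(express.json())

app.post('/api-endpoint', async (req, res) => {
  // Chat messages and schema awareness data sent by the client-side resolver
  const { chatMessages, schemaAwarenessData } = req.body

  const toolkit = new AiAgentToolkit({
    adapter: openaiResponsesAdapter,
    schemaAwarenessData,
  })
  const formatter = new ChatMessagesFormatter({
    initialMessages: chatMessages,
    adapter: openaiResponsesAdapter,
  })

  const openai = new OpenAI()
  const response = await openai.responses.create({
    model: 'gpt-4.1',
    input: [
      {
        role: 'developer',
        content: `<Your system prompt>\n${toolkit.getSystemPrompt()}`,
      },
      ...formatter.format(),
    ],
    tools: toolkit.format(),
  })

  // Convert the OpenAI response into the format expected by the AI Agent extension
  formatter.addAiResponse(response)
  res.json(formatter.getResolverResponse())
})

app.listen(3000)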