Get started with the OpenAI Responses API
The OpenAI Responses API allows you to build an AI Agent with OpenAI's models.
Client-side setup
First, install the AI Agent extension.
```sh
npm install @tiptap-pro/extension-ai-agent
```
Then, import the extension and configure it with the AiAgentProvider class.
```js
import { Editor } from '@tiptap/core'
import StarterKit from '@tiptap/starter-kit'
import AiAgent, { AiAgentProvider } from '@tiptap-pro/extension-ai-agent'

const provider = new AiAgentProvider()

const editor = new Editor({
  extensions: [
    StarterKit,
    AiAgent.configure({
      provider,
    }),
  ],
})
```
Inside the AI Agent provider, define a resolver function that calls your backend. Also define an adapter function that converts the chat messages to the format expected by the OpenAI Responses API.
```js
import AiAgent, { AiAgentProvider, openaiResponsesAdapter } from '@tiptap-pro/extension-ai-agent'

const provider = new AiAgentProvider({
  adapter: openaiResponsesAdapter,
  // The llmMessages property contains the chat messages in the format expected by the OpenAI API
  resolver: async ({ llmMessages }) => {
    // Call the API endpoint of your backend
    const response = await fetch('/api-endpoint', {
      method: 'POST',
      body: JSON.stringify({ llmMessages }),
    })
    return await response.json()
  },
})
```
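The resolver above returns the parsed JSON directly. In practice it helps to check the HTTP status before parsing, so that a backend failure surfaces as an error instead of malformed data. A minimal sketch, where the `/api-endpoint` path and the error message are placeholders for your own:

```js
// Sketch of a resolver with basic error handling. The endpoint path and
// error message are placeholders; adapt them to your backend.
async function resolve({ llmMessages }) {
  const response = await fetch('/api-endpoint', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ llmMessages }),
  })
  if (!response.ok) {
    // Fail loudly so the error can be handled upstream instead of
    // passing a malformed response back to the extension
    throw new Error(`AI Agent request failed with status ${response.status}`)
  }
  return await response.json()
}
```

You can pass a function like this as the `resolver` option of the provider.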
In the next section, we'll see how to implement the API endpoint that returns the response in the correct format.
Server-side setup
First, install the AI Agent and OpenAI server libraries.
```sh
npm install @tiptap-pro/extension-ai-agent @tiptap-pro/extension-ai-agent-server openai
```
Then, inside your API endpoint, create an AiAgentToolkit instance. It lets you configure the tools that will be available to the AI model.
```js
import { AiAgentToolkit } from '@tiptap-pro/extension-ai-agent-server'

const toolkit = new AiAgentToolkit()
```
After creating the toolkit, send the request to the OpenAI Responses API.
```js
import { AiAgentToolkit } from '@tiptap-pro/extension-ai-agent-server'
import { openaiResponsesAdapter } from '@tiptap-pro/extension-ai-agent'
import OpenAI from 'openai'

const toolkit = new AiAgentToolkit()

// Initialize the OpenAI client
const openai = new OpenAI()

// Call the OpenAI Responses API. llmMessages is the array of chat
// messages that the client sent in the request body.
const response = await openai.responses.create({
  model: 'gpt-4.1',
  input: [
    {
      role: 'developer',
      content: `
        <Your system prompt>
        ${toolkit.getSystemPrompt()}
      `,
    },
    ...llmMessages,
  ],
  // Provide the tools that the AI model can call
  tools: toolkit.getTools(openaiResponsesAdapter),
})
```
At the end of the system prompt, include the system prompt generated by the AiAgentToolkit instance by calling toolkit.getSystemPrompt(). It contains instructions on how to use the tools.

To write the system prompt, see the system prompt guide. It includes an example system prompt that you can use as a starting point.
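To make the message assembly concrete, here is a hypothetical helper (not part of the Tiptap or OpenAI libraries) that builds the `input` array in the order described above: your own system prompt first, the toolkit's generated instructions appended to it, then the chat messages.

```js
// Illustrative helper: assembles the `input` array for the Responses API.
// systemPrompt is your own prompt, toolkitPrompt is the string returned by
// toolkit.getSystemPrompt(), and llmMessages are the client's chat messages.
function buildInput(systemPrompt, toolkitPrompt, llmMessages) {
  return [
    // The developer message combines both prompts
    { role: 'developer', content: `${systemPrompt}\n${toolkitPrompt}` },
    // The chat messages follow, already in the OpenAI format
    ...llmMessages,
  ]
}
```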
Finally, use openaiResponsesAdapter to convert the response to the format expected by the AI Agent extension.

```js
const result = openaiResponsesAdapter.parseResponse(response)
```

The result should be returned as the response of your API endpoint; it becomes the return value of the resolver function.
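Putting the server-side pieces together, here is a sketch of a request handler factory, assuming an Express-style req/res interface. The names `createAiAgentHandler`, `callModel`, and `parseResponse` are illustrative, not part of any library; the model call and response parsing are injected so that in a real app you would pass in the `openai.responses.create` call and `openaiResponsesAdapter.parseResponse` shown above.

```js
// Illustrative factory for an AI Agent endpoint handler. callModel should
// send llmMessages to the OpenAI Responses API; parseResponse should
// convert the raw response for the AI Agent extension.
function createAiAgentHandler({ callModel, parseResponse }) {
  return async (req, res) => {
    try {
      // The client's resolver sends { llmMessages } in the request body
      const { llmMessages } = req.body
      const response = await callModel(llmMessages)
      // Return the parsed result; it becomes the resolver's return value
      res.json(parseResponse(response))
    } catch (err) {
      // Report failures so the client-side resolver can handle them
      res.status(500).json({ error: 'AI Agent request failed' })
    }
  }
}
```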