Since the launch of Tiptap's AI services, we have received a lot of feedback and feature requests for more sophisticated use cases. These were the most requested topics:
- We can't or won't use your cloud service.
- We have a custom LLM solution and want to use it with Tiptap's Content AI.
- Our product uses custom indexes to get even better results from AI suggestions.
- We want to track what our users do with Tiptap's Content AI.
We implemented these features, improved the documentation, and are launching them today as Content AI Advanced for Entry, Business, and Enterprise customers.
Define the GPT model per command
We have put a lot of care into selecting the right GPT model for each prompt, so that developers can get started with Content AI without any extra work. But we also understand that there are use cases where you would like to choose the model yourself, so we have added a configuration option for exactly that.
Types: `gpt-4`, `gpt-4-1106-preview`, `gpt-3.5-turbo`, `gpt-3.5-turbo-16k`, `null`
Default: `gpt-3.5-turbo`
editor.chain().focus().aiTranslate({ modelName: 'gpt-4' }).run()
Stream all responses into your Tiptap Editor
Streaming the AI's response into the editor instead of showing a loading indicator is a huge UX improvement. Until now, streaming was only possible with the autocomplete feature. With Content AI Advanced, you can stream any AI text response.
editor.chain().focus().aiTranslate({ stream: true }).run()
Use your custom LLM
If you want to use your own backend which provides access to a custom LLM, you can now override the resolver functions defined in the Content AI extension configuration (only available for Business customers).
In this example, we rely entirely on a custom backend:
// ...
import Ai from '@tiptap-pro/extension-ai-advanced'
// ...

Ai.configure({
  appId: 'APP_ID_HERE',
  token: 'TOKEN_HERE',
  // ...
  // Define the resolver function for completions
  // (attention: streaming and image have to be defined separately!)
  aiCompletionResolver: async ({
    action, text, textOptions, extensionOptions, defaultResolver,
  }) => {
    // Check against action, text, whatever you like
    // to decide whether to use the custom endpoint
    if (action === 'rephrase') {
      const response = await fetch('https://dummyjson.com/quotes/random')
      const json = await response.json()

      if (!response.ok) {
        throw new Error(`${response.status} ${json?.message}`)
      }

      return json?.quote
    }

    // Everything else is routed to the Tiptap AI service
    return defaultResolver({
      action, text, textOptions, extensionOptions,
    })
  },
})
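The comment above points out that streaming responses are resolved separately from completions. Assuming the extension exposes a corresponding `aiStreamResolver` option that returns a `ReadableStream` of text chunks (please verify the exact signature in the Content AI docs — this is a hedged sketch, not a definitive implementation, and the backend URL is a placeholder), a custom streaming endpoint could be wired up along the same lines:

```javascript
// Sketch only: assumes an `aiStreamResolver` option and a custom
// backend at a placeholder URL that streams plain-text chunks.
Ai.configure({
  appId: 'APP_ID_HERE',
  token: 'TOKEN_HERE',
  aiStreamResolver: async ({
    action, text, textOptions, extensionOptions, defaultResolver,
  }) => {
    if (action === 'rephrase') {
      const response = await fetch('https://your-backend.example.com/stream', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ text, ...textOptions }),
      })

      if (!response.ok) {
        throw new Error(`${response.status} ${response.statusText}`)
      }

      // `response.body` is already a ReadableStream of the streamed chunks
      return response.body
    }

    // Everything else streams via the Tiptap AI service
    return defaultResolver({ action, text, textOptions, extensionOptions })
  },
})
```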
Create custom commands
In this example, we register a new editor command named `aiCustomTextCommand`, use Tiptap's `runAiTextCommand` function to let Tiptap do the rest, and add a custom command resolution to call a custom backend (in completion mode).
// …
import { Ai, runAiTextCommand } from '@tiptap-pro/extension-ai-advanced'
// …

const AiExtended = Ai.extend({
  addCommands() {
    return {
      ...this.parent?.(),

      aiCustomTextCommand: (options = {}) => props => {
        // Do whatever you want, e.g. get the selected text
        // and pass it to the specific command resolution
        return runAiTextCommand(props, 'customCommand', options)
      },
    }
  },
})
// … this is where you initialize your Tiptap editor instance
// and register the extended extension
const editor = useEditor({
  extensions: [
    /* … add other extensions */
    AiExtended.configure({
      /* … add configuration here (appId, token, etc.) */
      aiCompletionResolver: async ({
        action, text, textOptions, extensionOptions, defaultResolver,
      }) => {
        if (action === 'customCommand') {
          const response = await fetch('https://dummyjson.com/quotes/random')
          const json = await response.json()

          if (!response.ok) {
            throw new Error(`${response.status} ${json?.message}`)
          }

          return json?.quote
        }

        return defaultResolver({
          action, text, textOptions, extensionOptions,
        })
      },
    }),
  ],
  content: '',
})
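Once the extended extension is registered, the new command can be invoked like any built-in Content AI command. This usage sketch assumes that the options object is forwarded to the resolver in the same way as for the built-in commands:

```javascript
// Run the custom command on the current selection
editor.chain().focus().aiCustomTextCommand().run()

// Options are passed through to your resolver, so command-level
// settings can be supplied the same way as in the earlier examples
editor.chain().focus().aiCustomTextCommand({ stream: true }).run()
```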
Read the Content AI docs