Content AI commands
Commands
Text

| Command | Description |
| --- | --- |
| aiAdjustTone() | Adjusts the tone of voice |
| aiComplete() | Completes the selected text |
| aiDeEmojify() | Removes emojis from your text |
| aiEmojify() | Adds emojis ✨ to your text |
| aiExtend() | Extends your text |
| aiFixSpellingAndGrammar() | Fixes spelling & grammar |
| aiTextPrompt() | Runs your custom prompt |
| aiRephrase() | Rephrases the selected text |
| aiShorten() | Shortens the selected text |
| aiSimplify() | Rephrases your text in simplified words |
| aiSummarize() | Summarizes your text |
| aiTldr() | Creates a "Too Long; Didn't Read" version of your text |
| aiTranslate() | Translates the selected text into the specified language |

Image

| Command | Description |
| --- | --- |
| aiImagePrompt() | Generates an image based on prompt and style |
aiAdjustTone()
Adjusts the tone of voice of the selected text to the specified tone.
// Tone: 'default' | 'academic' | 'business' | 'casual' | 'childfriendly' | 'confident' | 'conversational' | 'creative' | 'emotional' | 'excited' | 'formal' | 'friendly' | 'funny' | 'humorous' | 'informative' | 'inspirational' | 'memeify' | 'narrative' | 'objective' | 'persuasive' | 'poetic' | string
editor.chain().focus().aiAdjustTone(tone: Tone, options: TextOptions).run()
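For instance, to rewrite the selection in a formal tone while streaming the result, a call could look like this (the option values are illustrative; all fields of the options object are optional, see Text Options):

```typescript
// Illustrative TextOptions object; every field is optional (see Text Options)
const options = {
  modelName: 'gpt-4',
  stream: true,
}

// Rewrite the current selection in a formal tone:
// editor.chain().focus().aiAdjustTone('formal', options).run()
```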
aiComplete()
Completes the selected text.
editor.chain().focus().aiComplete(options: TextOptions).run()
aiDeEmojify()
Removes emojis from your text selection.
editor.chain().focus().aiDeEmojify(options: TextOptions).run()
aiEmojify()
Adds emojis ✨ to your text selection.
editor.chain().focus().aiEmojify(options: TextOptions).run()
aiExtend()
Extends your written text.
editor.chain().focus().aiExtend(options: TextOptions).run()
aiFixSpellingAndGrammar()
Runs a spell- and grammar-check on your selected text.
editor.chain().focus().aiFixSpellingAndGrammar(options: TextOptions).run()
aiTextPrompt()
Runs your own custom prompt. The given text will be used as the prompt.
editor.chain().focus().aiTextPrompt(options: TextOptions).run()
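A concrete call can pass the prompt via the text option, as the custom-command example at the end of this page does (the prompt wording here is illustrative):

```typescript
// The prompt is passed via the text option, as in the custom-command example below
const promptText = 'Summarize the following in one sentence: Tiptap is a headless editor framework.'

// editor.chain().focus().aiTextPrompt({ text: promptText }).run()
```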
aiRephrase()
Rephrases your currently selected text.
editor.chain().focus().aiRephrase(options: TextOptions).run()
aiShorten()
Shortens your currently selected text.
editor.chain().focus().aiShorten(options: TextOptions).run()
aiSimplify()
Uses simple language to rephrase your selected text.
editor.chain().focus().aiSimplify(options: TextOptions).run()
aiSummarize()
Summarizes the selected text content.
editor.chain().focus().aiSummarize(options: TextOptions).run()
aiTldr()
Creates a "Too Long; Didn't Read" version of your selected text.
editor.chain().focus().aiTldr(options: TextOptions).run()
aiTranslate()
Translates the selected text content into the given output language.
It accepts two-letter ISO 639-1 language codes.
// Language: 'en' | 'de' | 'nl' | ...
editor.chain().focus().aiTranslate(language: Language, options: TextOptions).run()
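For example, translating the selection into German with streaming enabled could look like this (the option values are illustrative):

```typescript
// ISO 639-1 code for the target language
const targetLanguage = 'de'

// Translate the selection into German, streaming the result into the editor:
// editor.chain().focus().aiTranslate(targetLanguage, { stream: true }).run()
```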
aiImagePrompt()
Generates an image based on your prompt and the desired style.
Make sure to load the image extension ('@tiptap/extension-image') in your editor instance.
editor.chain().focus().aiImagePrompt(options: ImageOptions).run()
Command Options
Text Options
Every command that supports additional options lets you fine-tune the output:
| Setting | Type | Default | Definition |
| --- | --- | --- | --- |
| modelName | see Supported text models | gpt-3.5-turbo | The OpenAI model to use |
| language | string (e.g. en, de) | null | Although we do our best to prompt OpenAI for a response in the language of the input, it's sometimes better to define it yourself. |
| tone | string | null | The tone of voice the response should be transformed to |
| stream | boolean | false | Whether the command should stream characters into the editor, similar to the typewriter behavior in ChatGPT. Requires the newest extension version. |
| collapseToEnd | boolean | true | Whether the cursor should be set to the end after the operation, or the inserted text should be selected. |
Unfortunately, the combination of tone and language sometimes leads to responses that are not in the desired language.
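Putting these settings together, a call combining several of the options above could look like this (the values are illustrative):

```typescript
// Illustrative TextOptions combining several settings from the table above
const textOptions = {
  modelName: 'gpt-3.5-turbo', // default model
  language: 'en',             // force the response language
  stream: true,               // typewriter-style streaming
  collapseToEnd: false,       // keep the inserted text selected
}

// editor.chain().focus().aiSimplify(textOptions).run()
```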
Supported text models
We currently support the following OpenAI chat models:
- gpt-4
- gpt-4-turbo-preview
- gpt-4-0125-preview
- gpt-4-1106-preview
- gpt-4-0613
- gpt-4-32k
- gpt-4-32k-0613
- gpt-3.5-turbo-0125
- gpt-3.5-turbo
- gpt-3.5-turbo-1106
- gpt-3.5-turbo-16k
Image Options
With these settings you can control how the image is generated:
| Setting | Type | Default | Definition |
| --- | --- | --- | --- |
| modelName | dall-e-2, dall-e-3, null | dall-e-3 | The OpenAI model to use |
| style | photorealistic, digital_art, comic_book, neon_punk, isometric, line_art, 3d_model | photorealistic | Defines the image style |
| size | 256x256, 512x512, 1024x1024 | null | The size of the generated image |
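An aiImagePrompt() call with the documented settings could look like this (only the settings from the table above are shown; the values are illustrative):

```typescript
// Illustrative ImageOptions per the table above
const imageOptions = {
  modelName: 'dall-e-3',
  style: 'line_art',
  size: '1024x1024',
}

// editor.chain().focus().aiImagePrompt(imageOptions).run()
```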
Choosing the Right Model
When configuring the Tiptap AI extension, consider the specific needs of your application:
- For Cost-Effective Operations: Opt for GPT-3.5 or DALL-E 2 if the primary concern is budget and the tasks do not demand the most advanced capabilities.
- For Advanced Requirements: Choose GPT-4 or DALL-E 3 when your application requires the highest level of language understanding or image generation quality, and budget is less of a constraint.
The Tiptap AI extension's flexible configuration allows you to tailor the AI integration to match your specific requirements and budgetary considerations.
Note: Pricing varies, so refer to the official OpenAI pricing page for up-to-date figures.
Register custom command and run own prompt
To register your own AI commands, simply extend the AI extension, add your command in addCommands() (don't forget to inherit the predefined commands via this.parent?.()), and execute aiTextPrompt() to run your individual prompt.
Please note that this example uses your prompt on the client-side, which means that users could read it. If you're looking to use a custom Large Language Model (LLM) or a prompt on your backend, please refer to the advanced example provided here.
import { Ai, getHTMLContentBetween } from '@tiptap-pro/extension-ai'
// … other imports
// Declare typings if TypeScript is used:
//
// declare module '@tiptap/core' {
// interface Commands<ReturnType> {
// ai: {
// aiCustomTextCommand: () => ReturnType,
// }
// }
// }
const AiExtended = Ai.extend({
addCommands() {
return {
...this.parent?.(),
aiCustomTextCommand: () => ({ editor, state }) => {
const { from, to } = state.selection
const selectedText = getHTMLContentBetween(editor, from, to)
return editor.commands.aiTextPrompt({ text: `Translate the following text to French and add some emojis: ${selectedText}` })
},
}
},
})
// … this is where you initialize your Tiptap editor instance and register the extended extension
const editor = useEditor({
extensions: [
StarterKit,
AiExtended.configure({ /* … */ }),
],
content: '',
})
// … use this to run your new command:
// editor.chain().focus().aiCustomTextCommand().run()