Integrating AI Assistant with your application
This guide will take you step-by-step through the process of running AI Assistant in your editor integration. It also presents possible configuration and customization options.
This is a premium feature and you need a license for it on top of your CKEditor 5 commercial license. Contact us to receive an offer tailored to your needs.
You can also sign up for the CKEditor Premium Features 30-day free trial to test the feature.
# Supported AI services
First, you will need to decide which AI service provider you want to integrate with. CKEditor does not provide the AI model itself. Instead, the feature relies on an external service to provide AI-generated responses.
We offer support for three leading platforms that provide AI services: OpenAI, Azure OpenAI, and Amazon Bedrock.
Since the feature relies on an external provider, the quality of the responses depends on that provider and their model.
If you have no constraints regarding the platform you can use, we recommend integrating with the OpenAI API. It provides better quality than the alternatives and is the simplest to set up.
If you wish to use the Amazon Web Services (AWS) platform, we recommend the latest Claude model. It provides better quality than other models available on Amazon Bedrock.
This guide includes tips on how to set up the supported AI platforms. We expect that the integrator knows how their chosen platform works and how to configure it to best fit their use case.
# Using a proxy endpoint
Before moving to the integration, there is one more subject to cover.
There are two general approaches to how the feature can communicate with the AI service provider: directly, or using an endpoint in your application.
Direct connection is simpler to set up and should not involve changes in your application’s backend, so it is recommended for development purposes. AI Assistant supports it because it makes it easier for you to test the feature without committing time to setting up the backend part of the integration.
However, this method exposes your private authorization data, which is a serious security issue. You should never use it in a production environment.
In the final solution, your application should provide an endpoint that the AI Assistant will call instead of calling the AI service directly. The main goal of this endpoint is to hide the authorization data from the editor users. The request to the AI service should happen from your backend, without exposing authorization credentials.
The application endpoint is also a good place to implement additional functionalities, like request customization, user billing, or logging statistics.
# Installation
⚠️ New import paths
Starting with version 42.0.0, we changed the format of import paths. This guide uses the new, shorter format. Refer to the Packages in the legacy setup guide if you use an older version of CKEditor 5.
After installing the editor, add the feature to your plugin list and toolbar configuration:
import { ClassicEditor } from 'ckeditor5';
import { AIAssistant } from 'ckeditor5-premium-features';
ClassicEditor
.create( document.querySelector( '#editor' ), {
// Load the plugin.
plugins: [ AIAssistant, /* ... */ ],
// Provide the license key.
licenseKey: '<YOUR_LICENSE_KEY>',
// Add the "AI commands" and "AI Assistant" buttons to the toolbar.
toolbar: [ 'aiCommands', 'aiAssistant', /* ... */ ],
// AI configuration will land here:
ai: {
// ...
}
} )
.then( /* ... */ )
.catch( /* ... */ );
Read more about installing plugins and toolbar configuration.
# Integration
In the next step, you will need to set up the AI service of your choice and integrate the editor with it:
# OpenAI integration
This section describes how to integrate the AI Assistant with the OpenAI platform.
# Set up the account
Create an OpenAI account and get your OpenAI API key.
# Making the connection
To connect to the OpenAI service, you will need to add a connection adapter plugin to the editor. The adapter is responsible for making requests in the correct format and handling the responses.
Import the OpenAITextAdapter plugin from the ckeditor5-premium-features package and add it to the list of plugins.
Then, add the OpenAI key to the editor configuration. You should send the key in the request “Authorization” header. You can set the request headers using the config.ai.openAI.requestHeaders configuration property.
The snippet below presents the described changes:
import { ClassicEditor } from 'ckeditor5';
import { AIAssistant, OpenAITextAdapter } from 'ckeditor5-premium-features';
ClassicEditor
.create( document.querySelector( '#editor' ), {
plugins: [ AIAssistant, OpenAITextAdapter, /* ... */ ],
licenseKey: '<YOUR_LICENSE_KEY>',
toolbar: [ 'aiCommands', 'aiAssistant', /* ... */ ],
ai: {
openAI: {
requestHeaders: {
// Paste your OpenAI API key in place of YOUR_OPENAI_API_KEY:
Authorization: 'Bearer YOUR_OPENAI_API_KEY'
}
}
// ...
}
} )
.then( /* ... */ )
.catch( /* ... */ );
This is the minimal setup required to launch AI Assistant. You can test it now.
# Request parameters
You can further configure how the OpenAI adapter works using the config.ai.openAI.requestParameters option:
- Choose the exact OpenAI model to use.
- Set whether the response should be streamed (simulating the “writing” experience) or returned all at once.
- Fine-tune the model behavior.
See the OpenAI reference to learn what parameters you can use and how they affect the responses.
# Supported models
By default, the OpenAI adapter will use the “default” (the most recent stable) GPT-3.5 model.
CKEditor 5 supports all recent GPT-3.5 and GPT-4 models as well as legacy models (version 0613).
You can find more information about offered models in the OpenAI documentation.
# Integrating with the proxy endpoint
As described earlier, before moving to production, you should create an endpoint that will communicate with the OpenAI service, instead of connecting directly and exposing your OpenAI API key.
For the OpenAI integration, you can implement the endpoint, in its simplest form, as a transparent proxy service. The service will get the requests from the editor, add authorization headers to them, and pass them to the AI service. Then, you should pass all responses back to the editor.
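A transparent proxy of this kind can be sketched in a few lines of Node.js. The sketch below assumes an Express-style handler; the helper name, the endpoint path, and the `OPENAI_API_KEY` environment variable are illustrative assumptions, not part of the AI Assistant API.

```javascript
// Builds the upstream request to the OpenAI API from the body sent by the editor.
function buildUpstreamRequest( editorRequestBody, apiKey ) {
	return {
		url: 'https://api.openai.com/v1/chat/completions',
		options: {
			method: 'POST',
			headers: {
				'Content-Type': 'application/json',
				// The key is added here, on the backend,
				// so it is never exposed to editor users.
				Authorization: 'Bearer ' + apiKey
			},
			// The editor request body is passed through unchanged.
			body: JSON.stringify( editorRequestBody )
		}
	};
}

// Example Express-style route handler (shown without streaming for brevity).
async function handleAiRequest( req, res ) {
	const { url, options } = buildUpstreamRequest( req.body, process.env.OPENAI_API_KEY );
	const upstreamResponse = await fetch( url, options );

	// Pass the response back to the editor unchanged.
	res.status( upstreamResponse.status ).send( await upstreamResponse.text() );
}
```

With a route like this in place, config.ai.openAI.apiUrl should point at the URL served by the handler.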
After you have implemented the endpoint, set the URL to your endpoint using the config.ai.openAI.apiUrl option. Also, remember to remove the OpenAI key from the configuration:
import { ClassicEditor } from 'ckeditor5';
import { AIAssistant, OpenAITextAdapter } from 'ckeditor5-premium-features';
ClassicEditor
.create( document.querySelector( '#editor' ), {
plugins: [ AIAssistant, OpenAITextAdapter, /* ... */ ],
licenseKey: '<YOUR_LICENSE_KEY>',
toolbar: [ 'aiCommands', 'aiAssistant', /* ... */ ],
ai: {
openAI: {
apiUrl: 'https://url.to.your.application/ai'
}
// ...
}
} )
.then( /* ... */ )
.catch( /* ... */ );
Now, all requests are redirected to 'https://url.to.your.application/ai'.
# Additional authorization and custom headers
Depending on your application, it might be necessary to pre-authorize the request before sending it to your application endpoint. One of the common patterns is to use JSON Web Token (JWT) authorization.
This, and similar cases, are supported through the config.ai.openAI.requestHeaders option. You can set it to an object or an asynchronous function that resolves with an object. The object is then set as the request headers.
For example, you can set config.ai.openAI.requestHeaders to a function that queries the authorization API and sets the returned JWT in an authorization header:
import { ClassicEditor } from 'ckeditor5';
import { AIAssistant, OpenAITextAdapter } from 'ckeditor5-premium-features';
ClassicEditor
.create( document.querySelector( '#editor' ), {
plugins: [ AIAssistant, OpenAITextAdapter, /* ... */ ],
licenseKey: '<YOUR_LICENSE_KEY>',
toolbar: [ 'aiCommands', 'aiAssistant', /* ... */ ],
ai: {
openAI: {
apiUrl: 'https://url.to.your.application/ai',
requestHeaders: async () => {
// Assuming the endpoint responds with the raw token.
const response = await fetch( 'https://url.to.your.auth.endpoint/' );
const jwt = await response.text();
return {
Authorization: 'Bearer ' + jwt
};
}
}
// ...
}
} )
.then( /* ... */ )
.catch( /* ... */ );
The requestHeaders function also receives the actionId parameter. It identifies the action that the user performed, which allows for further customization on your end.
{
requestHeaders: async ( actionId ) => {
// Assuming the endpoint responds with the raw token.
const response = await fetch( 'https://url.to.your.auth.endpoint/?actionId=' + actionId );
const jwt = await response.text();
return {
Authorization: 'Bearer ' + jwt
};
}
}
# Advanced customization
The most flexible place to apply request processing customization is your application endpoint. However, if for any reason you cannot customize the request on your application’s backend, you can consider the following extension points on the editor side.
Dynamic request headers.
As mentioned earlier, you can provide config.ai.openAI.requestHeaders as an asynchronous function that can make a call to your application. You can use the actionId parameter for further customization.
Dynamic request parameters.
Similarly to request headers, you can provide config.ai.openAI.requestParameters as an asynchronous function. The function is also passed the actionId. You can return different parameters based on it. For example, you can use different models for different actions.
Customizing request messages.
The request messages passed to the OpenAI service are built based on how the user used the feature: the selected content and the provided query.
You can overload the OpenAITextAdapter#prepareMessages() method to customize the request messages or provide custom logic that will create the request messages.
For example:
- You can fine-tune the system message for specific (or your custom) predefined commands.
- You can pre-query your application to get extra context information and add it as an additional message.
- You can get additional context or data from the editor, document data, or your custom features.
- You can alter or redact parts of the context before sending it to the service.
import { OpenAITextAdapter } from 'ckeditor5-premium-features';
class CustomOpenAITextAdapter extends OpenAITextAdapter {
public async prepareMessages( query, context, actionId ) {
const messages = await super.prepareMessages( query, context, actionId );
// Customize `messages` based on your requirements.
// You can use `actionId` to target only specific actions.
// You can make a call to your backend since `prepareMessages` is an asynchronous function.
// You can use `this.editor` to get access to the editor API.
return messages;
}
}
Remember to add CustomOpenAITextAdapter to the plugin list instead of OpenAITextAdapter.
Altering AI service responses.
Each feature that sends a request provides the onData() callback. The callback is executed each time the adapter receives the data from the AI service. You can decorate this callback to customize the response. This will require overloading the OpenAITextAdapter#sendRequest() method and changing the requestData.onData parameter:
import { OpenAITextAdapter } from 'ckeditor5-premium-features';
class CustomOpenAITextAdapter extends OpenAITextAdapter {
public async sendRequest( requestData ) {
const originalOnData = requestData.onData;
requestData.onData = ( content ) => {
// Customize `content` based on your requirements.
// You can use `requestData.actionId` to target only specific actions.
// ...
// Then call the original callback with the modified `content`.
originalOnData( content );
};
return super.sendRequest( requestData );
}
}
If the adapter works in the streaming mode, the content will include a partial, accumulating response. This may bring some extra complexity to your custom handling.
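If your customization needs only the newly streamed fragment rather than the whole accumulated text, you can track the previously received value yourself. This is an illustrative pattern, not a dedicated adapter API:

```javascript
// Sketch: deriving the newly streamed fragment from an accumulating response.
// `content` always contains the full response received so far.
function createDeltaTracker() {
	let previous = '';

	return content => {
		// The new fragment is whatever was appended since the last update.
		const delta = content.slice( previous.length );
		previous = content;

		return delta;
	};
}
```

A tracker created before the request can then be called inside the decorated onData() callback with each accumulated content value.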
Remember to add CustomOpenAITextAdapter to the plugin list instead of OpenAITextAdapter.
Overloading the sendRequest() method.
You can overload the sendRequest() method to add some processing before or after making the call.
import { OpenAITextAdapter } from 'ckeditor5-premium-features';
class CustomOpenAITextAdapter extends OpenAITextAdapter {
public async sendRequest( requestData ) {
// Do something before making the actual request.
return super.sendRequest( requestData ).then( () => {
// Do something after the request has finished.
} );
}
}
Remember to add CustomOpenAITextAdapter to the plugin list instead of OpenAITextAdapter.
# Azure OpenAI integration
This section describes how to integrate the AI Assistant with the Azure OpenAI Service.
Microsoft’s Azure platform provides many AI-related services. AI Assistant supports only the OpenAI models.
# Set up the service
First, you will need to create an Azure account if you do not already have one.
You need to follow these steps to set up the AI Assistant:
- Log in to your Azure account.
- Create an “Azure OpenAI” resource.
- Go to the “Azure OpenAI” resource and open “Keys and Endpoint” to find your API key(s).
- Go to “Model deployments” and then create a deployment. Select the model and the name you want to use for that deployment.
- You will need the resource name, API key, and deployment name to configure the AI Assistant.
# Making the connection
To connect to the Azure OpenAI service, you will need to add a connection adapter plugin to the editor. The adapter is responsible for making requests in the correct format and handling the responses.
Import the OpenAITextAdapter plugin from the ckeditor5-premium-features package and add it to the list of plugins.
Then, you will need to configure the AI Assistant, so it connects to the Azure OpenAI service using your data:
- The request URL as specified in the Azure OpenAI reference (it will include your deployment name and the API version). We tested AI Assistant with the 2023-12-01-preview API version.
- You need to pass the API key in the request api-key header.
The snippet below presents the described changes:
import { ClassicEditor } from 'ckeditor5';
import { AIAssistant, OpenAITextAdapter } from 'ckeditor5-premium-features';
ClassicEditor
.create( document.querySelector( '#editor' ), {
plugins: [ AIAssistant, OpenAITextAdapter, /* ... */ ],
licenseKey: '<YOUR_LICENSE_KEY>',
toolbar: [ 'aiCommands', 'aiAssistant', /* ... */ ],
ai: {
openAI: {
// Paste your resource name, deployment name, and API version
// in place of YOUR_RESOURCE_NAME, YOUR_DEPLOYMENT_NAME, and YOUR_API_VERSION:
apiUrl: 'https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions?api-version=YOUR_API_VERSION',
requestHeaders: {
'api-key': 'YOUR_AZURE_OPEN_AI_API_KEY'
}
}
}
} )
.then( /* ... */ )
.catch( /* ... */ );
This is the minimal setup required to launch AI Assistant. You can test it now.
# Request parameters
You can further configure how the OpenAI adapter works using the config.ai.openAI.requestParameters option:
- Set whether the response should be streamed (simulating the “writing” experience) or returned all at once.
- Fine-tune the model behavior.
See the Azure OpenAI reference to learn what parameters you can use and how they affect the responses.
You may also set requestParameters to an asynchronous function. In this case, it should resolve with an object that contains the parameters. The function receives actionId as a parameter, which identifies the action that the user performed. This allows for further customization on your end.
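A minimal sketch of such a function, assuming a hypothetical `summarize` command ID and illustrative parameter values:

```javascript
// Sketch: choosing request parameters per user action.
// Action IDs follow the `aiAssistant:command:<commandId>` pattern;
// the `summarize` command ID below is a hypothetical example.
const requestParameters = async actionId => {
	if ( actionId === 'aiAssistant:command:summarize' ) {
		// Hypothetical: return summaries in one piece, without streaming.
		return { stream: false, temperature: 0 };
	}

	// Default parameters for all other actions.
	return { stream: true };
};
```

The function can be passed directly as the config.ai.openAI.requestParameters value.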
# Supported models and API versions
CKEditor 5 supports all recent GPT-3.5 and GPT-4 models as well as legacy models (version 0613).
You can find more information about offered models in the Azure OpenAI documentation.
The most recent tested API version is 2023-12-01-preview.
# Integrating with the proxy endpoint
As described earlier, before moving to production, you should create an endpoint that will communicate with the Azure OpenAI service, instead of connecting directly and exposing your API key.
See the “Integrating with the proxy endpoint” section for the OpenAI integration, as the process is the same for both platforms.
# Advanced customization
The most flexible place to apply request processing customization is your application endpoint. However, if for any reason you cannot customize the request on your application’s backend, you can extend the OpenAITextAdapter. There are many extension points which you may consider.
See the “Advanced customization” section for the OpenAI integration, as it is the same for both platforms.
# Amazon Bedrock integration
This section describes how to integrate the AI Assistant with the Amazon Bedrock service. Amazon Bedrock is a service for building generative AI applications on AWS.
# Set up the account
First, you will need to create an AWS account if you do not already have one.
You need to follow these steps to set up the AI Assistant:
- Log in to your AWS account.
- Set up the Amazon Bedrock service.
- Go to the Amazon Bedrock management console.
- Go to “Model access” » “Manage model access.”
- Request access to the models of your choice.
# Making the connection
To connect to the Amazon Bedrock service, you will need to add a connection adapter plugin to the editor. The adapter is responsible for making requests in the correct format and handling the responses.
Import the AWSTextAdapter plugin from the ckeditor5-premium-features package and add it to the list of plugins.
import { ClassicEditor } from 'ckeditor5';
import { AIAssistant, AWSTextAdapter } from 'ckeditor5-premium-features';
ClassicEditor
.create( document.querySelector( '#editor' ), {
plugins: [ AIAssistant, AWSTextAdapter, /* ... */ ],
licenseKey: '<YOUR_LICENSE_KEY>',
toolbar: [ 'aiCommands', 'aiAssistant', /* ... */ ],
ai: {
aws: {
// ...
}
// ...
}
} )
.then( /* ... */ )
.catch( /* ... */ );
For direct connection with Amazon Bedrock, AWSTextAdapter uses BedrockRuntimeClient from the AWS SDK library.
To integrate AI Assistant, you will need to set proper authorization credentials in the config.ai.aws.bedrockClientConfig option.
There are many ways to authenticate a user using the AWS platform. Choose the one that is convenient for you. From the integration perspective, the only necessary step is to set up the correct parameters in the config.ai.aws.bedrockClientConfig option. The value from this option is passed to the BedrockRuntimeClient constructor.
The snippet below presents the described changes:
import { ClassicEditor } from 'ckeditor5';
import { AIAssistant, AWSTextAdapter } from 'ckeditor5-premium-features';
ClassicEditor
.create( document.querySelector( '#editor' ), {
plugins: [ AIAssistant, AWSTextAdapter, /* ... */ ],
licenseKey: '<YOUR_LICENSE_KEY>',
toolbar: [ 'aiCommands', 'aiAssistant', /* ... */ ],
ai: {
aws: {
bedrockClientConfig: {
// Fill in your service region, for example, 'us-west-2'.
region: 'YOUR_SERVICE_REGION',
credentials: {
// Paste your credentials in place of YOUR_ACCESS_KEY_ID and YOUR_SECRET_ACCESS_KEY.
accessKeyId: 'YOUR_ACCESS_KEY_ID',
secretAccessKey: 'YOUR_SECRET_ACCESS_KEY'
}
}
}
// ...
}
} )
.then( /* ... */ )
.catch( /* ... */ );
This is the minimal setup required to launch AI Assistant. You can test it now.
# Request parameters
You can further configure how the Amazon Bedrock adapter works using the config.ai.aws.requestParameters option.
See the Amazon Bedrock model parameters reference to learn more.
Different models support different sets of parameters.
# Supported models
The AWS text adapter supports many models available through Amazon Bedrock.
However, at the time of writing this guide, we recommend using models from the Claude family as they give the best results. Other models tend to return worse, non-formatted answers, fail to perform the instruction, do not support streaming, and are far less stable.
We tested the adapter with the following models:
- Claude by Anthropic: 'anthropic.claude-v2', 'anthropic.claude-instant-v1'
- Llama 2 by Meta: 'meta.llama2-70b-chat-v1' (and the 13b variant)
- Command by Cohere: 'cohere.command-text-v14', 'cohere.command-light-text-v14'
- Jurassic-2 by AI21: 'ai21.j2-mid-v1'
# Integrating with the proxy endpoint
As described earlier, before moving to production, it may be necessary to create an endpoint on the backend of your application that will communicate with the Amazon Bedrock service.
This is not necessary if your integration’s frontend authorization is secure and will not lead to exposing your secret authorization credentials. Still, providing an endpoint may be beneficial, as it could enable you to enhance requests, gather statistics, or control service usage.
We recommend using the AWS SDK to implement the endpoint. It is provided in many popular programming languages.
When you switch to using an endpoint, all requests performed by AI Assistant will be passed to your endpoint instead of the AWS endpoints. The payload will include all the necessary data for you to make the actual request to the AWS endpoint.
As a response, the adapter will expect to get a JSON object with the full data (if not streaming), or multiple parts of the data (if streaming).
For example, if the Claude model streamed '<p>This is a test.</p>' in three updates, the adapter should receive three responses, similar to the following:
Response 1: '{ "completion": "<p>Thi" }\n'
Response 2: '{ "completion": "s is a " }\n'
Response 3: '{ "completion": "test.</p>" }\n'
You must separate the JSON strings by a newline character (\n) for the adapter to be able to parse them as individual JSON objects.
Different models return different JSON structures, and your endpoint should always respond with the structure used by the given model. However, this should be transparent for your endpoint implementation: Amazon Bedrock responds with update chunks that already have the proper structure. You only need to decode these chunks (from their byte representation to a string) before passing them to the adapter.
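The decoding step can be sketched as a small helper in your endpoint. The helper name is an illustrative assumption; the bytes come from the chunks of the Bedrock response stream:

```javascript
// Sketch: turning a Bedrock stream chunk into the newline-terminated
// JSON string expected by the adapter.
function encodeChunkForAdapter( chunkBytes ) {
	// Bedrock update chunks arrive as bytes; decode them to a string...
	const json = new TextDecoder().decode( chunkBytes );

	// ...and terminate each JSON object with a newline so the adapter
	// can parse the objects individually.
	return json + '\n';
}
```

Your endpoint would call this helper for every chunk it receives and write the result to the response it streams back to the editor.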
After you implement the endpoint, set the URL to your endpoint using the config.ai.aws.apiUrl option. Also, remember to remove the configuration for the BedrockRuntimeClient:
import { ClassicEditor } from 'ckeditor5';
import { AIAssistant, AWSTextAdapter } from 'ckeditor5-premium-features';
ClassicEditor
.create( document.querySelector( '#editor' ), {
plugins: [ AIAssistant, AWSTextAdapter, /* ... */ ],
licenseKey: '<YOUR_LICENSE_KEY>',
toolbar: [ 'aiCommands', 'aiAssistant', /* ... */ ],
ai: {
aws: {
apiUrl: 'https://url.to.your.application/ai'
}
// ...
}
} )
.then( /* ... */ )
.catch( /* ... */ );
Now, all requests are redirected to 'https://url.to.your.application/ai'.
# Additional authorization and custom headers
Depending on your application, it might be necessary to pre-authorize the request before sending it to your application endpoint. One of the common patterns is to use JSON Web Token (JWT) authorization.
This, and similar cases, are supported through the config.ai.aws.requestHeaders option. You can set it to an object or an asynchronous function that resolves with an object. The object is then set as the request headers.
For example, you can set config.ai.aws.requestHeaders to a function that queries the authorization API and sets the returned JWT in an authorization header:
import { ClassicEditor } from 'ckeditor5';
import { AIAssistant, AWSTextAdapter } from 'ckeditor5-premium-features';
ClassicEditor
.create( document.querySelector( '#editor' ), {
plugins: [ AIAssistant, AWSTextAdapter, /* ... */ ],
licenseKey: '<YOUR_LICENSE_KEY>',
toolbar: [ 'aiCommands', 'aiAssistant', /* ... */ ],
ai: {
aws: {
apiUrl: 'https://url.to.your.application/ai',
requestHeaders: async () => {
// Assuming the endpoint responds with the raw token.
const response = await fetch( 'https://url.to.your.auth.endpoint/' );
const jwt = await response.text();
return {
Authorization: 'Bearer ' + jwt
};
}
}
// ...
}
} )
.then( /* ... */ )
.catch( /* ... */ );
The requestHeaders function also receives the actionId parameter. It identifies the action that the user performed, which allows for further customization on your end.
{
requestHeaders: async ( actionId ) => {
// Assuming the endpoint responds with the raw token.
const response = await fetch( 'https://url.to.your.auth.endpoint/?actionId=' + actionId );
const jwt = await response.text();
return {
Authorization: 'Bearer ' + jwt
};
}
}
# Advanced customization
The most flexible place to apply request processing customization is your application endpoint. However, if for any reason you cannot customize the request on your application’s backend, you can consider the following extension points on the editor side.
Dynamic request headers.
As mentioned earlier, you can provide config.ai.aws.requestHeaders as an asynchronous function that can make a call to your application. You can use the actionId parameter for further customization.
Dynamic request parameters.
Similarly to request headers, you can provide config.ai.aws.requestParameters as an asynchronous function as well. The function is also passed the actionId. You can return different parameters based on it. For example, you can use different models for different actions.
Customizing request prompt.
The request prompt passed to the AI model is built based on how the user used the feature: the selected content and the provided query.
You can overload the AWSTextAdapter#preparePrompt() method to customize the prompt sent to the AI model.
For example:
- You can fine-tune the prompt for specific (or your custom) predefined commands.
- You can pre-query your application to get extra context information and add it to the prompt.
- You can get extra context or data from the editor, document data, or your custom features.
- You can alter or redact parts of the context before sending it to the service.
- You can change the prompt based on the used model. Various models may require a specific prompt format.
import { AWSTextAdapter } from 'ckeditor5-premium-features';
class CustomAWSTextAdapter extends AWSTextAdapter {
public async preparePrompt( query, context, model, actionId ) {
// Customize `query` and `context` based on your requirements.
// You can use `actionId` to target only specific actions.
// Then, call the method in the parent class.
// Alternatively, you can generate the full prompt on your own.
// Keep in mind that various models may require a specific prompt format.
// You can make a call to your backend since `preparePrompt` is an asynchronous function.
// You can use `this.editor` to get access to the editor API.
return super.preparePrompt( query, context, model, actionId );
}
}
Remember to add CustomAWSTextAdapter to the plugin list instead of AWSTextAdapter.
Altering AI service responses.
Each feature that sends a request provides the onData() callback. The callback is executed each time the adapter receives the data from the AI service. You can decorate this callback to customize the response. This will require overloading the AWSTextAdapter#sendRequest() method and changing the requestData.onData parameter:
import { AWSTextAdapter } from 'ckeditor5-premium-features';
class CustomAWSTextAdapter extends AWSTextAdapter {
public async sendRequest( requestData ) {
const originalOnData = requestData.onData;
requestData.onData = ( content ) => {
// Customize `content` based on your requirements.
// You can use `requestData.actionId` to target only specific actions.
// ...
// Then call the original callback with the modified `content`.
originalOnData( content );
};
return super.sendRequest( requestData );
}
}
If the adapter works in the streaming mode, content will include a partial, accumulating response. This may bring some extra complexity to your custom handling.
Remember to add CustomAWSTextAdapter to the plugin list instead of AWSTextAdapter.
Overloading the sendRequest() method.
You can overload the sendRequest() method to add some processing before or after making the call.
import { AWSTextAdapter } from 'ckeditor5-premium-features';
class CustomAWSTextAdapter extends AWSTextAdapter {
public async sendRequest( requestData ) {
// Do something before making the actual request.
return super.sendRequest( requestData ).then( () => {
// Do something after the request has finished.
} );
}
}
Remember to add CustomAWSTextAdapter to the plugin list instead of AWSTextAdapter.
# Custom models
You can integrate AI Assistant with any service of your choice as well as your custom models.
# Use OpenAI adapter and adjust AI service responses
A simple way to provide support for a different AI model or service is to use the OpenAI integration, and then provide an endpoint in your application that will query the chosen model or service. You need to make sure that the responses passed to the adapter are in the same format as the OpenAI API responses.
In the end, the adapter remains indifferent to what endpoint you connect to. Its role is to create the request data and handle the response. As long as the response format is the same as the one used by the OpenAI API, it will work.
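For a non-streaming response, this amounts to wrapping the custom model's answer in the chat completion response structure used by the public OpenAI API. A minimal sketch (the helper name is an assumption):

```javascript
// Sketch: wrapping a custom model's plain-text answer in the response
// structure used by the OpenAI chat completions API, so the OpenAI adapter
// can consume it unchanged.
function toOpenAIResponse( answerText ) {
	return {
		object: 'chat.completion',
		choices: [
			{
				index: 0,
				message: {
					role: 'assistant',
					content: answerText
				},
				finish_reason: 'stop'
			}
		]
	};
}
```

Your endpoint would query the custom model, pass its answer through this helper, and return the result as JSON.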
# Implement custom adapter
Another method to support different models is to provide a custom implementation of the AITextAdapter plugin.
This will give you more flexibility in creating the request and processing the response. The full implementation will depend on the requirements set by the chosen AI model.
Start with defining your custom adapter class:
import { AITextAdapter } from 'ckeditor5-premium-features';
class CustomAITextAdapter extends AITextAdapter {}
From the editor’s perspective, you will need to implement the sendRequest() method:
import { AITextAdapter } from 'ckeditor5-premium-features';
class CustomAITextAdapter extends AITextAdapter {
public async sendRequest( requestData ) {}
}
This is the place where you should handle the request. The requestData parameter includes the data provided by the feature (for example, AI Assistant) when the feature made the call to the adapter.
The API documentation describes each part of requestData. To better understand it, here is a breakdown using the AI Assistant as an example:
- query – The predefined command query or custom query provided by the user. The instruction for the AI model.
- context – The HTML content selected in the editor when the user made the request. It may be empty.
- actionId – For AI Assistant, this could be aiAssistant:custom or aiAssistant:command:<commandId>. You can use it to handle various user actions differently.
- onData – For AI Assistant, it updates the UI (response area) and saves the updated response in the feature’s internals. You should call it each time the adapter gets an update from the AI service.
In short, you should use query and context (and optionally actionId) to build a prompt for the AI service. Then call onData() when you receive a response. It could happen once (no streaming) or many times (streaming). Support for streaming depends on the AI service.
import { AITextAdapter } from 'ckeditor5-premium-features';
class CustomAITextAdapter extends AITextAdapter {
public async sendRequest( requestData ) {
const prompt = requestData.query + '\n\n' + requestData.context;
const response = await fetch( `http://url.to.ai.service.endpoint/?prompt=${ encodeURIComponent( prompt ) }` );
const responseText = await response.text();
requestData.onData( responseText );
}
}
Alternatively, you can pass query, context, and actionId in the request to your application endpoint and handle them on the backend.
When your adapter fails for some reason, you should throw AIRequestError. The error will be handled by the feature. In the case of AI Assistant, it will be displayed in a red notification box.
import { AITextAdapter, AIRequestError } from 'ckeditor5-premium-features';
class CustomAITextAdapter extends AITextAdapter {
public async sendRequest( requestData ) {
const prompt = requestData.query + '\n\n' + requestData.context;
const response = await fetch( `http://url.to.ai.service.endpoint/?prompt=${ encodeURIComponent( prompt ) }` );
if ( !response.ok ) {
throw new AIRequestError( 'The request failed for an unknown reason.' );
}
const responseText = await response.text();
requestData.onData( responseText );
}
}
Finally, add CustomAITextAdapter to the editor plugin list. Note that you do not need to add any other adapter:
ClassicEditor
.create( element, {
plugins: [ AIAssistant, CustomAITextAdapter, /* ... */ ],
/* ... */
} )
.then( /* ... */ )
.catch( /* ... */ );
If the custom AI model supports streaming, you will receive the response in multiple small chunks. Make sure that each time the `onData()` callback is called, the value passed to it contains the full response so far: the concatenation of the current chunk and all previously received chunks.
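As a sketch of that accumulation logic (the `streamFromAiService()` helper below is hypothetical and stands in for your service's streaming client):

```javascript
// Hypothetical helper standing in for a streaming AI service client.
// A real integration would read chunks from the service's streaming API.
async function* streamFromAiService( prompt ) {
	yield 'Hello';
	yield ', ';
	yield 'world!';
}

// Accumulate chunks so that every onData() call receives the full response so far.
async function sendStreamedRequest( requestData ) {
	const prompt = requestData.query + '\n\n' + requestData.context;
	let responseSoFar = '';

	for await ( const chunk of streamFromAiService( prompt ) ) {
		responseSoFar += chunk;
		requestData.onData( responseSoFar );
	}
}
```

With the chunks above, `onData()` would be called with `'Hello'`, then `'Hello, '`, then `'Hello, world!'`.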
# Configuration and styling
# Adding AI commands to the list
The “AI Commands” button allows quick access to the most common AI Assistant commands. You can extend the default list of commands or define your own list.
Use the `config.ai.aiAssistant.extraCommandGroups` configuration option to extend the default list of commands:
ClassicEditor
	.create( document.querySelector( '#editor' ), {
		ai: {
			// AI Assistant feature configuration.
			aiAssistant: {
				// Extend the default commands configuration.
				extraCommandGroups: [
					// Add a command to an existing group:
					{
						groupId: 'translate',
						commands: [
							{
								id: 'translatePolish',
								label: 'Translate to Polish',
								prompt: 'Translate to Polish language.'
							}
						]
					},
					// Create a new AI commands group:
					{
						groupId: 'transformations',
						groupLabel: 'Transformations',
						commands: [
							{
								id: 'addEmojis',
								label: 'Add emojis',
								prompt: 'Analyze each sentence of this text. After each sentence add an emoji that summarizes the sentence.'
							},
							// ...
						]
					}
				]
			}
		}
	} )
	.then( /* ... */ )
	.catch( /* ... */ );
Use the `config.ai.aiAssistant.commands` configuration option to create the list of commands from scratch:
ClassicEditor
	.create( document.querySelector( '#editor' ), {
		ai: {
			// AI Assistant feature configuration.
			aiAssistant: {
				// Define the commands list from scratch.
				commands: [
					// Command groups keep them organized on the list.
					{
						groupId: 'customGroupId',
						groupLabel: 'My group of commands',
						commands: [
							{
								id: 'translateSpanish',
								label: 'Translate to Spanish',
								prompt: 'Translate this text to Spanish.'
							},
							{
								id: 'explainFive',
								label: 'Explain like I\'m five',
								prompt: 'Explain this like I\'m five years old.'
							},
							// ...
						]
					},
					// You can add more command groups here.
				]
			}
		}
	} )
	.then( /* ... */ )
	.catch( /* ... */ );
Please note that you can avoid creating command groups by passing command definitions directly to the `ai.aiAssistant.commands` configuration key. This will result in a flat list in the user interface.
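For example, a flat list of commands could look as follows (the command id, label, and prompt are illustrative):

```javascript
ClassicEditor
	.create( document.querySelector( '#editor' ), {
		ai: {
			aiAssistant: {
				// Command definitions passed directly, without groups,
				// produce a flat list in the user interface.
				commands: [
					{
						id: 'translateSpanish',
						label: 'Translate to Spanish',
						prompt: 'Translate this text to Spanish.'
					},
					// ...
				]
			}
		}
	} )
	.then( /* ... */ )
	.catch( /* ... */ );
```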
# Removing default commands from the list
You can use the `config.ai.aiAssistant.removeCommands` configuration option to remove some default commands from the list:
ClassicEditor
	.create( document.querySelector( '#editor' ), {
		ai: {
			// AI Assistant feature configuration.
			aiAssistant: {
				// Remove some of the default commands.
				removeCommands: [
					'improveWriting',
					// ...
				]
			}
		}
	} )
	.then( /* ... */ )
	.catch( /* ... */ );
# Removing the violet tint from the UI
By default, some parts of the UI come with a violet tint that distinguishes the AI Assistant from the rest of the CKEditor 5 features. If you do not want this styling in your integration, you can remove it by setting the `config.ai.useTheme` configuration option to `false`:
ClassicEditor
	.create( document.querySelector( '#editor' ), {
		ai: {
			// Remove the default feature's theme.
			useTheme: false
		}
	} )
	.then( /* ... */ )
	.catch( /* ... */ );
The AI Assistant in this editor shares colors with the rest of the UI.
# Using custom colors for the UI
You can customize the look of the AI Assistant UI using CSS custom properties. The snippet below includes the full list of CSS variables that you can set. For instance, use it to change the tint color to red:
.ck-ai-assistant-ui_theme {
	--ck-color-button-default-hover-background: hsl(0, 100%, 96%);
	--ck-color-button-default-active-background: hsl(0, 100%, 96.3%);
	--ck-color-button-on-background: hsl(0, 100%, 96.3%);
	--ck-color-button-on-hover-background: hsl(0, 60%, 92.2%);
	--ck-color-button-on-active-background: hsl(0, 100%, 96.3%);
	--ck-color-button-on-disabled-background: hsl(0, 100%, 96.3%);
	--ck-color-button-on-color: hsl(0, 59.2%, 52%);
	--ck-color-button-action-background: hsl(0, 59.2%, 52%);
	--ck-color-button-action-hover-background: hsl(0, 58.9%, 49.6%);
	--ck-color-button-action-active-background: hsl(0, 58.9%, 49.6%);
	--ck-color-button-action-disabled-background: hsl(0, 59.3%, 75.9%);
	--ck-color-list-button-hover-background: hsl(0, 100%, 96.3%);
	--ck-color-ai-selection: hsl(0, 60%, 90%);
}
The AI Assistant UI in this editor uses a configured red color.
If you set `config.ai.useTheme` to `false` and remove the default color theme, the `.ck-ai-assistant-ui_theme` class will no longer be available. You can still apply custom styles via the `.ck-ai-assistant-ui` CSS class, which stays regardless of the configuration.
# Changing the width of the dialog
Use the following CSS snippet to widen the AI Assistant pop-up dialog:
.ck.ck-ai-form {
	--ck-ai-form-view-width: 800px;
}
# Changing the height of the response area
Use the following CSS snippet to increase the `max-height` CSS property of the response content area and display more content to the users:
.ck.ck-ai-form {
	--ck-ai-form-content-height: 500px;
}
# Styling the AI response area
By default, the AI Assistant’s response content area comes with the `.ck-content` CSS class. This makes it possible for the users to see the response styled the same way as the main editor content (learn more about it in the Content styles guide). However, if your integration uses custom styles outside the `.ck-content` class scope and you want to apply them in the Assistant’s response content area, you can use `config.ai.aiAssistant.contentAreaCssClass` and specify an additional class name (or names) for the element.
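For example (the class name below is illustrative):

```javascript
ClassicEditor
	.create( document.querySelector( '#editor' ), {
		ai: {
			aiAssistant: {
				// Apply an additional CSS class to the response content area.
				contentAreaCssClass: 'my-content-styles'
			}
		}
	} )
	.then( /* ... */ )
	.catch( /* ... */ );
```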
Styling the AI Assistant’s response content area is also possible via the `.ck.ck-ai-form .ck.ck-ai-form__content-field` selector:
.ck.ck-ai-form .ck.ck-ai-form__content-field h2 {
	/* Custom <h2> styles. */
}