Configuration
You can configure CKEditor AI On-Premises using environment variables. Below you will find available environment variables with descriptions.
The license_key variable is used to verify that you own the rights to run CKEditor AI On-Premises. To get your license_key, please contact us. If you leave this option empty or provide an invalid key, CKEditor AI On-Premises will not launch.
The environments_management_secret_key variable should be a hard-to-guess string, preferably generated by an external password generator. It will be used to access the Cloud Services Management Panel.
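For example, using the upper-case variable form shown in the examples below (bracketed values are placeholders to replace with your own):

```shell
LICENSE_KEY=[LICENSE_KEY]
ENVIRONMENTS_MANAGEMENT_SECRET_KEY=[GENERATED_SECRET_KEY]
```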
To configure the MySQL database, you should first set the database_driver variable to mysql.
Then, you can provide:
- database_host and database_port to configure the database address.
- database_user and database_password to configure the database user.
- database_database to set the database that should be used by CKEditor AI On-Premises.
If your database connection should be encrypted, use the database_ssl_ca, database_ssl_key, and database_ssl_cert configurations.
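A complete MySQL configuration might look as follows (bracketed values are placeholders; add the database_ssl_* variables only if your connection is encrypted):

```shell
DATABASE_DRIVER=mysql
DATABASE_HOST=[DATABASE_HOST]
DATABASE_PORT=[DATABASE_PORT]
DATABASE_USER=[DATABASE_USER]
DATABASE_PASSWORD=[DATABASE_PASSWORD]
DATABASE_DATABASE=[DATABASE_NAME]
```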
To configure the Postgres database instead of MySQL, you should first set the database_driver variable to postgres.
Then, you can provide:
- database_host and database_port to configure the database address.
- database_user and database_password to configure the database user.
- database_database and database_schema to indicate which schema should be used by the On-Premises server.
If your database connection should be encrypted, use the database_ssl_ca, database_ssl_key, and database_ssl_cert configurations.
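A complete Postgres configuration might look as follows (bracketed values are placeholders; add the database_ssl_* variables only if your connection is encrypted):

```shell
DATABASE_DRIVER=postgres
DATABASE_HOST=[DATABASE_HOST]
DATABASE_PORT=[DATABASE_PORT]
DATABASE_USER=[DATABASE_USER]
DATABASE_PASSWORD=[DATABASE_PASSWORD]
DATABASE_DATABASE=[DATABASE_NAME]
DATABASE_SCHEMA=[DATABASE_SCHEMA]
```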
Besides the SQL database, you need to configure the Redis database using:
- redis_host and redis_port to configure the database address.
- redis_password and redis_user to configure the database credentials. Both configuration options are optional.
You can also provide redis_db if you would rather not use the default database number, which is set to 1.
If you have a problem connecting to IPv6, try setting redis_ip_family to 6.
If your database connection should be encrypted, use the redis_tls_ca, redis_tls_key, and redis_tls_cert configurations, or set redis_tls_enable to true if you don’t use a custom certificate.
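A typical single-node Redis configuration might look as follows (bracketed values are placeholders; the user, password, and database number are optional as described above):

```shell
REDIS_HOST=[REDIS_HOST]
REDIS_PORT=[REDIS_PORT]
REDIS_USER=[REDIS_USER]
REDIS_PASSWORD=[REDIS_PASSWORD]
REDIS_DB=[REDIS_DB]
```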
To set up a connection with a Redis Cluster, nodes need to be provided as the REDIS_CLUSTER_NODES variable.
- REDIS_CLUSTER_NODES – required (for a Redis Cluster connection)
- REDIS_IP_FAMILY – optional (required only when using an IPv6 domain in REDIS_CLUSTER_NODES)
The REDIS_CLUSTER_NODES variable must contain a comma-separated list of nodes in the following format:
"IP:PORT:[optional PASSWORD],IP:PORT:[optional PASSWORD]"
To understand the connection string format, check the examples below:
# IPv6
REDIS_CLUSTER_NODES: "[0:0:0:0:0:0:0:1]:7000,[0:0:0:0:0:0:0:1]:7001,[0:0:0:0:0:0:0:1]:7002"
# IPv6 with a password
REDIS_CLUSTER_NODES: "[0:0:0:0:0:0:0:1]:7000:password1,[0:0:0:0:0:0:0:1]:7001:password2,[0:0:0:0:0:0:0:1]:7002:password3"
# Domain name
REDIS_CLUSTER_NODES: "example.redis.server.com:7000,example.redis.server.com:7001,example.redis.server.com:7002"
# Domain name with IPv6 support
REDIS_IP_FAMILY: 6
REDIS_CLUSTER_NODES: "example.ipv6.redis.server.com:7000,example.ipv6.redis.server.com:7001,example.ipv6.redis.server.com:7002"
# Domain name with a password
REDIS_CLUSTER_NODES: "example.redis.server.com:7000:password1,example.redis.server.com:7001:password2,example.redis.server.com:7002:password3"
To configure S3 as your file storage, you should first set the storage_driver variable to s3.
Then, you can provide:
- storage_access_key_id and storage_secret_access_key to authorize the service.
- storage_bucket to set the bucket where the files should be stored.
If you use an S3-compatible server, you can set the address with the storage_endpoint variable.
STORAGE_DRIVER=s3
STORAGE_REGION=[AWS_REGION]
STORAGE_ACCESS_KEY_ID=[AWS_ACCESS_KEY_ID]
STORAGE_SECRET_ACCESS_KEY=[AWS_SECRET_ACCESS_KEY]
STORAGE_BUCKET=[AWS_S3_BUCKET]
STORAGE_ENDPOINT=[AWS_S3_ENDPOINT]
To configure Azure Blob Storage as your file storage, you should first set the storage_driver variable to azure.
Then, you can provide:
- storage_account_name and storage_account_key to authorize the service.
- storage_container to set the container where the files should be stored.
STORAGE_DRIVER=azure
STORAGE_ACCOUNT_NAME=[AZURE_ACCOUNT_NAME]
STORAGE_ACCOUNT_KEY=[AZURE_ACCOUNT_KEY]
STORAGE_CONTAINER=[AZURE_CONTAINER]
STORAGE_ENDPOINT=[AZURE_ENDPOINT]
To configure the filesystem as your file storage, you should first set the storage_driver variable to filesystem.
Then you can provide the path to the directory where the files should be stored using the storage_location variable.
STORAGE_DRIVER=filesystem
STORAGE_LOCATION=/var/storage/location
To configure MySQL or Postgres as your file storage, you should set the storage_driver variable to database. When using SQL as a storage driver, the application will use the configuration provided in the database_* variables.
STORAGE_DRIVER=database
CKEditor AI On-Premises supports the following LLM model providers:
- OpenAI
- Anthropic
- Google (Gemini family models)
- Custom providers (OpenAI API compatible)
To configure a provider, you need to set the appropriate provider options in the providers variable. It should be provided in the form of a stringified JSON object:
providers: '{
"anthropic": {
"type": "anthropic",
"name": "Anthropic",
"apiKeys": ["api_key1", "api_key2", "api_key3"]
},
"google": {
"type": "google",
"name": "Google",
"apiKeys": ["api_key1", "api_key2", "api_key3"]
},
"openai": {
"type": "openai",
"name": "OpenAI",
"apiKeys": ["api_key1", "api_key2", "api_key3"]
},
"your-custom-provider": {
"type": "openai-compatible",
"name": "Custom provider",
"baseUrl": "https://your-custom-provider.com",
"headers": {
"Authorization": "Bearer token",
"X-Custom-Header": "custom_value"
}
}
}'
For each provider type, you can set the following options:
- type (required) – the type of the provider ("openai", "anthropic", "google", or "openai-compatible").
- name (optional) – the name of the provider to be displayed in the models list. If not provided, the service will default to the provider key.
- baseUrl (required for the openai-compatible type, optional for other types) – the base URL of the provider. For openai-compatible providers, all LLM requests will be sent to the {baseUrl}/chat/completions endpoint. For other provider types, this overrides the default API URL.
- headers (optional) – additional headers sent with every request to the provider.
- apiKeys (optional for the openai-compatible type, required for other types) – the API keys for the provider.
By default, each provider uses its built-in list of available models. If you want to change the list of available models, or to define models for your custom provider, specify the models variable:
models: '[
{
"id": "model1",
"name": "Model 1",
"description": "Model 1 description",
"provider": "your-custom-provider",
"recommended": true,
"capabilities": {
"webSearch": true,
"reasoning": false
},
"features": ["conversations", "reviews", "actions"]
},
{
"id": "model2",
"name": "Model 2",
"description": "Model 2 description",
"provider": "your-custom-provider",
"recommended": true,
"capabilities": {
"webSearch": true,
"reasoning": false
},
"features": ["conversations", "reviews", "actions"]
}
]'
For each model you can set the following options:
- id (required) – the ID of the model. It must be unique across all models and is used to specify the model in calls to the LLM provider.
- provider (required) – the ID of the model's provider. It must match the provider key in the providers variable.
- description (required) – the description of the model to be displayed in the models list.
- name (optional) – the name of the model to be displayed in the models list. If not provided, the model ID is displayed instead.
- recommended (optional) – whether the model should be placed in the recommended models list (the list of models used by default by CKEditor 5).
- capabilities (optional) – the capabilities of the model. It should be an object with the following keys:
  - webSearch (optional, default: false) – whether the model can use the web search feature.
  - reasoning (optional, default: false) – whether the model can use the reasoning feature.
- contextLimits (optional) – the context limits of the model for a single conversation. It should be an object with the following keys:
  - maxContextLength (optional, default: 256000) – maximum context length in number of characters.
  - maxFiles (optional, default: 100) – maximum number of files in a context.
  - maxFileSize (optional, default: 5 MB for Anthropic models, 7 MB for other models) – maximum size of a single file, in bytes.
  - maxTotalFileSize (optional, default: 30 MB) – maximum total size of all files in a context, in bytes.
  - maxTotalPdfFilePages (optional, default: 100) – maximum total number of pages across all PDF files in a context.
- features (optional) – the list of features the model should be used for. Users cannot set the model for each feature themselves; for example, system reviews, system quick actions, and conversation title generation use models defined by the service. This option lets you specify which features the model should handle. You can assign the same feature to multiple models – models are sorted and used in the order they appear in your configuration. Available features:
  - conversations – all conversations-related features.
  - conversations.titleGeneration – conversation title generation.
  - reviews – all reviews.
  - reviews.correctness – the correctness review.
  - reviews.clarity – the clarity review.
  - reviews.readability – the readability review.
  - reviews.make-longer – the make-text-longer review.
  - reviews.make-shorter – the make-text-shorter review.
  - reviews.make-tone-casual – the casual-tone review.
  - reviews.make-tone-direct – the direct-tone review.
  - reviews.make-tone-friendly – the friendly-tone review.
  - reviews.make-tone-confident – the confident-tone review.
  - reviews.make-tone-professional – the professional-tone review.
  - reviews.translate – the translation review.
  - actions – all actions.
  - actions.make-longer – making text longer.
  - actions.make-shorter – making text shorter.
  - actions.make-tone-casual – making text more casual.
  - actions.make-tone-direct – making text more direct.
  - actions.make-tone-friendly – making text more friendly.
  - actions.make-tone-confident – making text more confident.
  - actions.make-tone-professional – making text more professional.
  - actions.translate – the translate action.
  - actions.continue – the continue action.
  - actions.fix-grammar – fixing grammar.
  - actions.improve-writing – improving writing.
Make sure your models support file handling capabilities. Otherwise, you should turn off file upload permissions for your users.
The features option is optional, but to run the service without errors, you must assign the conversations, reviews, and actions features to at least one model. Some features also require the model to support structured output.
To support different environments, CKEditor AI On-Premises does not bundle web scraping tooling. You can enable web scraping by connecting a tool of your choice (provided and operated by you) via a custom adapter.
To configure a custom endpoint for web scraping, define the following variables:
- webresources_enabled – must be set to true.
- webresources_endpoint – the endpoint URL of your custom gateway for downloading web resources.
- webresources_request_timeout – the maximum request timeout in milliseconds (optional).
WEBRESOURCES_ENABLED="true"
WEBRESOURCES_ENDPOINT=[WEBRESOURCES_ENDPOINT]
WEBRESOURCES_REQUEST_TIMEOUT=[WEBRESOURCES_REQUEST_TIMEOUT]
Your custom endpoint should accept POST requests with the following JSON body:
{
"url": "https://example.com/page-to-scrape"
}
Where:
- url (required) – the URL of the page to scrape.
Your endpoint should return a successful response in the following format:
{
"type": "text/html",
"data": "<html>...</html>"
}
Where:
- type (required) – the content type of the scraped data. Allowed values: text/html, text/markdown.
- data (required) – the scraped website content.
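The request/response contract above can be sketched as a minimal gateway using only the Python standard library. This is an illustrative sketch, not a production implementation: the fetch_page() body is a placeholder for the scraping tool you actually operate, and the port is arbitrary.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def fetch_page(url: str) -> dict:
    """Build the response body the service expects.

    Placeholder implementation -- replace the html value with the output
    of your own scraping tool.
    """
    html = f"<html><body>content of {url}</body></html>"
    return {"type": "text/html", "data": html}


class ScrapeHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and parse the JSON request body: {"url": "..."}
        length = int(self.headers.get("Content-Length", 0))
        try:
            url = json.loads(self.rfile.read(length))["url"]
        except (ValueError, KeyError):
            self.send_response(400)
            self.end_headers()
            return
        # Respond with {"type": ..., "data": ...} as JSON
        payload = json.dumps(fetch_page(url)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

# To serve: HTTPServer(("", 8080), ScrapeHandler).serve_forever()
```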
To support different environments, CKEditor AI On-Premises does not bundle web search tooling. You can enable web search by connecting a tool of your choice (provided and operated by you) via a custom adapter.
To configure a custom endpoint for web search, define the following variables:
- websearch_enabled – must be set to true.
- websearch_endpoint – the endpoint URL of your custom gateway for performing web searches.
- websearch_request_timeout – the maximum request timeout in milliseconds (optional).
- websearch_headers – additional headers sent with every web search request. Can be defined as a JSON object or a string in the format key1:value1,key2:value2... (optional).
WEBSEARCH_ENABLED="true"
WEBSEARCH_ENDPOINT=[WEBSEARCH_ENDPOINT]
WEBSEARCH_REQUEST_TIMEOUT=[WEBSEARCH_REQUEST_TIMEOUT]
WEBSEARCH_HEADERS=[WEBSEARCH_HEADERS]
Your custom endpoint should accept POST requests with the following JSON body:
{
"query": "search query string"
}
Where:
- query (required) – the search query string.
Your endpoint should return a successful response in the following format:
{
"results": [
{
"url": "https://example.com/article",
"text": "A snippet or excerpt of the content",
"title": "Article Title",
"author": "Author Name",
"publishedAt": "2024-12-12T10:30:00Z",
"favicon": "https://example.com/favicon.ico"
}
]
}
Where:
- results (required) – an array of web search results.
Each result object contains:
- url (required) – the URL of the search result.
- text (required) – a snippet or excerpt of the content.
- title (optional) – the title of the search result.
- author (optional) – the author of the content.
- publishedAt (optional) – the publication date and time of the content in ISO 8601 format.
- favicon (optional) – the URL of the favicon for the website.
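The search contract can be sketched the same way as the web scraping gateway, again with only the Python standard library. This is an illustrative sketch: run_search() returns hard-coded placeholder results, to be replaced with calls to the search tool you operate.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def run_search(query: str) -> dict:
    """Build the response body the service expects.

    Placeholder implementation -- replace the results list with real
    output from your search tool. Only url and text are required.
    """
    return {
        "results": [
            {
                "url": "https://example.com/article",
                "text": f"A snippet of content matching '{query}'",
                "title": "Article Title",
                "publishedAt": "2024-12-12T10:30:00Z",
            }
        ]
    }


class SearchHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and parse the JSON request body: {"query": "..."}
        length = int(self.headers.get("Content-Length", 0))
        try:
            query = json.loads(self.rfile.read(length))["query"]
        except (ValueError, KeyError):
            self.send_response(400)
            self.end_headers()
            return
        # Respond with {"results": [...]} as JSON
        payload = json.dumps(run_search(query)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

# To serve: HTTPServer(("", 8080), SearchHandler).serve_forever()
```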