Configuration
CKEditor AI On-Premises is in early access and will launch in Q1 2026. Please note that some functionality may change or not work as expected.
Also, selected capabilities available on SaaS are not yet available for CKEditor AI On-Premises.
You can configure CKEditor AI On-Premises using environment variables. Below you will find available environment variables with descriptions.
The license_key variable is used to verify that you own the rights to run CKEditor AI On-Premises. To get your license_key, please contact us. If you leave this option empty or provide an invalid key, CKEditor AI On-Premises will not launch.
The environments_management_secret_key variable should be a hard-to-guess string, preferably generated by an external password generator. It will be used to access the Cloud Services Management Panel.
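For example, when passing these settings through a docker-compose-style environment section (using the upper-case variable form shown in the Redis Cluster examples below), the configuration could look as follows. Both values are placeholders:

# Placeholder values: use your actual license key and a generated secret
LICENSE_KEY: "your-license-key"
ENVIRONMENTS_MANAGEMENT_SECRET_KEY: "long-randomly-generated-string"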
To configure the MySQL database, you should first set the database_driver variable to mysql.
Then, you can provide:
- database_host and database_port to configure the database address.
- database_user and database_password to configure the database user.
- database_database to set the database that should be used by CKEditor AI On-Premises.
If your database connection should be encrypted, use the database_ssl_ca, database_ssl_key, and database_ssl_cert configurations.
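A minimal MySQL example in the same environment-variable form; hostnames, credentials, and certificate paths are placeholders:

DATABASE_DRIVER: "mysql"
DATABASE_HOST: "mysql.example.com"
DATABASE_PORT: 3306
DATABASE_USER: "ckeditor-ai"
DATABASE_PASSWORD: "example-password"
DATABASE_DATABASE: "ckeditor-ai"
# Only when the connection should be encrypted:
# DATABASE_SSL_CA: "/certs/ca.pem"
# DATABASE_SSL_KEY: "/certs/client-key.pem"
# DATABASE_SSL_CERT: "/certs/client-cert.pem"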
To configure the Postgres database instead of MySQL, you should first set the database_driver variable to postgres.
Then, you can provide:
- database_host and database_port to configure the database address.
- database_user and database_password to configure the database user.
- database_database and database_schema to indicate which schema should be used by the On-Premises server.
If your database connection should be encrypted, use the database_ssl_ca, database_ssl_key, and database_ssl_cert configurations.
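An analogous Postgres example with placeholder values:

DATABASE_DRIVER: "postgres"
DATABASE_HOST: "postgres.example.com"
DATABASE_PORT: 5432
DATABASE_USER: "ckeditor-ai"
DATABASE_PASSWORD: "example-password"
DATABASE_DATABASE: "ckeditor-ai"
DATABASE_SCHEMA: "public"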
Besides the SQL database, you need to configure the Redis database using:
- redis_host and redis_port to configure the database address.
- redis_password and redis_user to configure the database credentials. Both configuration options are optional.
You can also provide redis_db if you don’t want to use the default database number, which is set to 1.
If you have a problem connecting to IPv6, try setting redis_ip_family to 6.
If your database connection should be encrypted, use the redis_tls_ca, redis_tls_key, and redis_tls_cert configurations, or set redis_tls_enable to true if you don’t use a custom certificate.
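A sample single-node Redis configuration with placeholder values; the commented-out lines show the optional settings described above:

REDIS_HOST: "redis.example.com"
REDIS_PORT: 6379
# Optional credentials and database number:
# REDIS_USER: "ckeditor-ai"
# REDIS_PASSWORD: "example-password"
# REDIS_DB: 1
# Optional TLS without a custom certificate:
# REDIS_TLS_ENABLE: "true"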
To set up a connection with a Redis Cluster, nodes need to be provided as the REDIS_CLUSTER_NODES variable.
REDIS_CLUSTER_NODES - required (for Redis Cluster connection)
REDIS_IP_FAMILY - optional (required only when using an IPv6 domain in `REDIS_CLUSTER_NODES`)
The REDIS_CLUSTER_NODES variable needs to contain a comma-separated list of nodes in the following format:
"IP:PORT:[optional PASSWORD],IP:PORT:[optional PASSWORD]"
To understand the connection string format, check the examples below:
# IPv6
REDIS_CLUSTER_NODES: "[0:0:0:0:0:0:0:1]:7000,[0:0:0:0:0:0:0:1]:7001,[0:0:0:0:0:0:0:1]:7002"
# IPv6 with a password
REDIS_CLUSTER_NODES: "[0:0:0:0:0:0:0:1]:7000:password1,[0:0:0:0:0:0:0:1]:7001:password2,[0:0:0:0:0:0:0:1]:7002:password3"
# Domain name
REDIS_CLUSTER_NODES: "example.redis.server.com:7000,example.redis.server.com:7001,example.redis.server.com:7002"
# Domain name with IPv6 support
REDIS_IP_FAMILY: 6
REDIS_CLUSTER_NODES: "example.ipv6.redis.server.com:7000,example.ipv6.redis.server.com:7001,example.ipv6.redis.server.com:7002"
# Domain name with a password
REDIS_CLUSTER_NODES: "example.redis.server.com:7000:password1,example.redis.server.com:7001:password2,example.redis.server.com:7002:password3"
To configure S3 as your file storage, you should first set the storage_driver variable to s3.
Then, you can provide:
- storage_access_key_id and storage_secret_access_key to authorize the service.
- storage_bucket to set the bucket where the files should be stored.
If you use an S3-compatible server, like MinIO, you can set the address with the storage_endpoint variable.
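An example S3 configuration with placeholder credentials and bucket name:

STORAGE_DRIVER: "s3"
STORAGE_ACCESS_KEY_ID: "example-access-key-id"
STORAGE_SECRET_ACCESS_KEY: "example-secret-access-key"
STORAGE_BUCKET: "ckeditor-ai-files"
# Only for S3-compatible servers such as MinIO:
# STORAGE_ENDPOINT: "https://minio.example.com"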
To configure Azure Blob Storage as your file storage, you should first set the storage_driver variable to azure.
Then, you can provide:
- storage_account_name and storage_account_key to authorize the service.
- storage_container to set the container where the files should be stored.
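An example Azure Blob Storage configuration with placeholder values:

STORAGE_DRIVER: "azure"
STORAGE_ACCOUNT_NAME: "exampleaccount"
STORAGE_ACCOUNT_KEY: "example-account-key"
STORAGE_CONTAINER: "ckeditor-ai-files"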
To configure the filesystem as your file storage, you should first set the storage_driver variable to filesystem.
Then you can provide the path to the directory where the files should be stored using the storage_location variable.
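For example, to store files in a local directory (the path below is a placeholder):

STORAGE_DRIVER: "filesystem"
STORAGE_LOCATION: "/var/lib/ckeditor-ai/files"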
To configure MySQL or Postgres as your file storage, you should set the storage_driver variable to database. When using SQL as a storage driver, the application will use the configuration provided in the database_* variables.
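A minimal example; no extra storage variables are needed because the connection comes from the database configuration:

# Files are stored in the SQL database configured via the DATABASE_* variables
STORAGE_DRIVER: "database"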
To configure LLM providers, you need to obtain API keys for the selected providers and pass them to CKEditor AI On-Premises. CKEditor AI On-Premises supports the following providers:
- OpenAI – openai_api_keys. If you want to use multiple API keys, then pass all the keys separated by commas, like key1,key2,key3.
- Anthropic – anthropic_api_keys. If you want to use multiple API keys, then pass all the keys separated by commas, like key1,key2,key3.
- Google (Gemini family models) – google_api_keys. If you want to use multiple API keys, then pass all the keys separated by commas, like key1,key2,key3.
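For example, to configure keys for the providers you want to use (all keys below are placeholders):

OPENAI_API_KEYS: "key1,key2,key3"
ANTHROPIC_API_KEYS: "key1"
GOOGLE_API_KEYS: "key1"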
We use each provider’s default URLs. If you need to overwrite the base URL, you can do so by setting it in the configuration:
- OpenAI – openai_base_url. If you need to pass additional HTTP headers, pass them under the openai_headers key in the form of an object or string in the following format: key1:value1,key2:value2.
- Anthropic – anthropic_base_url. If you need to pass additional HTTP headers, pass them under the anthropic_headers key in the form of an object or string in the following format: key1:value1,key2:value2.
- Google (Gemini family models) – google_base_url. If you need to pass additional HTTP headers, pass them under the google_headers key in the form of an object or string in the following format: key1:value1,key2:value2.
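As an illustration, the following overrides the OpenAI base URL with a hypothetical proxy address and passes extra headers in the string format; the URL and header names are placeholders:

OPENAI_BASE_URL: "https://llm-proxy.example.com/v1"
OPENAI_HEADERS: "X-Proxy-Token:example-token,X-Request-Source:ckeditor-ai"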
Currently, only custom web scraping solutions are supported.
To configure a custom endpoint for web scraping, define the following variables:
- webresources_enabled - must be set to true.
- webresources_endpoint - the endpoint URL of your custom gateway for downloading web resources.
- webresources_request_timeout - maximum request timeout in milliseconds (optional).
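An example configuration pointing at a hypothetical scraping gateway:

WEBRESOURCES_ENABLED: "true"
WEBRESOURCES_ENDPOINT: "https://scraper.example.com/scrape"
# Optional, in milliseconds:
# WEBRESOURCES_REQUEST_TIMEOUT: 10000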
Your custom endpoint should accept POST requests with the following JSON body:
{
"url": "https://example.com/page-to-scrape"
}
Where:
- url (required) - the URL of the page to scrape.
Your endpoint should return a successful response in the following format:
{
"type": "text/html",
"data": "<html>...</html>"
}
Where:
- type (required) - the content type of the scraped data. Allowed values: text/html, text/markdown.
- data (required) - the scraped website content.
Currently, only custom web search solutions are supported.
To configure a custom endpoint for web search, define the following variables:
- websearch_enabled - must be set to true.
- websearch_endpoint - the endpoint URL of your custom gateway for performing web search.
- websearch_request_timeout - maximum request timeout in milliseconds (optional).
- websearch_headers - additional headers sent with every web search request. Can be defined as a JSON object or a string in the format key1:value1,key2:value2... (optional).
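An example configuration pointing at a hypothetical search gateway, with an optional header in the string format:

WEBSEARCH_ENABLED: "true"
WEBSEARCH_ENDPOINT: "https://search.example.com/search"
# Optional:
# WEBSEARCH_REQUEST_TIMEOUT: 10000
# WEBSEARCH_HEADERS: "X-Api-Key:example-token"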
Your custom endpoint should accept POST requests with the following JSON body:
{
"query": "search query string"
}
Where:
- query (required) - the search query string.
Your endpoint should return a successful response in the following format:
{
"results": [
{
"url": "https://example.com/article",
"text": "A snippet or excerpt of the content",
"title": "Article Title",
"author": "Author Name",
"publishedAt": "2024-12-12T10:30:00Z",
"favicon": "https://example.com/favicon.ico"
}
]
}
Where:
- results (required) - an array of web search results.
Each result object contains:
- url (required) - the URL of the search result.
- text (required) - a snippet or excerpt of the content.
- title (optional) - the title of the search result.
- author (optional) - the author of the content.
- publishedAt (optional) - the publication date and time of the content in ISO 8601 format.
- favicon (optional) - the URL of the favicon for the website.
CKEditor AI On-Premises can be integrated with Collaboration Server On-Premises. To do so, both services must use the same database and Redis configurations, so make sure these settings refer to the same resources.
When enabling CKEditor AI On-Premises together with Collaboration Server On-Premises, the enable_rtc_synchronization setting in the CKEditor AI On-Premises configuration must be set to true.
Setting this option makes all environments created in Collaboration Server On-Premises available in CKEditor AI On-Premises. It also disables the Management Panel for CKEditor AI On-Premises, so all CKEditor AI On-Premises configuration is handled via the Collaboration Server On-Premises Management Panel.
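For example:

# Hand over environment management to Collaboration Server On-Premises
ENABLE_RTC_SYNCHRONIZATION: "true"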
CKEditor AI On-Premises integration with Collaboration Server requires Collaboration Server On-Premises newer than 4.25.3.