# Enabling the Textual Agent

To enable the Textual Agent on a self-hosted instance of Tonic Textual, you must configure the connection to the AI model that you want to use.

Textual supports:

* Any OpenAI-compatible endpoint
* Gemini
* Amazon Bedrock

## Selecting the model provider

To identify the provider, set the [environment variable](/textual/textual-install-administer/configuring-textual/textual-env-var-configure.md) `CHAT_LLM_PROVIDER`.

The available values are:

* `openai` - Use an OpenAI-compatible model.
* `gemini` - Use a Gemini model.
* `bedrock` - Use a model on Amazon Bedrock.

## Connecting to an OpenAI-compatible model

If you set `CHAT_LLM_PROVIDER` to `openai`, then to configure the connection to an OpenAI-compatible model, set the following environment variables:

* `CHAT_MODEL_ENDPOINT` - The endpoint URL for the OpenAI service.
* `CHAT_MODEL_NAME` - The name of the OpenAI-compatible model to use.
* `CHAT_API_KEY` - The API key to use for authentication.
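For example, the variables might be set in the shell environment of the Textual host as follows. The endpoint URL, model name, and key shown here are placeholder values, not values prescribed by Textual:

```shell
# Sketch of an OpenAI-compatible configuration.
# Replace the values with your own endpoint, model, and key.
export CHAT_LLM_PROVIDER=openai
export CHAT_MODEL_ENDPOINT=https://api.openai.com/v1   # placeholder endpoint URL
export CHAT_MODEL_NAME=gpt-4o                          # placeholder model name
export CHAT_API_KEY=<your-api-key>
```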

## Connecting to a Gemini model

If you set `CHAT_LLM_PROVIDER` to `gemini`, then to configure the connection to a Gemini model, set the following environment variables:

* `CHAT_MODEL_ENDPOINT` - The endpoint URL for the Gemini service.
* `CHAT_MODEL_NAME` - The name of the Gemini model to use.
* `CHAT_API_KEY` - The Gemini API key to use for authentication.
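A Gemini configuration follows the same pattern. The endpoint and model name below are placeholders; substitute the values for your Gemini service:

```shell
# Sketch of a Gemini configuration.
export CHAT_LLM_PROVIDER=gemini
export CHAT_MODEL_ENDPOINT=<gemini-endpoint-url>   # placeholder endpoint URL
export CHAT_MODEL_NAME=<gemini-model-name>         # placeholder, e.g. a gemini-* model
export CHAT_API_KEY=<your-gemini-api-key>
```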

## Connecting to a model on Amazon Bedrock

If you set `CHAT_LLM_PROVIDER` to `bedrock`, then to configure the connection to a model on Amazon Bedrock, set the following environment variables:

* `CHAT_MODEL_NAME` - The name of the model to use.

Optionally, also set:

* `CHAT_AMAZON_BEDROCK_REGION` - The AWS Region to use for Amazon Bedrock requests. If you do not provide this, then Textual uses the AWS Region that is set as the value of `AWS_DEFAULT_REGION`.
* `CHAT_AMAZON_BEDROCK_MAX_TOKENS` - The maximum number of output tokens for Amazon Bedrock responses. The default value is 64000.
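A Bedrock configuration might look like the following. The model ID is a placeholder, and the two optional variables are shown with their fallback behavior noted in comments:

```shell
# Sketch of an Amazon Bedrock configuration.
export CHAT_LLM_PROVIDER=bedrock
export CHAT_MODEL_NAME=<bedrock-model-id>      # placeholder Bedrock model identifier

# Optional overrides:
export CHAT_AMAZON_BEDROCK_REGION=us-east-1    # omit to fall back to AWS_DEFAULT_REGION
export CHAT_AMAZON_BEDROCK_MAX_TOKENS=64000    # omit to use the default of 64000
```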


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.tonic.ai/textual/textual-install-administer/configuring-textual/enable-and-configure-textual-features/enabling-the-textual-agent.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.
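As a sketch, such a request could be issued with `curl`, using `--get` with `--data-urlencode` so that the question is URL-encoded into the `ask` query parameter. The question shown is a hypothetical example:

```shell
# Query this documentation page with a natural-language question.
# --get appends the --data-urlencode value to the URL as a query string.
curl --get \
  --data-urlencode "ask=Which environment variables configure the Bedrock provider?" \
  "https://docs.tonic.ai/textual/textual-install-administer/configuring-textual/enable-and-configure-textual-features/enabling-the-textual-agent.md"
```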

Use this mechanism when the answer is not explicitly present in the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
