# Configuring model preferences

On a self-hosted instance, you can configure whether Textual uses the auxiliary model, and how Textual uses models on GPU.

## Configuring whether to use an auxiliary model <a href="#config-model-aux" id="config-model-aux"></a>

To improve overall inference, you can configure whether Textual uses the `en_core_web_sm` auxiliary NER model.

### Entity types that the auxiliary model detects <a href="#config-model-aux-entity-types" id="config-model-aux-entity-types"></a>

The auxiliary model detects the following types:

* `EVENT`
* `LANGUAGE`
* `LAW`
* `NRP`
* `NUMERIC_VALUE`
* `PRODUCT`
* `WORK_OF_ART`

### Indicating whether to use the auxiliary model <a href="#config-model-aux-select" id="config-model-aux-select"></a>

To configure whether to use the auxiliary model, set the environment variable `TEXTUAL_AUX_MODEL`.

The available values are:

* `en_core_web_sm` - Use the `en_core_web_sm` auxiliary model. This is the default value.
* `none` - Do not use the auxiliary model.
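For example, in a Docker-based deployment, you might set the variable in the environment file that the Textual containers read. This is an illustrative fragment; the file name and surrounding deployment layout are assumptions, not part of the documented configuration:

```shell
# Illustrative environment fragment for a self-hosted Textual deployment.
# Use the default auxiliary model:
TEXTUAL_AUX_MODEL=en_core_web_sm

# Or, to disable the auxiliary model, use this instead:
# TEXTUAL_AUX_MODEL=none
```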

## Configuring model use for GPU <a href="#config-model-gpu" id="config-model-gpu"></a>

When you use a `textual-ml-gpu` container on accelerated hardware, you can configure:

* Whether to use the auxiliary model
* Whether to use the date synthesis model

### Indicating whether to use the auxiliary model for GPU <a href="#config-model-gpu-aux" id="config-model-gpu-aux"></a>

To configure whether to use the auxiliary model for GPU, you configure the [environment variable](https://docs.tonic.ai/textual/textual-install-administer/configuring-textual/textual-env-var-configure) `TEXTUAL_AUX_MODEL_GPU`.

By default, on GPU, Textual does not use the auxiliary model, and `TEXTUAL_AUX_MODEL_GPU` is `false`.

To load on GPU the auxiliary model that `TEXTUAL_AUX_MODEL` specifies, set `TEXTUAL_AUX_MODEL_GPU` to `true`.

When `TEXTUAL_AUX_MODEL_GPU` is `true`, and `TEXTUAL_MULTI_LINGUAL` is `true`, Textual also loads the multilingual models on GPU.
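Putting these settings together, a GPU container environment that loads both the auxiliary model and the multilingual models might look like the following. This is a sketch of the variable combinations described above, not a complete container configuration:

```shell
# Illustrative environment fragment for the textual-ml-gpu container.
TEXTUAL_AUX_MODEL_GPU=true   # load the auxiliary model on GPU
TEXTUAL_MULTI_LINGUAL=true   # with the above, also loads the multilingual models on GPU
```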

### Indicating whether to use the date synthesis model for GPU <a href="#config-model-gpu-date-synth" id="config-model-gpu-date-synth"></a>

By default, Textual loads the date synthesis model on GPU.

Note that this model requires 600 MB of GPU RAM for each machine learning worker.

To not load the date synthesis model on GPU, set the [environment variable](https://docs.tonic.ai/textual/textual-install-administer/configuring-textual/textual-env-var-configure) `TEXTUAL_DATE_SYNTH_GPU` to `false`.
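For example, to reclaim the GPU RAM that the date synthesis model would otherwise occupy, you could add the following to the GPU container's environment. The fragment is illustrative; only the variable and value come from the documentation:

```shell
# Illustrative: skip loading the date synthesis model on GPU,
# freeing roughly 600 MB of GPU RAM per machine learning worker.
TEXTUAL_DATE_SYNTH_GPU=false
```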
