Configuring the number of textual-ml workers
The TEXTUAL_ML_WORKERS environment variable specifies the number of workers to use within the textual-ml container. The default value is 1.
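For example, when running the container with Docker Compose, the variable can be set in the service definition. This is a minimal sketch; the image reference is a placeholder, not the actual Textual image:

```yaml
services:
  textual-ml:
    image: example/textual-ml:latest  # placeholder image reference
    environment:
      # Override the default of 1 worker
      TEXTUAL_ML_WORKERS: "4"
```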
Having multiple workers allows Textual to run NER model inferences in parallel. The number of required workers is also affected by the .
When you deploy Textual with Kubernetes on GPUs, parallelization allows the textual-ml container to fully utilize the GPU.
We recommend 3 GB of GPU RAM for each worker.
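As a sketch, the worker count and matching GPU allocation can be set in a Kubernetes deployment manifest. The container name, image, and resource values below are illustrative assumptions, not actual chart values; with 4 workers, budget roughly 4 × 3 GB = 12 GB of GPU RAM:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: textual-ml
spec:
  template:
    spec:
      containers:
        - name: textual-ml              # illustrative container name
          image: example/textual-ml     # placeholder image reference
          env:
            - name: TEXTUAL_ML_WORKERS
              value: "4"                # 4 workers ≈ 12 GB GPU RAM at 3 GB each
          resources:
            limits:
              nvidia.com/gpu: 1         # assumes the NVIDIA device plugin is installed
```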