# Using Textual with Snowpark Container Services directly

Snowpark Container Services (SPCS) allow developers to run containerized workloads directly within Snowflake. Because Tonic Textual is distributed using a private Docker repository, you can use these images in SPCS to run Textual workloads.

It is quicker to use the [Snowflake Native App](https://docs.tonic.ai/textual/textual-integrations/snowflake-native-app-and-spcs/about-the-snowflake-native-app), but SPCS allows for more customization.

## Add images to the repository <a href="#spcs-add-images" id="spcs-add-images"></a>

To use the Textual images, you must add them to Snowflake. The Snowflake [documentation](https://docs.snowflake.com/en/developer-guide/snowpark-container-services/working-with-registry-repository) and [tutorial](https://docs.snowflake.com/en/developer-guide/snowpark-container-services/tutorials/tutorial-1) walk through the process in detail, but the basic steps are as follows:

1. [Set up an image repository in Snowflake](https://docs.snowflake.com/en/developer-guide/snowpark-container-services/tutorials/common-setup#create-snowflake-objects).
2. To pull down the required images, you must have access to our private Docker image repository on [Quay.io](http://quay.io/).  You should have been provided credentials during onboarding.\
   \
   If you require new credentials, or you experience issues accessing the repository, contact <support@tonic.ai>.\
   \
   Once you have access, pull down the following images:
   * `textual-snowflake`
   * Either `textual-ml` or `textual-ml-gpu`, depending on whether you plan to use a GPU compute pool
3. [Use the Docker CLI to upload the images to the image repository.](https://docs.snowflake.com/en/developer-guide/snowpark-container-services/tutorials/tutorial-1#build-an-image-and-upload)
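The pull, tag, and push sequence might look something like the following sketch. The Quay.io repository path (`quay.io/tonicai/...`) and the Snowflake registry hostname are placeholders based on typical naming; substitute the values from your onboarding credentials and your Snowflake account and image repository.

```shell
# Log in to the Tonic private repository with the credentials from onboarding
docker login quay.io

# Pull the Textual images
# (use textual-ml-gpu instead of textual-ml for a GPU compute pool)
docker pull quay.io/tonicai/textual-snowflake:latest
docker pull quay.io/tonicai/textual-ml:latest

# Log in to your Snowflake image registry
docker login <org>-<account>.registry.snowflakecomputing.com

# Tag the images for your Snowflake image repository, then push them
docker tag quay.io/tonicai/textual-snowflake:latest \
  <org>-<account>.registry.snowflakecomputing.com/your_db/your_schema/your_image_repository/textual-snowflake:latest
docker push \
  <org>-<account>.registry.snowflakecomputing.com/your_db/your_schema/your_image_repository/textual-snowflake:latest
```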

The images are now available in Snowflake.

## Create the API service <a href="#spcs-create-api-service" id="spcs-create-api-service"></a>

The API service exposes the functions that are used to redact sensitive values in Snowflake. The service must be attached to a compute pool. You can scale the number of instances as needed, but a single API instance is usually sufficient.

```sql
DROP SERVICE IF EXISTS api_service;
CREATE SERVICE api_service
  IN COMPUTE POOL compute_pool
  FROM SPECIFICATION $$
    spec:
      containers:
      - name: api_container
        image: /your_db/your_schema/your_image_repository/textual-snowflake:latest
        env:
          ML_SERVICE_URL: https://ml-service:7701
      endpoints:
      - name: api_endpoint
        port: 9002
        protocol: HTTP
  $$
  MIN_INSTANCES=1
  MAX_INSTANCES=1;
```

## Create the machine learning (ML) service <a href="#spcs-create-ml-service" id="spcs-create-ml-service"></a>

Next, you create the ML service, which recognizes personally identifiable information (PII) and other sensitive values in text. Because the ML models are computationally intensive, this is the service that is more likely to require additional instances.

```sql
DROP SERVICE IF EXISTS ml_service;
CREATE SERVICE ml_service
  IN COMPUTE POOL compute_pool
  FROM SPECIFICATION $$
    spec:
      containers:
      - name: ml_container
        image: /your_db/your_schema/your_image_repository/textual-ml:latest
      endpoints:
      - name: ml_endpoint
        port: 7701
        protocol: TCP
  $$
  MIN_INSTANCES=1
  MAX_INSTANCES=1;
```
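If you pulled `textual-ml-gpu` for use with a GPU compute pool, the service definition is the same apart from the image name and the compute pool. A sketch, assuming a GPU compute pool named `gpu_compute_pool` (the pool name and instance counts are illustrative):

```sql
DROP SERVICE IF EXISTS ml_service;
CREATE SERVICE ml_service
  IN COMPUTE POOL gpu_compute_pool
  FROM SPECIFICATION $$
    spec:
      containers:
      - name: ml_container
        image: /your_db/your_schema/your_image_repository/textual-ml-gpu:latest
      endpoints:
      - name: ml_endpoint
        port: 7701
        protocol: TCP
  $$
  MIN_INSTANCES=1
  MAX_INSTANCES=2;
```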

## Create functions <a href="#spcs-create-functions" id="spcs-create-functions"></a>

You can create custom SQL functions that use your API and ML services. These functions are accessible from directly within Snowflake.

{% code overflow="wrap" %}

```sql
CREATE OR REPLACE FUNCTION textual_redact(input_text STRING, config STRING)
  RETURNS STRING
  SERVICE = your_db.your_schema.api_service
  ENDPOINT = 'api_endpoint'
  AS '/api/redact';

CREATE OR REPLACE FUNCTION textual_redact(input_text STRING)
  RETURNS STRING
  SERVICE = your_db.your_schema.api_service
  CONTEXT_HEADERS = (current_user)
  ENDPOINT = 'api_endpoint'
  AS '/api/redact';
  
CREATE OR REPLACE FUNCTION textual_parse(path VARCHAR, stage_name VARCHAR, md5sum VARCHAR)
  RETURNS STRING
  SERVICE = your_db.your_schema.api_service
  CONTEXT_HEADERS = (current_user)
  ENDPOINT = 'api_endpoint'
  MAX_BATCH_ROWS = 10
  AS '/api/parse/start';
```

{% endcode %}
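Other roles need usage privileges before they can call these functions. A minimal sketch, assuming a hypothetical consumer role named `analyst_role`:

```sql
GRANT USAGE ON DATABASE your_db TO ROLE analyst_role;
GRANT USAGE ON SCHEMA your_db.your_schema TO ROLE analyst_role;
GRANT USAGE ON FUNCTION your_db.your_schema.textual_redact(STRING) TO ROLE analyst_role;
GRANT USAGE ON FUNCTION your_db.your_schema.textual_redact(STRING, STRING) TO ROLE analyst_role;
```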

## Example usage <a href="#spcs-example-usage" id="spcs-example-usage"></a>

It can take a few minutes for the containers to start. After they are running, you can use the functions that you created in Snowflake.
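To check whether the containers are ready, you can query the service status with Snowflake's `SYSTEM$GET_SERVICE_STATUS` function. The service names match the ones created above:

```sql
SELECT SYSTEM$GET_SERVICE_STATUS('your_db.your_schema.api_service');
SELECT SYSTEM$GET_SERVICE_STATUS('your_db.your_schema.ml_service');
```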

To test the functions, use an existing table. You can also create this simple test table:

```sql
CREATE TABLE Messages (
    Message TEXT
);

INSERT INTO Messages (Message) VALUES ('Hi my name is John Smith');
INSERT INTO Messages (Message) VALUES ('Hi John, mine is Jane Doe');
```

You use the function in the same way as any other user-defined function. You can pass in additional configuration to determine how to process [specific types of sensitive values](https://docs.tonic.ai/textual/textual-integrations/snowflake-native-app-and-spcs/broken-reference).

For example:

{% code overflow="wrap" %}

```sql
SELECT Message,
       textual_redact(Message) AS REDACTED,
       textual_redact(Message, '{"NAME_GIVEN": "Synthesis", "NAME_FAMILY": "Off"}') AS SYNTHESIZED
FROM Messages;
```

{% endcode %}

By default, the function redacts the entity values. In other words, it replaces each value with a placeholder that includes the entity type. `Synthesis` indicates to replace the value with a realistic replacement value. `Off` indicates to leave the value unchanged.

The `textual_redact` function works identically to the [`textual_redact` function in the Snowflake Native App](https://docs.tonic.ai/textual/textual-integrations/snowflake-app-use#snowflake-app-textual-redact).

The response from the above example should look something like this:

<table><thead><tr><th valign="top">Message</th><th valign="top">Redacted</th><th valign="top">Synthesized</th></tr></thead><tbody><tr><td valign="top">Hi my name is John Smith</td><td valign="top">Hi my name is [NAME_GIVEN_Kx0Y7] [NAME_FAMILY_s9TTP0]</td><td valign="top">Hi my name is Lamar Smith</td></tr><tr><td valign="top">Hi John, mine is Jane Doe</td><td valign="top">Hi [NAME_GIVEN_Kx0Y7], mine is [NAME_GIVEN_veAy9] [NAME_FAMILY_6eC2]</td><td valign="top">Hi Lamar, mine is Doris Doe</td></tr></tbody></table>

The `textual_parse` function works identically to the [`textual_parse` function in the Snowflake Native App](https://docs.tonic.ai/textual/textual-integrations/snowflake-app-use#snowflake-app-textual-parse).
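As a sketch of calling `textual_parse`, assume a stage named `my_stage` with a directory table enabled; the `RELATIVE_PATH` and `MD5` columns come from the stage's directory table:

```sql
SELECT RELATIVE_PATH,
       textual_parse(RELATIVE_PATH, 'my_stage', MD5) AS PARSE_RESULT
FROM DIRECTORY(@my_stage);
```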
