Writing output to a container repository


Requires Kubernetes.

For a Structural instance that is deployed on Docker, the Container Repository option is hidden unless you install and configure a separate Kubernetes cluster to use. For more information, go to Setting up a Kubernetes cluster to use to write output data to a container repository.

For information about required Kubernetes permissions, go to Required access to write destination data to a container repository.

Not compatible with upsert.

Not compatible with Preserve Destination or Incremental table modes.

Only supported for PostgreSQL, MySQL, and SQL Server.

You can configure a workspace to write destination data to a container repository instead of to a database server.
For an overview of writing destination data to container artifacts, you can also view the video tutorial.

When Structural writes data generation output to a repository, it writes the destination data to a container volume. From the list of container artifacts, you can copy the volume digest and download a Docker Compose file that provides connection settings for the database on the volume. Structural generates the Compose file when you make the request to download it. For more information about how to access the container artifacts, go to Viewing and downloading container artifacts.

[Diagram: how data is written to and accessed from a container artifact]
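As a minimal sketch of how you might use the downloaded artifacts, assuming a hypothetical name for the downloaded Compose file:

# Start the containerized destination database that the Compose file describes
docker compose -f structural-artifact-compose.yaml up -d

# Confirm that the database container is running
docker compose -f structural-artifact-compose.yaml ps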

You can also use the data volume to start a Tonic Ephemeral database. However, if the data is larger than 10 GB, we recommend that you write the data to an Ephemeral user snapshot instead. For information about writing to an Ephemeral snapshot, go to Writing output to Tonic Ephemeral.

Indicating to write destination data to container artifacts

Under Destination Settings, to indicate that Structural should write the destination data to container artifacts, click Container Repository.

You can switch between writing to a database server and writing to a container repository at any time. Structural preserves the configuration details for both options. When you run data generation, it uses the currently selected option for the workspace.

Identifying the base image to use to create the container artifacts

From the Database Image dropdown list, select the image to use to create the container artifacts.

Select an image version that is compatible with the version of the database that is used in the workspace.

Providing a customization file for MySQL

For a MySQL workspace, you can provide a customization file that helps to ensure that the temporary destination database is configured correctly.

To provide the customization details:

  1. Toggle Use customization to the on position.

  2. In the text area, paste the contents of the customization file.
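As an illustration only, and assuming that the customization file uses MySQL's standard my.cnf options format (the source does not specify the expected format), the pasted contents might look like the following, with hypothetical settings:

[mysqld]
max_allowed_packet=256M
sql_mode=STRICT_TRANS_TABLES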

Setting the location for the container artifacts

To provide the location where Structural publishes the container artifacts:

  1. In the Registry field, type the path to the container registry where Structural publishes the data volume.

    Do not include the HTTP protocol, such as http:// or https://.

  2. In the Repository Path field, provide the path within the registry where Structural publishes the data volume.

    For a Google Artifact Registry (GAR) repository, the path format is PROJECT-ID/REPOSITORY/IMAGE. For more information about repository and image names, go to the Google Cloud documentation.
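For example, for a hypothetical GAR repository, the two fields might contain:

Registry: us-central1-docker.pkg.dev
Repository Path: my-project/tonic-artifacts/destination-db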

Providing the credentials to write to the registry

You next provide the credentials that Structural uses to read from and write to the registry.

When you provide the registry, Structural detects whether the registry is an Amazon Elastic Container Registry (Amazon ECR) registry, a Google Artifact Registry (GAR) registry, or a different container solution, and displays the appropriate fields for that registry type.

Fields for registries other than Amazon ECR or GAR

For a registry other than an Amazon ECR or a GAR registry, the credentials can be either a username and access token, or a secret.

The option to use a secret is not available on Structural Cloud.

In general, the credentials must be for a user that has read and write permissions for the registry.

To use a username and access token:

  1. Click Access token.

  2. In the Username field, provide the username.

  3. In the Access Token field, provide the access token.

To use a secret:

  1. Click Secret name.

  2. In the Secret Name field, provide the name of the secret.

The secret is the name of a Kubernetes secret that lives on the pod that the Structural worker runs on. The secret type must be kubernetes.io/dockerconfigjson. The Kubernetes documentation provides information on how to create a registry credentials secret.
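As a minimal sketch, assuming hypothetical names and registry, a compatible secret can be created with kubectl. The docker-registry secret type produces a secret of type kubernetes.io/dockerconfigjson:

# Create a registry credentials secret that Structural can reference by name
kubectl create secret docker-registry tonic-registry-credentials \
  --docker-server=registry.example.com \
  --docker-username=<username> \
  --docker-password=<access-token>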

Azure Container Registry (ACR) permission requirements

For ACR, the provided credentials must be for a service principal that has sufficient permissions on the registry. For Structural, the service principal must at least have the permissions that are associated with the AcrPush role.
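As a sketch, assuming a hypothetical service principal name and registry, you might create and authorize the service principal with the Azure CLI:

# Create a service principal and grant it the AcrPush role on the registry
az ad sp create-for-rbac \
  --name tonic-structural-acr \
  --role AcrPush \
  --scopes $(az acr show --name myregistry --query id --output tsv)

The command output includes the appId and password values, which typically serve as the registry username and access token.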

Providing a service file for GAR

Structural only supports Google Artifact Registry (GAR). It does not support Google Container Registry (GCR).

For a GAR registry, you upload a service account file, which is a JSON file that contains credentials that provide access to Google Cloud Platform (GCP).

The associated service account must have the Artifact Registry Writer role.

For Service Account File, to search for and select the file, click Browse.

Amazon ECR registries

For an Amazon ECR registry, you can do one of the following:

  • Provide the AWS access key and secret key that are associated with the IAM user that connects to the registry.

  • Provide an assumed role.

  • (Self-hosted only) Use the credentials configured in the Structural environment settings TONIC_AWS_ACCESS_KEY_ID and TONIC_AWS_SECRET_ACCESS_KEY.

  • (Self-hosted only) If Structural is deployed in Amazon Elastic Kubernetes Service (Amazon EKS), use the AWS credentials that live on the EC2 instance.

Using AWS access keys

To provide an AWS access key and secret key:

  1. Click Access Keys.

  2. In the Access Key field, enter an AWS access key that is associated with an IAM user or role.

  3. In the Secret Key field, enter the secret key that is associated with the access key.

Using an assumed role

To provide an assumed role:

  1. Click Assume Role.

  2. In the Role ARN field, provide the Amazon Resource Name (ARN) for the role.

  3. In the Session Name field, provide the role session name. If you do not provide a session name, then Structural automatically generates a default unique value. The generated value begins with TonicStructural.

  4. In the Duration (in seconds) field, provide the maximum length in seconds of the session. The default is 3600, indicating that the session can be active for up to 1 hour. The provided value must be less than the maximum session duration that is allowed for the role.

For the assumed role, Structural generates the external ID that is used in the assume role request. Your role’s trust policy must be configured to condition on your unique external ID.

Here is an example trust policy:

{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Principal": {
      "AWS": "<originating-account-id>"
    },
    "Action": "sts:AssumeRole",
    "Condition": {
      "StringEquals": {
        "sts:ExternalId": "<external-id>"
      }
    }
  }
}
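To sanity-check a trust policy like this one, you might attempt to assume the role from the originating account with the AWS CLI, supplying the external ID that the trust policy expects (the ARN and values are hypothetical):

aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/tonic-structural-datapacker \
  --role-session-name TonicStructuralTest \
  --external-id <external-id>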

Using the credentials from the environment settings (self-hosted only)

On a self-hosted instance, to use the credentials configured in the environment settings, click Environment Variables.

Using the AWS credentials from the EC2 instance (self-hosted only)

On a self-hosted instance, to use the AWS credentials from the EC2 instance, click Instance Profile.

Required permissions for the IAM user

The IAM user must have permission to list, push, and pull images from the registry. The following example policy includes the required permissions.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ManageTonicRepositoryContents",
      "Effect": "Allow",
      "Action": [
        "ecr:DescribeRepositories",
        "ecr:ListImages",
        "ecr:DescribeImages",
        "ecr:BatchGetImage",
        "ecr:BatchCheckLayerAvailability",
        "ecr:InitiateLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:CompleteLayerUpload",
        "ecr:PutImage"
      ],
      "Resource": [
        "arn:aws:ecr:<region>:<account_id>:repository/<optional name filter>"
      ]
    },
    {
      "Sid": "GetAuthorizationToken",
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken"
      ],
      "Resource": "*"
    }
  ]
}

For additional security, a repository name filter allows you to limit access to only the repositories that are used in Structural. You need to make sure that the repositories that you create for Structural match the filter.

For example, you could prefix Structural repository names with tonic-. In the policy, you include a filter based on the tonic- prefix:

"Resource": [
  "arn:aws:ecr:<region>:<account_id>:repository/tonic-*"
]

Providing tags for the container artifacts

In the Tags field, provide the tag values to apply to the container artifacts. You can also change the tag configuration for individual data generation jobs.

Use commas to separate the tags.

A tag cannot contain spaces. Structural provides the following built-in values for you to use in tags:

  • {workspaceId} - The identifier of the workspace.

  • {workspaceName} - The name of the workspace.

  • {timestamp} - The timestamp when the data generation job that created the artifact completed.

  • {jobId} - The identifier of the data generation job that created the artifact.

For example, the following creates a tag that contains the workspace name, job identifier, and timestamp:

{workspaceName}_{jobId}_{timestamp}

To also tag the artifacts as latest, check the Tag as "latest" in your repository checkbox.

Specifying custom resources for the Kubernetes pods

Optionally, you can configure custom resource values for the Kubernetes pods: the ephemeral storage, the memory, and the CPU millicores.

To provide custom resources:

  1. Toggle Set custom pod resources to the on position.

  2. Under Storage Size:

    1. In the field, provide the number of megabytes or gigabytes of storage.

    2. From the dropdown list, select the unit to use.

    The storage can be between 32 MB and 25 GB.

  3. Under Memory Size:

    1. In the field, provide the number of megabytes or gigabytes of RAM.

    2. From the dropdown list, select the unit to use.

    The memory can be between 512 MB and 4 GB.

  4. Under Processor Size:

    1. In the field, provide the number of millicores.

    2. From the dropdown list, select the unit.

    The processor size can be between 250m and 1000m.

Setting a custom database name

Only available for PostgreSQL and SQL Server. Not available for MySQL.

In the Custom Database Name field, provide the name to use for the destination database.

If you do not provide a custom database name, then the destination database uses the same name as the source database.

Setting a custom database user password

In the Custom Password field, provide the password for the destination database user.

If you do not provide a password, then Structural generates a password.

The destination database username is always the default user for the database:

  • For PostgreSQL, postgres

  • For MySQL, root

  • For SQL Server, sa
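For example, for a PostgreSQL artifact, you might connect to the running container with psql, using the host, port, and password from the downloaded Compose file (the values shown are hypothetical):

psql -h localhost -p 5432 -U postgres -d <database name>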

Configuring the required tolerations for datapacker node taints

If your Kubernetes nodes are configured with taints, then on a self-hosted instance, you can configure the tolerations that enable the datapacker pods to be scheduled on the nodes. The datapacker pod hosts the temporary database that Structural uses during the data generation. For an overview of taints and tolerations, go to the Kubernetes documentation.

To configure the tolerations, you configure the following environment settings. You can add these settings to the Environment Settings list on Structural Settings.

  • CONTAINERIZATION_POD_NODE_TOLERATION_KEY - The toleration key value to apply to the datapacker pods. This setting is required. If you do not configure this setting, then Structural ignores the other settings.

  • CONTAINERIZATION_POD_NODE_TOLERATION_VALUES - A comma-separated list of toleration values to apply to the datapacker pods.

  • CONTAINERIZATION_POD_NODE_TOLERATION_EFFECT - The toleration effect to apply to the datapacker pods.

  • CONTAINERIZATION_POD_NODE_TOLERATION_OPERATOR - The toleration operator to apply to the datapacker pods.
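For example, assuming a hypothetical node taint of dedicated=tonic:NoSchedule, the settings might be:

CONTAINERIZATION_POD_NODE_TOLERATION_KEY=dedicated
CONTAINERIZATION_POD_NODE_TOLERATION_VALUES=tonic
CONTAINERIZATION_POD_NODE_TOLERATION_EFFECT=NoSchedule
CONTAINERIZATION_POD_NODE_TOLERATION_OPERATOR=Equal

This combination corresponds to a Kubernetes toleration with key dedicated, operator Equal, value tonic, and effect NoSchedule, which allows the datapacker pods to be scheduled on the tainted nodes.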
