V805 - latest

A new entry is added each week, and contains the release notes for all of the Tonic versions that were released during that week.

V846 - V854

June 2, 2023
Removed caching of AWS credentials.
For the beta Data Pipeline V2 job processing (available for PostgreSQL only):
  • When a job fails, Tonic no longer tries to fall back to the current job processing. In most cases, jobs fail for reasons that are not connected to the processing type. Falling back to the current job processing is not effective.
  • Improved performance for subsetting.
  • Adjusted the logging level for telemetry-related log messages to DEBUG.
On the Foreign Keys view, when you filter the keys, click the select all option, and then clear the filter, only the matching keys are selected. Previously, the select all option always selected all of the keys.
Fixed an issue where CSV files could not be uploaded to data science modeling workspaces.
You can now configure parallelism for sensitivity scans. For relational databases, you use the environment variable TONIC_PII_SCAN_PARALLELISM_RDMBS, and the default is 4. For document-based databases, you use the environment variable TONIC_PII_SCAN_PARALLELISM_DOCUMENTDB, and the default is 1.
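As a minimal sketch, these variables can be set on the container at startup; the image name, tag, and values here are illustrative for your own deployment, not defaults:

```shell
# Illustrative only: raise sensitivity scan parallelism for relational
# databases (default 4) and document-based databases (default 1).
# The image name and tag are placeholders for your Tonic deployment.
docker run -d \
  -e TONIC_PII_SCAN_PARALLELISM_RDMBS=8 \
  -e TONIC_PII_SCAN_PARALLELISM_DOCUMENTDB=2 \
  quay.io/tonicai/tonic_worker:latest
```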
On the table configuration panel for subsetting, Tonic no longer displays a count of post-subset rows before a subset is generated.
Fixed an issue where subsetting data from one workspace appeared in a different workspace.
You can now use UUID columns in the conditions for the Conditional generator.
Fixed an issue where when you deleted a linked column from a column configuration panel, the other linked columns were deleted.
The Timestamp Shift generator can now be assigned to columns where the values use the date format MMddyyyy.
Google BigQuery
  • Improved error handling when rows contain invalid data. Tonic now provides a method to look at the data that caused the error. Fixed the handling of rows to prevent errors on certain data.
  • Fixed how we create views in the destination database.
  • Fixed an issue where Tonic was unable to get the schema for the source data.
Oracle
  • Fixed an issue where retries of Oracle commands after transient errors failed.
  • Improved resource handling during data generation in order to enable parallel processing.
Snowflake
  • Snowflake on Azure no longer requires CREATE SCHEMA permissions in order to support Preserve Destination mode for tables. Snowflake on AWS with Lambda processing continues to require CREATE SCHEMA permissions in order to support Preserve Destination mode.
  • When Tonic is unable to create a view in the destination database, it now returns a warning instead of an error.
  • Implemented a more accurate method to detect hexadecimal values.
SQL Server
  • Tonic now displays the correct generators for columns that are part of a composite unique index.

V837 - V845

May 26, 2023
The new Business Name generator produces realistic names of businesses or companies. The Business Name generator can be consistent with itself or with other columns. It improves on and is intended to replace the Company Name generator, which is now deprecated.
Fixed an issue on Table View where users could not use the delete icon to remove a generator assignment for a linked column.
Fixed an issue where the job details view for a subsetting job did not always show all of the steps as completed.
Updated the version of pytorch, which is used for data science modeling. This new version addresses some security vulnerabilities.
Fixed an issue where when the Tonic server was air gapped, the Admin Panel did not correctly display the current Tonic version.
Fixed an issue where jobs took longer than expected to complete.
Fixed an issue where the subsetting Graph View did not show how table participation in the subset changed since the most recent subsetting data generation.
Fixed an issue where after a single failure to write logs, the Download Job Logs feature stopped refreshing the logs. Tonic now continues to try to upload logs.
Google BigQuery
  • Fixed an issue with the test connection function for the destination database.
  • Materialized views and routines from the source database are now copied to the destination database.
  • Improved performance for sensitivity scans.
  • Fixed an issue where subsetting failed because of a data type mismatch between a primary key and a foreign key.
Oracle
  • When TONIC_ORACLE_SKIP_CREATE_DB=true, foreign keys are now correctly enabled on the destination database.
  • Fixed an issue where the source database permissions check provided a false error about insufficient privileges for sequences.
PostgreSQL
  • For the beta Data Pipeline V2 data generation process, fixed an issue where the data generation process continued even after the job failed.
  • Fixed an issue where the presence of comments caused data generation to fail.
SQL Server
  • Added support for user-defined types.

V827 - V836

May 19, 2023
Generator presets are now supported for Enterprise licenses on Tonic Cloud.
For Tonic data encryption, fixed an issue where previous encryption key environment variable values were saved in the application database, which caused Tonic to use those values even after they were removed.
The Tonic diagnostic logs now include the Tonic worker ID.
Fixed an issue where the Tonic web server would not launch unless the Tonic application database used PostgreSQL v13 or later.
For data connectors other than MongoDB, the sensitivity scan is now parallelized.
Google BigQuery
  • Data generation now works correctly when the region that hosts Google BigQuery for the destination database is different from the region for the source database.
  • Fixed an issue where failed data generation jobs were incorrectly reported as successful.
  • Tonic now handles the TIME data type correctly.
  • For Incremental mode, fixed an issue where the values of timestamp columns on modified rows were not updated in the destination from the source.
Oracle
  • When TONIC_ORACLE_SKIP_CREATE_DB=true, fixed an issue where the truncation of tables violated foreign key dependencies, which caused jobs to fail.
  • Added an option to enable the TCPS protocol for Oracle database connections. Previously, only TCP was supported. If you enable TCPS, you must also provide a wallet file.
  • Tonic now cleans up temporary destination database tables that were created during subsetting data generation.
PostgreSQL
  • During data generation, Tonic now warns users when an extension that the destination database needs is unavailable for installation.
Snowflake
  • For both Snowflake on AWS and Snowflake on Azure, you can now configure workspaces to limit the schemas to include.

V818 - V826

May 12, 2023
Fixed an issue where, for columns that were not unique, users could only select generators that supported uniqueness constraints.
Fixed an issue where admin users who did not have edit permissions on any workspaces could not edit presets from the Generator Presets view.
Improved data generation resiliency against transient failures.
Removed erroneous error messages.
To add AWS credentials to containers, you can now mount to ~/.aws/credentials.
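For example, a read-only bind mount can supply an existing credentials file; this sketch assumes the container process runs as root, and the image name is a placeholder:

```shell
# Illustrative only: mount the host's AWS credentials file read-only at
# ~/.aws/credentials inside the container (/root/.aws/credentials here).
docker run -d \
  -v "$HOME/.aws/credentials:/root/.aws/credentials:ro" \
  quay.io/tonicai/tonic_worker:latest
```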
Improved error messaging for Table View.
Fixed a display issue where the column configuration panel was too narrow and required horizontal scrolling.
Exporting or copying a workspace no longer requires the workspace to have a valid source database connection.
Reduced the amount of memory needed to run the Tonic web server.
Oracle
  • Better handling of errors that involve invalid UUIDs.
  • Updated the required permissions for destination database connections. If SELECT ANY DICTIONARY or SELECT_CATALOG_ROLE cannot be granted, then Tonic can use a selection of ALL_* views (not recommended).
  • If TONIC_ORACLE_SKIP_CREATE_DB=true, then external tables are now excluded from the table list in Tonic. Tonic does not process those tables.
PostgreSQL
  • Fixed an issue where the Data Pipeline V2 flow would hang.
  • Fixed an issue where extensions such as pgcrypto were not transferred when data generation included schema filtering.
  • Improved performance when handling constraints.
Snowflake on AWS
  • As of V823, you can choose whether to use the Lambda process for data generation, which was previously the only option. By default, Snowflake on AWS now uses a new, more resilient data generation process. You only need the Lambda process for extremely large volumes of data (hundreds of gigabytes to terabytes). In versions before V826, existing workspaces used the new default process, and you had to update your workspace configuration to use the Lambda process. As of V826, existing workspaces use the Lambda data generation process.
  • For the temporary CSV files used to retrieve and write source and destination data, you can now specify to use an external stage instead of an S3 bucket. The option to use an external stage is not available when you use the Lambda data generation process.
  • You can now specify different file storage locations for the temporary source and destination data files. In other words, you can have different S3 buckets or different external stages. Note that this option is not available when you use the Lambda data generation process.
  • For the new data generation process, fixed an issue where data generation jobs would hang instead of failing.
Snowflake on Azure
  • Before it runs a data generation, Tonic now verifies that there is a valid value for the Azure Blob Storage account key, which is set as the value of the environment variable TONIC_AZURE_BLOB_STORAGE_ACCOUNT_KEY.
  • Fixed an issue where data generation jobs would hang instead of failing.
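As a sketch, the account key that Tonic verifies is supplied through the environment variable named above; the value is a placeholder:

```shell
# Illustrative only: Tonic checks this value before it runs data generation.
export TONIC_AZURE_BLOB_STORAGE_ACCOUNT_KEY="<storage-account-key>"
```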

V810 - V817

May 5, 2023
For Tonic data encryption, Tonic now only verifies the key for the enabled process. If you only enable decryption, then Tonic only verifies the value of TONIC_DATA_DECRYPTION_KEY. If you only enable encryption, then Tonic only verifies the value of TONIC_DATA_ENCRYPTION_KEY.
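For example, a deployment that only enables decryption only needs the decryption key to be set and valid; the values here are placeholders:

```shell
# Illustrative only: with only decryption enabled, only this key is verified.
export TONIC_DATA_DECRYPTION_KEY="<decryption-key>"
# Set (and verified) only if encryption is also enabled:
export TONIC_DATA_ENCRYPTION_KEY="<encryption-key>"
```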
Upgraded our Docker images from Ubuntu 20 to Ubuntu 22.04.
Updated to ensure that the Tonic URL reflects the currently active workspace.
Recently started jobs no longer display a start time that occurred several years ago.
On the Data Encryption tab, the option to provide custom initialization vectors is now a toggle instead of radio buttons.
Resolved an issue where Tonic took an extremely long time to load.
Oracle
  • Reduced the permissions required to test database connections.
  • Changed the required permissions to better support when TONIC_ORACLE_DBLINK_ENABLED is false. For the source database user, you can either grant SELECT ANY DICTIONARY, grant SELECT_CATALOG_ROLE, or (not recommended) grant access to the ALL_* views.
  • Improved the error messaging when testing the connection to the destination database.
  • You can now use a connection string to connect to the source and destination databases. Also added support for proxy connections.
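The permission options described above could be granted to the Tonic source database user as follows; the user name tonic_user and the DBA connection are assumptions for illustration:

```shell
# Illustrative only: grant ONE of the documented options (run as a DBA).
sqlplus / as sysdba <<'SQL'
GRANT SELECT ANY DICTIONARY TO tonic_user;
-- Alternative: GRANT SELECT_CATALOG_ROLE TO tonic_user;
SQL
```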

V805 - V809

April 28, 2023
Releases 805 and 806 were removed from Quay because of a regression that caused performance degradation in data generation. The regression was fixed in later releases.
The new Snowflake on Azure data connector uses Azure Blob Storage to store interim uploaded and generated files.
A copied workspace now includes manual sensitivity designations. A manual sensitivity designation is when you change the sensitivity designation that was assigned by the sensitivity scan to either sensitive or not sensitive.
When the configured encryption or decryption key is not valid - for example, the key is not configured or is the incorrect size - Tonic does not allow you to configure Tonic data encryption.
Tonic Cloud now correctly enforces the supported data connectors.
Improved error messaging when subsetting data generation fails because the generator cannot be used with subsetting.
When you change the type of Tonic data encryption (decryption, encryption, or both), Tonic no longer clears the decryption and encryption text fields.
Oracle
  • Fixed an issue where workspaces that contained views did not load.
  • Fixed an issue where the default Oracle NUMBER type was not compatible with the Integer Key generator.
  • Removed an invalid error that was returned when users tested data connections.
PostgreSQL
  • For the beta Data Pipeline V2 generation process, improved the error logic to prevent jobs from hanging when errors occur.
SQL Server
  • Improved the resilience of data generation to transient failures.