Guide to expand Structural to other teams
After the initial Structural implementation, use the following guide as you expand Structural support to other teams in your organization.
Platform overview and value proposition
Before you hold the kickoff for a new team, share the Structural platform overview with them.
Explain the reasons to implement Structural and the benefits it provides:
Eliminate data bottlenecks: No waiting for manual data refreshes. Obtain high-utility test data on demand.
Ensure data safety: Automatically protect sensitive data such as PII and PHI to meet internal and external privacy standards.
Provide high-fidelity, realistic data: Test against data that looks, feels, and behaves like production data, without the risk of data exposure.
Kickoff: Onboarding and discovery
The goal of this phase is to agree on use cases, resources, and technical requirements.
Roles
The implementation involves the following stakeholders.
During the kickoff, you establish who is assigned to each implementation role. Note that the same person might have more than one role.
Project manager:
Manages timelines.
Coordinates stakeholders.
Database owner:
Controls access to source data.
Manages connections to destination databases.
Structural users:
Configure generators.
Manage de-identification.
Data consumers:
Validate that the output meets business and testing needs.
Security/compliance officer (if necessary):
Reviews and approves de-identification workflows.
Technical discovery questionnaire
You use the answers to these questions to determine how to configure the Structural workspace for the new team.
Use case:
Who are the primary consumers?
What are their expectations of the data? For example, do they plan to use the data for load testing or UI automation?
Business value:
How does this deployment align with the company’s strategic goals for this year?
Database resources:
What database platforms are used for the data? For example: PostgreSQL, Snowflake, SQL Server
What is the size of the data?
Subsetting requirements:
Do you need to shrink massive datasets into targeted "slices" for local development?
Environments:
What environments are in scope? For example: Dev, QA, Staging, Prod-Copy
Validation and data usage:
How will you validate the data initially?
What technical steps are necessary for productive use?
Access and approvals:
Who requires access to the Structural application?
What compliance approvals are needed for the workflow?
Networking strategy:
How does Structural reach your data sources? For example: Private link/peering, static IP address allowlisting
Are there any security policies, standards, or procedures that must be accounted for?
Expectations and milestones
Kickoff and access (weeks 1-2):
Engage teams for development environment access.
Define data sources and workflows.
Onboard users and conduct training.
Milestone: Environment ready. Connectivity is verified between Structural and the databases.
Initial generation (weeks 3-5):
Run the initial sensitivity detection, then review the results on Privacy Hub.
Configure generators.
Generate data and validate the output.
Verify the data utility. Does the de-identified data behave like production data? For one way to spot-check this, see the sketch after this list.
Milestone: First value moment. A functional, safe dataset is available to data consumers.
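As a starting point for that spot check, data consumers might compare basic statistics between the source and destination databases. The following is a minimal sketch, assuming a PostgreSQL source and destination (one of the platforms named in the discovery questionnaire) and the psycopg2 driver; the table names and connection strings are placeholders for your environment.

```python
# Illustrative spot check after the first generation run: compare row counts
# between the source database and Structural's destination database.
# Assumes PostgreSQL and psycopg2; table names and DSNs are placeholders.
# Note: counts only match when the workspace copies full tables (no subsetting).
import psycopg2

TABLES = ["users", "orders"]  # hypothetical tables to check

def row_counts(dsn: str) -> dict:
    """Return a table -> row count mapping for the given database."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        counts = {}
        for table in TABLES:
            cur.execute(f"SELECT COUNT(*) FROM {table}")
            counts[table] = cur.fetchone()[0]
        return counts

source = row_counts("postgresql://readonly@source-host/app")
dest = row_counts("postgresql://readonly@dest-host/app")

for table in TABLES:
    status = "OK" if source[table] == dest[table] else "MISMATCH"
    print(f"{table}: source={source[table]}, destination={dest[table]} [{status}]")
```

A deeper utility check compares value distributions and referential integrity, but a row-count pass is a quick first signal that generation completed as expected.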
Meaningful data (weeks 5-6):
Scale subsetting for targeted testing.
Complete the deployment to the designated environments.
Integrate into CI/CD for automation (see the sketch after this list).
Milestone: Fully operational. The self-service automated data pipeline is live.
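For the CI/CD integration, one common pattern is a pipeline gate that waits for the latest Structural generation job to finish before the test suite runs. This is a sketch, not Structural's documented API: the endpoint path, auth header format, and response fields are assumptions, so confirm them against the API documentation for your Structural instance.

```python
# Illustrative CI gate: poll the latest Structural generation job for a
# workspace and block the test stage until it completes. The endpoint path,
# auth header, and response fields are assumptions -- verify them against
# the API documentation for your Structural instance.
import os
import sys
import time

import requests

BASE_URL = os.environ["STRUCTURAL_URL"]            # e.g. https://structural.example.com
WORKSPACE_ID = os.environ["STRUCTURAL_WORKSPACE_ID"]
HEADERS = {"Authorization": f"Apikey {os.environ['STRUCTURAL_API_KEY']}"}

deadline = time.time() + 30 * 60                   # allow up to 30 minutes

while time.time() < deadline:
    resp = requests.get(
        f"{BASE_URL}/api/job",                     # hypothetical jobs endpoint
        params={"workspaceId": WORKSPACE_ID},
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()
    latest = resp.json()[0]                        # assumes newest job is first
    if latest["status"] == "Completed":
        sys.exit(0)                                # data is ready; run the tests
    if latest["status"] in ("Failed", "Canceled"):
        sys.exit(f"Generation job ended with status: {latest['status']}")
    time.sleep(30)                                 # poll every 30 seconds

sys.exit("Timed out waiting for the Structural generation job")
```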
What to expect from the Tonic.ai team
During the onboarding process for the initial implementation, the Tonic.ai team provides tiered support to ensure that the platform is fully operational. A dedicated Solutions Architect is assigned to guide the implementation.
As you roll out additional use cases and move them into production, the Tonic.ai Support team becomes your primary resource for ongoing technical questions and troubleshooting.
Implementation Engagement Manager:
Coordinates the kickoff.
Manages the project timeline and milestones.
Solutions Architect:
Conducts product walkthroughs.
Assists with generator configuration.
Provides technical training.
Customer Success Manager (CSM):
Acts as a long-term partner to scale Structural to additional use cases.
Conducts quarterly priority alignments.
Support: Provides high-quality troubleshooting, debugging, and assistance.
Best practices
Start small, scale fast: Before you automate the entire ecosystem, focus on one high-impact use case.
Validate early: To identify logic gaps, ensure that data consumers test the output immediately after the first generation.
Automate the refresh cadence: Establish a systematic schedule for data generation. This ensures that downstream environments remain synchronized with the latest production schema and data trends.
Align the refresh cycle with your development velocity.
Enable API calls for workflow automation and programmatic jobs (see the sketch after this list).
Schedule heavy generation jobs during off-peak hours.
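For example, a scheduled task such as a nightly cron job might call the Structural API to start a generation run. The following is a minimal sketch; the endpoint path and Apikey header format are assumptions, so verify both against the API documentation for your Structural instance.

```python
# Minimal sketch: start a Structural data generation job from a scheduled
# task (for example, a nightly cron entry). The endpoint path and header
# format are assumptions -- verify them against your instance's API docs.
import os

import requests

BASE_URL = os.environ["STRUCTURAL_URL"]            # e.g. https://structural.example.com
WORKSPACE_ID = os.environ["STRUCTURAL_WORKSPACE_ID"]
HEADERS = {"Authorization": f"Apikey {os.environ['STRUCTURAL_API_KEY']}"}

resp = requests.post(
    f"{BASE_URL}/api/GenerateData/start",          # assumed start-job endpoint
    params={"workspaceId": WORKSPACE_ID},
    headers=HEADERS,
    timeout=30,
)
resp.raise_for_status()
print("Generation job started:", resp.json())
```

Running this script from an off-peak cron entry keeps heavy generation runs from competing with daytime workloads.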
Manage schema changes: Ensure the team has a plan to use Structural's automatic schema change detection to manage upstream database changes without breaking downstream tests.
Track implementations: If you maintain a dynamic list of active and completed implementations, you can more easily:
Target stalled implementations.
Check in on active pipelines to share updates about new Structural functionality and enhancements.
Here is an example of a tracking table for Structural implementations. The teams and entries are illustrative:
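| Team | Use case | Data sources | Status | Structural owner | Notes |
| --- | --- | --- | --- | --- | --- |
| Payments | De-identified QA refresh | PostgreSQL | Active | J. Chen | Configuring generators (week 3 of 5) |
| Analytics | Subset for local development | Snowflake | Stalled | M. Patel | Awaiting compliance approval |
| Platform | CI/CD test data pipeline | SQL Server | Complete | A. Okafor | On a quarterly check-in cadence |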

Maintaining value after implementation
After the implementation phase is complete, the focus shifts to sustainability.
Some of the more important indicators of success include:
Data fidelity: Consumers can use Structural-processed data for their intended purposes.
Engineering velocity: Processes are in place that:
Provide consumers with high-quality data in a low-touch manner.
Can handle changes to the data and schema.
Low visibility, high impact: Teams rely on Structural data for its quality and availability. They do not need to fully understand the tooling or infrastructure.