AI Data Security

AI data security answers for enterprise review

DataSafeHouse supports procurement, security, and architecture review with direct answers on model training, deployment boundaries, data isolation, and supported AI supplier options.

Customer data used to train models

No

Customer data and prompts are not used to train foundation models.

Deployment options

On premises or AWS VPC

Deployments can be delivered inside a customer-managed environment, including on-premises infrastructure or an AWS VPC.

Tenant and data isolation

Supported

Tenant and application boundaries, scoped keys, and role-based administrative controls support customer-specific segregation.

Supplier model options

Deployment-specific

Supported model and provider options may include Amazon Bedrock-based models, OpenAI-compatible endpoints, Gemini, and local or self-hosted models.

Artifacts Reviewed

Materials typically provided during review

Data flow diagram and deployment topology

Customer-specific data flow diagrams and deployment topology documentation are prepared during solution design and provided during architecture or security review.

Integration inventory and authentication methods

Integration scope, connector boundaries, and authentication methods are documented per deployment based on the systems included in the engagement.

Technical documentation

Architecture, deployment, and operational documentation is available during technical review, including platform control points and operational assumptions.

Hosting platform

Supported deployment models include customer-managed on-premises environments and AWS VPC deployments.

Questionnaire Answers

AI data security questionnaire

Governance structure in place for AI use, safety, and ethics?

Yes

DataSafeHouse applies governance through scoped administrative access, role-based console operations, policy inheritance, provider and model controls, rate limits, and audit-ready event telemetry.

Security protocols for model training and deployment?

Yes

Customer data and prompts are not used to train foundation models. Deployments can run on premises or in an AWS VPC, with deployment-specific network, access, and operations controls defined during implementation.

Policy in place that restricts use of unapproved AI systems?

Yes

Provider and model access can be controlled through policy resolution at tenant and application scope, including explicit allowlists and request-time enforcement.
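A request-time allowlist check of the kind described above can be sketched as follows. The tenant, application, and model identifiers and the `enforce` helper are invented for illustration; the actual enforcement mechanism is deployment-specific.

```python
# Hypothetical request-time allowlist enforcement. Identifiers and the
# allowlist structure are examples, not DataSafeHouse's schema.
class ModelNotAllowed(Exception):
    """Raised when a request names a model outside the approved set."""


ALLOWLIST: dict[tuple[str, str], set[str]] = {
    ("tenant-a", "support-app"): {"bedrock:claude", "local:llama"},
}


def enforce(tenant: str, app: str, model: str) -> None:
    """Reject the request unless the model is explicitly allowlisted."""
    allowed = ALLOWLIST.get((tenant, app), set())
    if model not in allowed:
        raise ModelNotAllowed(f"{model!r} is not approved for {tenant}/{app}")


enforce("tenant-a", "support-app", "local:llama")  # allowed, returns None
```

Because the default for an unknown tenant/application pair is the empty set, unapproved AI systems are denied rather than silently permitted.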

Customer AI data segregated and protected from unauthorized access?

Yes

Tenant and application boundaries, scoped keys, role-based administrative permissions, and deployment-specific network boundaries support segregation and access control.
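The scoped-key concept can be sketched as a key bound to one tenant and application, so that presenting it against any other scope fails. The key format, helper names, and HMAC construction below are assumptions for illustration only, not DataSafeHouse's implementation.

```python
# Illustrative sketch: API keys scoped to a tenant/application pair.
# Format ("tenant:app:signature") and helpers are hypothetical.
import hashlib
import hmac
import secrets

SERVER_SECRET = secrets.token_bytes(32)  # per-deployment secret (example only)


def issue_key(tenant: str, app: str) -> str:
    """Bind a key to one tenant and application via an HMAC signature."""
    payload = f"{tenant}:{app}"
    sig = hmac.new(SERVER_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"


def authorize(key: str, tenant: str, app: str) -> bool:
    """A key only grants access to the scope it was issued for."""
    key_tenant, key_app, sig = key.split(":")
    expected = hmac.new(SERVER_SECRET, f"{key_tenant}:{key_app}".encode(),
                        hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sig, expected)
            and key_tenant == tenant and key_app == app)


key = issue_key("tenant-a", "billing")
authorize(key, "tenant-a", "billing")  # True
authorize(key, "tenant-b", "billing")  # False: key is scoped to tenant-a
```

Network boundaries and role-based permissions then layer on top of this key-level scoping in a deployment-specific way.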

Private AI tenant available for customer-specific deployment?

Deployment-specific

DataSafeHouse supports customer-specific tenant and application boundaries. Dedicated deployment models can be scoped according to customer requirements, including on-premises or AWS VPC environments.

Trains foundation models using customer data or prompts?

No

DataSafeHouse does not use customer data or prompts to train foundation models.

AI data retention and disposal plan?

Deployment-specific

Retention and disposal requirements are defined per deployment and customer policy. Environment-specific handling is documented during implementation and security review.

Names of AI suppliers

Deployment-specific

Supported model and provider options may include Amazon Bedrock-based models, OpenAI-compatible endpoints, Gemini, and local or self-hosted models, depending on deployment design and customer policy.

Customer-specific evidence packages, architecture diagrams, integration inventories, and deployment-specific controls are typically provided during solution review or security due diligence.