The Einstein Trust Layer elevates the security of generative AI through data and privacy controls that are seamlessly integrated into the end-user experience. These controls enable Einstein to deliver AI that's securely grounded in your customer and company data without introducing new security risks. In its simplest form, the Trust Layer is a sequence of gateways and retrieval mechanisms that together enable trusted, open generative AI.
The Einstein Trust Layer lets customers get the benefits of generative AI without compromising their data security and privacy controls. It includes a toolbox of features that protect your data, like secure data retrieval, dynamic grounding, data masking, and zero data retention, so you don't have to worry about where your data might end up. Toxic language detection scans prompts and responses to ensure they're appropriate. And for additional accountability, an audit trail tracks a prompt through each step of its journey.
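To make those steps concrete, here's a minimal Python sketch of how such a pipeline could be wired together. It is purely illustrative: the regex patterns, the keyword blocklist, and every name in it (mask, is_toxic, call_llm, generate) are hypothetical stand-ins, not Salesforce APIs, and real masking and toxicity detection rely on trained models rather than regexes and keyword lists.

```python
import re
import uuid
from dataclasses import dataclass, field

# Illustrative PII patterns; a production masking service would cover far
# more entity types (names, addresses, account numbers, and so on).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

BLOCKLIST = {"hate", "slur"}  # stand-in for a real toxicity classifier


@dataclass
class AuditTrail:
    """Tracks a prompt through each step of its journey."""
    prompt_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    steps: list = field(default_factory=list)

    def log(self, step: str, detail: str) -> None:
        self.steps.append((step, detail))


def mask(text: str, audit: AuditTrail) -> str:
    """Replace PII with placeholders before the prompt leaves the trust boundary."""
    for label, pattern in PII_PATTERNS.items():
        text, count = pattern.subn(f"<{label}>", text)
        if count:
            audit.log("mask", f"masked {count} {label} value(s)")
    return text


def is_toxic(text: str, audit: AuditTrail) -> bool:
    """Crude keyword scan standing in for toxic language detection."""
    flagged = any(word in text.lower() for word in BLOCKLIST)
    audit.log("toxicity_scan", "flagged" if flagged else "clean")
    return flagged


def call_llm(prompt: str) -> str:
    """Stub for the configured model provider. Under zero data retention,
    the provider discards the prompt and response after replying."""
    return f"[model reply to: {prompt}]"


def generate(user_prompt: str, grounding: str, audit: AuditTrail) -> str:
    """Ground, mask, and vet a prompt, then vet the response on the way back."""
    prompt = mask(f"{grounding}\n\n{user_prompt}", audit)  # grounding + masking
    if is_toxic(prompt, audit):
        return "Prompt blocked by toxicity filter."
    response = call_llm(prompt)
    if is_toxic(response, audit):
        return "Response withheld by toxicity filter."
    return response
```

The point of the structure is that every prompt and response passes through the same checks, and the AuditTrail records what happened at each step, regardless of which model ultimately handles the request.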
Salesforce designed its open model ecosystem so you have secure access to many large language models (LLMs), both inside and outside of Salesforce. The Trust Layer sits between an LLM and your employees and customers to keep your data safe while you use generative AI for all your business use cases, including sales emails, work summaries, and service replies in your contact center.
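One way to picture that positioning: the layer acts as a provider-agnostic wrapper, so the same controls apply whether a request goes to a model hosted inside Salesforce or to an outside provider. Again a hypothetical sketch under that assumption; PROVIDERS, apply_trust_controls, and trusted_generate are invented names for illustration only.

```python
from typing import Callable


def apply_trust_controls(prompt: str) -> str:
    """Stand-in for the masking, grounding, and toxicity checks sketched above."""
    return prompt.replace("alice@example.com", "<EMAIL>")  # illustrative only


# Hypothetical registry of model endpoints. Because the trust layer wraps
# every provider, swapping models doesn't change the data controls applied.
PROVIDERS: dict[str, Callable[[str], str]] = {
    "salesforce-hosted": lambda p: f"[hosted model reply to: {p}]",
    "external-llm": lambda p: f"[external model reply to: {p}]",
}


def trusted_generate(provider: str, prompt: str) -> str:
    """Every request passes through the same controls before reaching any LLM."""
    vetted = apply_trust_controls(prompt)
    return PROVIDERS[provider](vetted)


# Example: a sales-email draft goes to an external model with PII already masked.
print(trusted_generate("external-llm", "Draft a follow-up email to alice@example.com"))
```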