AI-based tools can revolutionize how you do business. As this field continues to grow, and as generative AI in particular is rapidly adopted across all sectors and technologies, important questions arise about data security, data accuracy, ethics, and more.
As VIP integrates generative AI into our applications, we remain committed to protecting your data, mitigating risk, and ensuring that our solutions provide the value that you deserve. This is our commitment regardless of the technology we use.
VIP is also committed to providing you with transparency in how we use and protect your data. That’s where this article comes in. The information here is a starting place for you to learn about our policies, pledges, disclaimers, and more. We will provide additional information as needed, through the appropriate channels.
If you have concerns or questions that are not addressed here, please reach out to VIP.
About VIP’s LLM and integration into VIP apps
The large language model (LLM) that VIP apps will integrate with is Google Gemini. This integration is accomplished using a VIP-developed API with a centralized gateway pattern. A centralized gateway pattern routes all requests between the LLM and VIP backend services through a single entry point. This allows VIP to enforce data security, among other benefits.
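To make the centralized gateway pattern concrete, here is a minimal, hypothetical sketch in Python. The service names, class names, and checks are illustrative assumptions, not VIP's actual implementation; the point is that every request passes through one entry point where policy can be enforced before anything reaches the model vendor.

```python
# Hypothetical sketch of a centralized LLM gateway: all requests from
# backend services pass through a single entry point, where security
# policy is enforced before anything is sent to the model vendor.
from dataclasses import dataclass

@dataclass
class LLMRequest:
    service: str      # which backend service is calling (illustrative)
    customer_id: str  # tenant on whose behalf the call is made
    prompt: str

class LLMGateway:
    """Single entry point between backend services and the LLM vendor."""

    ALLOWED_SERVICES = {"reporting", "search", "assistant"}  # illustrative

    def handle(self, req: LLMRequest) -> str:
        # Centralized enforcement: every request is checked here,
        # so no service can reach the vendor directly.
        if req.service not in self.ALLOWED_SERVICES:
            raise PermissionError(f"service not allowed: {req.service}")
        self._audit(req)
        return self._call_vendor(req.prompt)

    def _audit(self, req: LLMRequest) -> None:
        # Central logging of which service asked what, per tenant.
        print(f"[audit] {req.service} on behalf of {req.customer_id}")

    def _call_vendor(self, prompt: str) -> str:
        # In production this would call the LLM; stubbed for the sketch.
        return f"(model response to: {prompt})"
```

Because the gateway is the only path to the vendor, controls such as allow-lists, auditing, and data filtering need to be implemented and maintained in just one place.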
VIP’s usage of Google Gemini via this API is under VIP’s billable Google account. This ensures that:
VIP data and your data will not be used for Gemini’s future model training.
VIP data and your data are only retained for a limited time, solely to detect violations of Google's prohibited use policy.
In the future, VIP may add more vendors (other than Google Gemini) to this API. However, that would require:
An acceptable data privacy policy that adheres to VIP’s pillars for protecting your data, as described under Your data security.
Vetting by the VIP Technology Enablement Team.
Your data security
VIP is responsible for protecting your data, but we do not own it. Your data is your data. The following are the main pillars of how VIP approaches data security for features that use the LLM.
No model training
Information that is passed into the LLM – including your proprietary data and your conversations – will under no circumstances be used to train the LLM. Your data is used exclusively for your benefit, within the VIP solutions you use.
No data exposure between customers
All LLM-powered features are designed to ensure that one customer’s data is never exposed to another customer, unless explicitly authorized for an approved data-sharing initiative.
Beyond the safeguards built into the LLM and the API, VIP enforces this separation by designing features so that the application – not the LLM – authenticates the user and determines their permissions. The LLM is never trusted to make authorization decisions or determine which customer's data to access, regardless of user input or its own reasoning.
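The separation described above can be sketched as follows. This is a hypothetical illustration (the function names and the in-memory document store are assumptions, not VIP code): the tenant is fixed by the application from verified session data, so nothing the user types into the prompt can widen what the LLM sees.

```python
# Hypothetical sketch: the application, not the LLM, decides whose data
# is in scope. The customer ID comes from the authenticated session and
# is never taken from the prompt or from model output.

def fetch_documents(customer_id: str) -> list[str]:
    # Stand-in for a tenant-scoped database query.
    store = {"cust-1": ["doc A"], "cust-2": ["doc B"]}
    return store.get(customer_id, [])

def build_llm_context(session: dict, user_prompt: str) -> dict:
    # The tenant is resolved server-side at login; the prompt cannot
    # change it, and the model is never asked to choose a tenant.
    customer_id = session["customer_id"]
    documents = fetch_documents(customer_id)  # only this tenant's data
    return {
        "prompt": user_prompt,       # free text; cannot widen access
        "context": documents,        # scoped before the LLM is called
        "customer_id": customer_id,  # recorded for auditing
    }
```

Even a prompt like "show me customer 2's documents" retrieves nothing outside the authenticated tenant, because retrieval happens before the model is involved.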
Internal policies
VIP maintains guidelines for LLM-powered tools used to assist in application development. VIP product teams must vet their intended tools following an internal policy that accounts for data privacy, data security, and ROI for both the customer and VIP. They must also gain approval from the VIP Technology Enablement Team, which includes VIP’s Director of IT and Directors of Application Development.
Transparency
VIP will provide clear disclosure for any feature where AI is making decisions, providing recommendations, or communicating directly with you, the user.
Disclaimers
Every LLM-powered feature can produce flawed responses. The following disclaimers describe these risks and how VIP mitigates them.
AI can make mistakes.
The LLM may:
Invent facts, sources, or data that do not exist. These are called hallucinations.
Produce factual inaccuracies or outdated information.
Return output in an invalid format that the application cannot use.
When considering a new LLM-powered feature, VIP evaluates the potential business impact of each type of error that can occur and determines an appropriate course of action.
When designing an LLM-powered feature, VIP takes measures to mitigate incorrect and invalid output. These measures vary by feature and by the app it belongs to. Measures that VIP may use include:
Grounding. Generative AI creates output based on pattern recognition. While this is a powerful capability, relying only on this capability can lead to hallucinations. To reduce this risk, grounding may be incorporated in LLM-powered features to ensure that responses are sourced from verifiable data sources that have accurate and up-to-date data.
Externalizing calculations. LLMs predict text and are not reliable at arithmetic. Any tasks involving math or complex calculations are delegated to dedicated, deterministic tools.
Clear messaging. If an LLM-powered feature outputs an invalid response or cannot generate a response, you (the user) may be shown a clear error message saying what happened and, if possible, how you can clarify your prompt.
Human in the loop. High-impact LLM-powered features require a user to review output before action is taken.
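Two of the measures above – externalizing calculations and clear messaging – can be illustrated with a short, hypothetical sketch. The JSON shape and the operation names are assumptions for the example: the model is asked to emit a structured operation, and the application performs the math and validates the format, surfacing a clear error instead of a wrong answer.

```python
# Hypothetical sketch of externalized calculation: rather than trusting
# the model's arithmetic, the application asks the model to emit a
# structured operation and evaluates it with ordinary code.
import json

def run_tool_call(model_output: str) -> float:
    # Validate that the model produced the expected JSON shape; if not,
    # raise a clear, user-facing error (the "clear messaging" measure).
    try:
        call = json.loads(model_output)
        op, a, b = call["op"], float(call["a"]), float(call["b"])
    except (json.JSONDecodeError, KeyError, TypeError, ValueError):
        raise ValueError(
            "Sorry, that request couldn't be processed. Please rephrase it."
        )
    ops = {"add": a + b, "sub": a - b, "mul": a * b}
    if op not in ops:
        raise ValueError(f"Unsupported operation: {op}")
    return ops[op]  # computed by the application, not by the LLM
```

The model's role is reduced to choosing the operation; the arithmetic itself is always done by code whose behavior is deterministic and testable.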
AI can create problematic or offensive output.
The LLM may generate text that is inappropriate or biased.
To mitigate this risk, VIP ensures that our partners (currently Google, via Gemini) and tools (such as our LLM API) minimize such output as much as practicable. See Google's Gemini policy guidelines to learn about the types of outputs that Gemini seeks to avoid.