AI Development Policy
Introduction & Scope
This AI Development Policy (the “Policy”) describes how Guidepoint Global (“Guidepoint” or “We”) develops, deploys, and uses our internal artificial intelligence tools, including any such tools that we make available to our clients (“Guidepoint’s AI Systems”).
Guidepoint is committed to aligning our AI Systems with our mission to provide expert network services and products responsibly.
This Policy applies to: (1) Guidepoint’s AI Systems and (2) employees, agents, contractors, other Guidepoint personnel, and third-party partners who may build, maintain, and/or interact with Guidepoint’s AI Systems (“Guidepoint Personnel”).
Core Principles
A. Confidentiality and Privacy
We take confidentiality and privacy seriously in the development and use of AI Systems. The following practices help guide our approach:
- Client Confidentiality: Guidepoint Personnel involved in AI development and deployment are expected to preserve client confidentiality throughout all AI-related processes, including development, training, deployment, and use. Any use of Guidepoint’s AI Systems shall be in conformity with Guidepoint’s confidentiality obligations to its clients and its clients’ confidentiality obligations to Guidepoint.
- Use of Third-Party Data: Information from third-party sources may be accessed or processed in accordance with applicable agreements that address data use and related responsibilities.
- Third-Party Tools and Models: The use of external AI tools or models is subject to prior due diligence from appropriate Guidepoint Personnel (e.g., Legal) and must align with applicable contractual requirements and internal policies and procedures. Guidepoint’s use of third-party models is contingent on contractual terms that ensure that any information provided by Guidepoint shall not be used to train such models.
- Access to Training and Log Data: Access to training data and system logs is managed on a need-to-know basis, using role-based permissions to help limit exposure to authorized Guidepoint Personnel.
- Processing Privacy Requests: We do not use personal information in our AI Systems other than unique identifiers associated with an Advisor. If an Advisor submits a request for deletion, correction, or access under applicable data privacy laws, our AI Systems are structured to support such requests from verified individuals, as required under applicable law.
B. Ethical and Responsible Use
Guidepoint’s AI Systems are developed and used with attention to fairness, reliability, and accountability. The following principles guide ethical and responsible practices:
- Bias and Fairness: Guidepoint’s AI Systems are designed and deployed with efforts to minimize bias and reduce the risk of discriminatory outputs.
- Ongoing Monitoring: Guidepoint Personnel are expected to regularly monitor AI outputs for signs of bias or discriminatory effects and take appropriate steps to address them in support of fair treatment of clients, expert advisors, and users.
- Accuracy and Reliability: Outputs are reviewed on an ongoing basis, and measures are implemented to reduce the occurrence of inaccuracies (e.g., hallucinations).
- Traceability: AI-generated responses and insights should be verifiable and, where appropriate, traceable to company-approved sources, including through citations or other documented references.
- Use of AI Outputs: Guidepoint Personnel should rely on AI-generated outputs only where those outputs are assessed to be reliable and consistent with internal policies.
- Human Oversight: Human-in-the-loop practices are incorporated, where feasible, to support responsible oversight and decision-making in the use of Guidepoint’s AI Systems.
C. Transparency and Explainability
Transparency and clarity in the use of AI Systems support informed decision-making and build trust with users. The following practices apply:
- Disclosure of AI Use: When publishing, releasing, or using Guidepoint’s AI Systems’ outputs – or content informed by such outputs – Guidepoint Personnel are expected to provide clear and visible disclosures, such as “Content generated by AI.”
- Understandability: AI System-generated responses should be easy to understand, and users should be informed about the general capabilities and limitations of Guidepoint’s AI Systems to support responsible use and reduce the risk of misunderstandings.
- Source-Based Outputs: Responses from AI Systems should be based on available transcripts and company-approved content.
- Communication of Limitations: Users should be notified when an AI System powered by a Large Language Model (“LLM”) may be operating with incomplete data or when uncertainty exists in the generated output (e.g., hallucinations).
- System Messages and Disclaimers: Messaging should make clear that users are interacting with an AI System, describe the System’s role, and refer users to original or authoritative sources when appropriate.
- Explanations for AI-Driven Decisions: Where feasible, services that rely on Guidepoint’s AI Systems should include explanations for decisions – such as why a specific expert Advisor was recommended – to promote clarity and accountability.
- Internal Documentation: Key decisions related to the development, deployment, and use of AI Systems should be documented and stored for internal reference.
- Communicating Capabilities: Personnel should represent the capabilities of AI Systems accurately and avoid overstating their advancement or autonomy.
D. Security
We are committed to maintaining robust security practices in the development and use of AI Systems. To that end, we implement the following safeguards:
- Data Protection: We apply reasonable security controls to reduce risks related to the use of personal, confidential, or sensitive data in Guidepoint’s AI Systems.
- Controlled Access: Guidepoint’s AI Systems operate within clearly defined data environments to prevent unauthorized data access, and deployment outside of approved contexts or without authorization from appropriate decision-makers (such as Legal) is prohibited.
- Access Controls: Access to Guidepoint’s AI Systems is restricted to Guidepoint Personnel through credentialed identity management, and where applicable, the use of appropriate authentication protocols including multi-factor authentication.
- System Integrity: We maintain the confidentiality, integrity, and availability of Guidepoint’s AI Systems through reasonable and practical security measures at every stage of the system lifecycle.
- Pre-Use Validation: Guidepoint Personnel are required to validate the integrity of associated data resources and pipelines before accessing new or existing datasets.
E. Risk Management and Guardrails
We apply proactive risk management strategies and implement structured guardrails to enhance the safe and responsible use of Guidepoint’s AI Systems:
- Data Safeguards: Built-in controls help prevent Guidepoint’s AI Systems from processing unauthorized or sensitive data, supporting compliance and reducing risk exposure.
- Monitoring and Assessment: Guidepoint’s AI Systems are monitored for usefulness, groundedness, toxicity, and bias, and are subject to Guidepoint’s AI-specific risk and impact assessments.
- Security Awareness: Guidepoint Personnel are expected to stay informed about emerging security and privacy risks related to AI and to conduct regular vulnerability scans and audits.
- Transparency and Documentation: Guidepoint’s AI Systems must have clearly defined objectives, and all relevant considerations and data sources must be documented and traceable.
- Continuous Improvement: Guidepoint’s AI Systems are refined over time based on user feedback. Interactions – such as signaling whether a response was helpful – play a key role in improving performance and reliability.