
Our AI Use Policy
We are dedicated to using AI responsibly and ethically, guided by fairness, inclusivity, and positive impact for all.
1 // General Principles
We are committed to the ethical use of AI, prioritizing fairness, transparency, and accountability.
All AI systems must be designed and deployed in alignment with our core values and mission, ensuring positive societal impact.
Compliance with local, national, and international regulations related to AI usage is mandatory, with a focus on protecting vulnerable populations.
2 // Transparency & Explainability
We disclose when and how AI is used in our products, services, or decision-making processes, emphasizing clarity in applications that impact social equity.
All AI-driven outputs should be understandable and interpretable, especially when they affect critical aspects of people’s lives or turn on sensitive characteristics such as race, gender, religion, or socioeconomic status.
We maintain clear communication about the role AI plays in high-stakes decisions, such as employment, healthcare, or criminal justice.
3 // Data Privacy & Security
Personal data, including sensitive information related to race, religion, sexual orientation, gender identity, and other protected characteristics, must be managed securely and in compliance with applicable data protection laws (e.g., GDPR).
We only use data for AI purposes with explicit consent or when it is necessary and lawful, ensuring that marginalized groups are not disproportionately affected.
AI systems must be designed to ensure data minimization and protect the privacy of individuals from diverse backgrounds.
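As a minimal illustration of the data-minimization principle above, the sketch below keeps only the fields an AI system actually needs and drops sensitive attributes at the point of collection. The field names and the allow-list are hypothetical assumptions for demonstration, not a prescribed schema or a description of our production pipeline.

```python
# Illustrative sketch only: field names and the allow-list are hypothetical.
# Real pipelines must also follow the applicable legal-basis and consent review.

ALLOWED_FIELDS = {"application_id", "skills", "years_experience"}  # assumption

def minimize_record(record: dict) -> dict:
    """Keep only the fields the AI system actually needs (data minimization)."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

if __name__ == "__main__":
    raw = {
        "application_id": "A-1024",
        "skills": ["python", "sql"],
        "years_experience": 6,
        "religion": "redacted-at-source",   # sensitive: never forwarded
        "date_of_birth": "1990-01-01",      # not needed: dropped
    }
    print(minimize_record(raw))
    # {'application_id': 'A-1024', 'skills': ['python', 'sql'], 'years_experience': 6}
```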
4 // Fairness, Equity & Bias Mitigation
We actively work to mitigate biases and ensure our AI models do not perpetuate discrimination based on race, religion, gender, sexual orientation, or any other protected status.
Regular audits and assessments are conducted to identify and address disparities in AI outcomes, with a focus on promoting equity (an illustrative audit sketch appears at the end of this section).
We prioritize fairness by considering the impact of our AI systems on historically marginalized or underserved communities.
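As one illustration of what such an audit can examine, the sketch below compares positive-outcome rates across demographic groups, in the style of a demographic-parity check. The group labels, sample data, and the 0.8 threshold (a common "four-fifths" heuristic) are assumptions for demonstration only; audits in practice use additional metrics and human review.

```python
# Illustrative audit sketch: groups, outcomes, and the 0.8 threshold are assumptions.
from collections import defaultdict

def selection_rates(records):
    """Compute the positive-outcome rate per demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += int(outcome)
    return {g: positives[g] / totals[g] for g in totals}

def disparity_flags(rates, threshold=0.8):
    """Flag groups whose rate falls below `threshold` times the highest group rate."""
    best = max(rates.values())
    return {g: r / best < threshold for g, r in rates.items()}

if __name__ == "__main__":
    sample = [("group_a", 1), ("group_a", 1), ("group_a", 0),
              ("group_b", 1), ("group_b", 0), ("group_b", 0)]
    rates = selection_rates(sample)
    print(rates)                   # ≈ {'group_a': 0.67, 'group_b': 0.33}
    print(disparity_flags(rates))  # {'group_a': False, 'group_b': True}
```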
5 // Accountability & Governance
A dedicated AI Ethics Committee oversees the development and use of AI systems, with a specific mandate to consider the impact on different racial, religious, gender, and sexual orientation groups.
Employees are required to report, through our internal reporting system, any misuse of AI or policy violations that may lead to inequities or discrimination.
All AI projects must undergo a comprehensive risk assessment to ensure they do not exacerbate social inequities.
6 // AI In Employment & Automation
AI tools are designed to augment human efforts and are implemented with an understanding of their impact on diverse employees, considering factors such as race, gender, and socioeconomic background.
We maintain transparency about how AI may impact roles, especially for marginalized groups, and provide retraining resources to ensure equitable opportunities for all employees.
Decisions influenced by AI, such as hiring or promotions, will always include human oversight to prevent biased outcomes.
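As a minimal sketch of the human-oversight requirement above, the code below shows one possible pattern: an AI recommendation is never final on its own and must be paired with a documented human review, whose decision takes precedence. The class and field names are hypothetical and do not describe any production system.

```python
# Hypothetical human-in-the-loop gate: names and fields are illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIRecommendation:
    candidate_id: str
    suggestion: str   # e.g. "advance" or "reject"
    rationale: str    # model explanation shown to the reviewer

@dataclass
class HumanReview:
    reviewer_id: str
    decision: str     # the reviewer's own decision
    notes: str

def final_decision(rec: AIRecommendation, review: Optional[HumanReview]) -> str:
    """The AI suggestion alone is never final; a documented human review is required."""
    if review is None:
        raise ValueError("No final decision without documented human review.")
    return review.decision

if __name__ == "__main__":
    rec = AIRecommendation("C-42", "advance", "skills match the role profile")
    review = HumanReview("R-7", "advance", "agrees with model; verified references")
    print(final_decision(rec, review))  # "advance"
```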
7 // Social Equity & Inclusion
We are committed to using AI in ways that advance social equity, reduce discrimination, and promote inclusivity.
Our AI systems are evaluated for their impact on various demographic groups, ensuring they do not reinforce stereotypes or marginalize individuals based on race, religion, gender identity, sexual orientation, or other personal characteristics.
We engage with diverse communities to understand the societal impact of our AI systems and make improvements based on their input.
8 // External Partnerships & Collaboration
We require that partners and vendors using AI on our behalf adhere to ethical practices that promote diversity, equity, and inclusion.
Collaborative AI research must align with this policy and undergo review to ensure it benefits all groups equitably.
9 // Innovation & Continuous Improvement
We are committed to exploring and integrating advances in AI that promote social good while being mindful of equity and inclusivity.
Our teams receive continuous training on bias mitigation and the ethical use of AI, with a focus on understanding the impact on different communities.
We actively seek input from diverse stakeholders, including advocacy groups focused on race, religion, gender, and sexual orientation, to improve our AI practices.
10 // AI System Safety & Robustness
AI systems are rigorously tested to ensure they are reliable and do not produce harmful or biased outcomes that affect people differently based on race, gender, religion, or sexual orientation.
Safeguards are in place to detect and respond to unintended consequences, ensuring our AI systems promote safety and equity (see the illustrative monitoring sketch below).
We regularly update and maintain AI systems to ensure they remain secure, effective, and fair to all users.
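As one illustration of such a safeguard, the sketch below monitors per-group error rates after deployment and raises an alert when they diverge beyond a tolerance. The metric names, tolerance value, and alerting behaviour are assumptions for demonstration, not a specification of our production safeguards.

```python
# Illustrative monitoring sketch: tolerance and alerting behaviour are assumptions.

def error_rate_gap(error_rates: dict) -> float:
    """Gap between the highest and lowest per-group error rates."""
    return max(error_rates.values()) - min(error_rates.values())

def check_safeguard(error_rates: dict, tolerance: float = 0.05) -> None:
    """Raise an alert when per-group error rates diverge beyond `tolerance`."""
    gap = error_rate_gap(error_rates)
    if gap > tolerance:
        # In practice this would notify the responsible team and open an incident.
        print(f"ALERT: per-group error-rate gap {gap:.2%} exceeds {tolerance:.2%}")
    else:
        print(f"OK: per-group error-rate gap {gap:.2%} within tolerance")

if __name__ == "__main__":
    check_safeguard({"group_a": 0.04, "group_b": 0.11})  # triggers an alert
    check_safeguard({"group_a": 0.04, "group_b": 0.06})  # within tolerance
```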