Establishes governance frameworks, risk mitigation strategies, and compliance standards to ensure AI systems align with legal, regulatory, and societal expectations across the enterprise.
Requirements
- Deep expertise in AI ethics, governance, and regulatory compliance
- Strong understanding of AI/ML technologies, model lifecycle, and risk management
- Familiarity with global AI regulations and frameworks (e.g., EU AI Act, NIST AI RMF, OECD AI Principles)
Responsibilities
- Define and operationalize the enterprise-wide Responsible AI framework
- Establish governance structures such as AI Ethics Review Boards and model risk committees
- Develop and maintain AI risk assessment methodologies and audit protocols
- Partner with AI product, data science, and engineering teams to embed responsible AI practices into model development workflows
- Define and implement model monitoring standards for fairness, explainability, robustness, and bias mitigation
- Develop dashboards and reporting mechanisms to track compliance against responsible AI metrics
Other
- 10+ years of experience in governance, risk, compliance, or AI-related roles
- Experience in regulated industries (e.g., finance, healthcare, tech) is a plus
- Certifications in AI ethics, risk management, or compliance frameworks are desirable
- Bachelor’s degree required; advanced degree in Law, Ethics, Data Science, Public Policy, or related field preferred