Position Summary
We are seeking a strategic and technically proficient AI Security and Governance Lead to design, implement, and oversee our organization’s framework for AI security, governance, compliance, and responsible use. This individual will define and operationalize standards, controls, and risk management strategies that ensure the secure, ethical, and compliant deployment of AI systems across the enterprise.
The ideal candidate combines cybersecurity expertise with a strong understanding of AI/ML technologies, regulatory frameworks, and ethical risk management. This role is pivotal in ensuring our AI initiatives remain secure, trustworthy, and aligned with both business objectives and societal expectations.
Key Responsibilities
AI Security & Risk Management
- Develop and lead the AI Security Program, ensuring that AI systems are designed, implemented, and maintained securely throughout their lifecycle.
- Conduct and oversee AI threat modeling, adversarial resilience assessments, and data security evaluations for AI/ML models.
- Implement secure MLOps practices, such as model versioning, data integrity checks, and model drift detection, and ensure compliance with internal and external security standards.
AI Governance & Compliance
- Define and maintain AI governance frameworks, including policies on model explainability, data lineage, bias management, and auditability.
- Ensure compliance with global AI regulations and standards, including the EU AI Act, NIST AI RMF, ISO/IEC 42001, GDPR, and other emerging requirements.
- Develop reporting and documentation standards for AI risk, incident management, and compliance audits.
Ethical AI & Responsible Use
- Drive organizational adoption of Responsible AI principles, ensuring fairness, transparency, human oversight, and accountability.
- Partner with stakeholders and business teams to implement AI use policies and review potential risks related to bias, privacy, and unintended consequences.
- Support training and awareness programs to promote AI literacy and ethical decision-making among staff.
Collaboration
- Serve as the subject-matter expert (SME) for AI security, governance, and compliance.
Qualifications
Required:
- Bachelor’s degree in Computer Science, Cybersecurity, Data Science, or a related field (Master’s preferred).
- 8+ years of experience in cybersecurity, governance, or risk management.
- 3+ years of experience with AI/ML systems, including data governance, model lifecycle management, or AI assurance.
- Proven experience developing and implementing governance or compliance frameworks (e.g., ISO 27001, NIST RMF, SOC 2).
- Excellent communication and leadership skills, with the ability to translate complex risks into actionable business guidance.
Preferred:
- Advanced certifications such as CISSP, CISM, CRISC, or AI-related certifications (e.g., NIST AI RMF, ISO/IEC 42001 practitioner).
- Experience with AI model auditing, explainability tools, and monitoring solutions.
- Familiarity with large language models (LLMs), generative AI security, and responsible AI frameworks.
Key Competencies
- Strategic and analytical thinking
- Deep technical knowledge of AI and cybersecurity
- Strong leadership and stakeholder management
- Excellent written and verbal communication skills