At Schwab, the business problem is to ensure AI systems set industry benchmarks for safety, fairness, and transparency, and to maintain the trust that is central to the business.
Requirements
- Expertise in fairness, alignment, adversarial robustness, or interpretability/explainability.
- Experience with responsible generative AI challenges and risk mitigations.
- 7+ years in AI/ML research and development using Python.
- Familiarity with regulatory frameworks (AI-specific or financial sector) and responsible AI standards.
- Published research in AI safety, alignment, or governance (e.g., FAccT, NeurIPS).
- Experience with LLMs and deploying LLM-powered applications.
- Skills in adversarial testing, red-teaming, and risk assessment for AI deployments.
Responsibilities
- Design and implement innovative methods for bias detection and develop technical guardrails aligned with responsible AI principles.
- Collaborate with cross-functional teams—research, product, legal, compliance, and risk—to ensure Schwab’s AI systems are safe, fair, and transparent.
- Build and maintain monitoring systems for AI models, integrating human-in-the-loop and automated metrics for compliance at scale.
- Influence executive decision-making and regulatory engagement while driving trusted AI solutions for millions of clients.
Other
- Master’s degree (or equivalent) in Computer Science, Engineering, Data Science, Social/Applied Sciences, or related field—or equivalent experience.
- 6+ years in AI ethics, AI research, Security, Trust & Safety, or similar roles (academic doctoral experience counts).
- Strong analytical and communication skills for technical and non-technical audiences.
- 401(k) with company match and Employee stock purchase plan
- Paid time for vacation, volunteering, and 28-day sabbatical after every 5 years of service for eligible positions