Friday, February 6, 2026

OpenAI fills $555,000 AI safety role with senior hire from Anthropic.


OpenAI has appointed Dylan Scandinaro, a former Anthropic AI safety researcher, to head its Preparedness function, a role offering compensation up to $555,000. Confirmed by Sam Altman, the hire underscores how AI safety, governance, and risk readiness are now central to OpenAI’s strategic priorities as it develops increasingly powerful AI systems.

The move highlights intensifying talent competition between OpenAI and Anthropic, and reflects a broader trend: safety and responsible-deployment roles are shifting from support functions into core strategic positions. With AI capabilities outpacing regulation, the industry must balance recruiting top safety leaders against building robust, global governance frameworks.

Strategically, OpenAI’s investment in senior safety leadership signals its commitment to risk-aware AI scaling, positioning it to lead in responsible AI innovation while navigating regulatory and ethical challenges worldwide.

Overall, Dylan Scandinaro’s appointment demonstrates that people and process will shape the next era of AI safety, making executive hires as impactful as policy frameworks.
