
Ryan Kidd, Co-Executive Director of MATS, shares an inside view of the AI safety field and the world’s largest AI safety research talent pipeline.
PSA for AI builders: Interested in alignment, governance, or AI safety?
Learn more about the MATS Summer 2026 Fellowship and submit your name to be notified when applications open at matsprogram.org/TCR.
He discusses AGI timelines, the blurred line between safety and capabilities work, and why expert disagreement remains so high.
In the second half, Ryan breaks down MATS’ research archetypes, what top AI safety organizations are looking for, and how applicants can stand out with the right projects, skills, and career strategy.
This episode of The Cognitive Revolution podcast features Ryan Kidd, Co-Executive Director of MATS, for a deep dive into the current state of AI safety research, AGI timeline forecasts, and how MATS operates as the world's largest AI safety talent pipeline. MATS counts 446 alumni across major AI safety organizations, and its mentor roster includes top researchers from Redwood Research, Anthropic, DeepMind, and other institutions.
Even the most authoritative forecasts span a range of 2030 to 2033, and superintelligence could arrive anywhere from six months to ten years after AGI. This uncertainty makes diversifying across different research paths the only defensible position.
AI systems are showing better value alignment than expected, but they may be "fragmented" intelligences rather than coherent optimizers. We should stay alert to the possibility of a "sharp left turn": an AI acquiring long-term goals after a fundamental shift in how it processes information internally.
"All safety work is fundamentally capabilities work." The history of RLHF shows that safety techniques often become catalysts for capability breakthroughs. Fully isolated safety research would require extreme secrecy and resources, making it hard to achieve in practice.
MATS builds a diverse talent pipeline by identifying distinct research archetypes (Connectors, Iterators, and Amplifiers). As the field matures, demand for Connectors and Amplifiers is growing.
Through the lens of MATS, Ryan Kidd paints a picture of AI safety that is both hopeful and cautious. The field has made notable progress in technical advances and talent development, but the core challenges remain severe: uncertain timelines, the entanglement of safety and capabilities, and persistent deception risks. As a talent hub, MATS is building the human capital needed to meet these challenges by cultivating diverse research archetypes. For researchers aspiring to work on AI safety, now is a pivotal moment to enter the field through channels like MATS and contribute solutions.
Application info: the deadline for the MATS Summer 2026 program is January 18; for details, visit matsprogram.org/TCR