[NeurIPS Best Paper] 1000 Layer Networks for Self-Supervised RL — Kevin Wang et al, Princeton

Latent Space: The AI Engineer Podcast
about 1 month ago · 28m

From undergraduate research seminars at Princeton to winning the Best Paper award at NeurIPS 2025, Kevin Wang, Ishaan Javali, Michał Bortkiewicz, Tomasz Trzcinski, and Benjamin Eysenbach defied conventional wisdom by scaling reinforcement learning networks to 1,000 layers deep, unlocking performance gains that the RL community thought impossible.

We caught up with the team live at NeurIPS to dig into the story behind RL1000: why deep networks have worked in language and vision but failed in RL for over a decade (spoiler: it's not just about depth, it's about the objective); how they discovered that self-supervised RL (learning representations of states, actions, and future states via contrastive learning) scales where value-based methods collapse; the critical architectural tricks that made it work (residual connections, layer normalization, and a shift from regression to classification); why scaling depth is more parameter-efficient than scaling width (linear vs. quadratic parameter growth); how JAX and GPU-accelerated environments let them collect hundreds of millions of transitions in hours (the data abundance that unlocked scaling in the first place); the "critical depth" phenomenon where performance doesn't just improve, it multiplies, once you cross 15M+ transitions and add the right architectural components; why this isn't just "make networks bigger" but a fundamental shift in RL objectives (their code doesn't have a line saying "maximize rewards"; it's pure self-supervised representation learning); how deep-teacher, shallow-student distillation could unlock deployment at scale (train frontier capabilities with 1,000 layers, distill down to efficient inference models); the robotics implications (goal-conditioned RL without human supervision or demonstrations, scaling architecture instead of scaling manual data collection); and their thesis that RL is finally ready to scale like language and vision, not by throwing compute at value functions, but by borrowing the self-supervised, representation-learning paradigms that made the rest of deep learning work.

Episode Content

A New Breakthrough in Deep Reinforcement Learning: How Do 1,000-Layer Networks Change the Game?

Overview

This episode interviews the NeurIPS Best Paper award team, who used an innovative self-supervised reinforcement learning approach to scale neural networks to a thousand layers deep, breaking the long-standing assumption that RL can only use shallow networks. The work points to new possibilities at the intersection of reinforcement learning and self-supervised learning, and opens a new path for fields such as robotics.

Key Topics

1. Background and Motivation

  • Field differences: In natural language processing and computer vision, large-scale deep networks are the standard paradigm, yet traditional reinforcement learning methods are still stuck at shallow networks of 2-3 layers
  • Core question: The team asked why reinforcement learning cannot gain performance from deeper networks the way other areas of deep learning do
  • Direction of the breakthrough: Shift to self-supervised reinforcement learning, learning representations of states, actions, and future states instead of a traditional value function

2. Key Technical Breakthroughs

  • Architectural innovation: Naively increasing depth degrades performance; depth must be combined with specific architectural components such as residual connections and layer normalization
  • Shift in the objective: A contrastive loss (classifying whether a future state belongs to the same trajectory) moves the learning burden from sparse, biased Q-learning onto a scalable classification problem (a minimal sketch follows this list)
  • Parameter efficiency: Parameters grow roughly linearly with depth but quadratically with width, so scaling depth wins on both parameter efficiency and sample efficiency
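To make the contrastive objective above concrete, here is a minimal sketch assuming an InfoNCE-style in-batch classification loss: each (state, action) embedding should score its own future-state embedding higher than the other futures in the batch. The encoders behind phi_sa and psi_future are hypothetical stand-ins, not the authors' actual networks.

```python
import jax
import jax.numpy as jnp

def contrastive_loss(phi_sa, psi_future):
    """InfoNCE-style loss over a batch of (state, action) and future-state embeddings.
    phi_sa, psi_future: [batch, dim] arrays from hypothetical encoders."""
    logits = phi_sa @ psi_future.T                   # [batch, batch] similarity matrix
    labels = jnp.arange(phi_sa.shape[0])             # positives sit on the diagonal
    log_probs = jax.nn.log_softmax(logits, axis=-1)
    return -jnp.mean(log_probs[labels, labels])      # cross-entropy toward the diagonal

# toy usage with random embeddings standing in for encoder outputs
phi = jax.random.normal(jax.random.PRNGKey(0), (256, 64))
psi = jax.random.normal(jax.random.PRNGKey(1), (256, 64))
print(contrastive_loss(phi, psi))
```

The key property, as discussed in the episode, is that this is a classification problem with dense learning signal rather than a regression toward bootstrapped value targets.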

3. Experimental Findings

  • Performance jump: Once sufficient depth is combined with the right architecture, performance multiplies rather than improving incrementally
  • Data requirements: More than 50 million state transitions were needed before the large performance gains appeared
  • Computational feasibility: The 1,000-layer experiments run on a single 80GB H100 GPU, making them easy to reproduce

4. Significance for the Field and Blurring Boundaries

  • Redefining reinforcement learning: The method does not directly maximize rewards; it solves tasks through representation learning, sitting at the intersection of reinforcement learning and self-supervised learning
  • Building intelligent systems: The work suggests that building intelligent systems may require combining insights from unsupervised, supervised, and reinforcement learning
  • Fundamental insight: Recasting the RL task as a representation-learning problem lets it borrow techniques already proven to scale in language and vision

Core Insights and Takeaways

1. A Shift in Methodology

  • From value learning to representation learning: The core reason traditional RL scales poorly is the inherent difficulty of learning value functions; representation learning provides a more stable training objective
  • Synergy between architecture and objective: Increasing depth alone or changing the objective alone is not enough; the breakthrough requires combining the right architecture with the right objective (a residual-block sketch follows this list)
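As an illustration of the architectural side, here is a minimal sketch of a pre-norm residual MLP block with layer normalization, the kind of components the team credits for making extreme depth trainable. The exact block ordering, activation, and initialization here are assumptions, not the paper's exact design.

```python
import jax
import jax.numpy as jnp

def layer_norm(x, eps=1e-6):
    mean = jnp.mean(x, axis=-1, keepdims=True)
    var = jnp.var(x, axis=-1, keepdims=True)
    return (x - mean) / jnp.sqrt(var + eps)

def residual_block(params, x):
    """Pre-norm residual block: x + MLP(LayerNorm(x)).
    The identity path lets gradients flow through hundreds of stacked blocks."""
    h = layer_norm(x)
    h = jax.nn.relu(h @ params["w1"] + params["b1"])
    h = h @ params["w2"] + params["b2"]
    return x + h

def init_block(key, dim, hidden):
    k1, k2 = jax.random.split(key)
    return {
        "w1": jax.random.normal(k1, (dim, hidden)) * (2.0 / dim) ** 0.5,
        "b1": jnp.zeros(hidden),
        "w2": jax.random.normal(k2, (hidden, dim)) * (2.0 / hidden) ** 0.5,
        "b2": jnp.zeros(dim),
    }

# scaling depth = stacking more of these blocks
blocks = [init_block(k, 256, 1024) for k in jax.random.split(jax.random.PRNGKey(0), 8)]
x = jnp.ones((4, 256))
for p in blocks:
    x = residual_block(p, x)
```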

2. A New Understanding of Scaling

  • Unlocking scaling along multiple axes: Successfully training deep networks also unlocked batch-size scaling, suggesting that network capacity was the key bottleneck that made batch-size scaling ineffective in traditional RL
  • Compound scaling effects: Once depth scales, width, batch size, and other axes can be scaled together, mirroring the trajectory of language models

3. Practical Outlook

  • Potential in robotics: A scalable option for goal-conditioned RL that reduces reliance on large amounts of human supervision or demonstration data
  • Efficiency trade-offs: Depth scaling is more parameter-efficient than width scaling, pointing to an optimization direction for resource-constrained settings (a rough parameter count follows this list)
  • Deployment strategy: "Deep teacher, shallow student" could become a viable deployment paradigm, transferring the performance of deep models to more efficient shallow networks via knowledge distillation
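To make the depth-versus-width trade-off concrete, here is a rough parameter count for a plain MLP. This is a deliberate simplification (the real architecture uses residual blocks and ignores input/output layers), but it shows why doubling depth roughly doubles parameters while doubling width roughly quadruples them.

```python
def mlp_param_count(depth, width):
    """Parameters of a plain MLP with `depth` hidden layers of size `width`
    (biases and input/output layers ignored for simplicity)."""
    return depth * width * width  # each hidden layer is a width x width matrix

base = mlp_param_count(depth=4, width=256)
print(mlp_param_count(depth=8, width=256) / base)  # 2x depth -> ~2x params (linear)
print(mlp_param_count(depth=4, width=512) / base)  # 2x width -> ~4x params (quadratic)
```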

4. Lessons for Research Practice

  • Cross-field borrowing: RL can learn from the successes of other areas of deep learning, especially in architecture design and learning objectives
  • A revolution in data collection: GPU-accelerated environments make it possible to collect large-scale RL data quickly, providing the foundation that scaling studies require (a vectorized-stepping sketch follows this list)
  • Blurring boundaries: The best intelligent systems may not be purely unsupervised, supervised, or reinforcement learning, but an organic blend of all three
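The data-collection point rests on vectorizing the environment itself. Below is a minimal sketch of the idea, assuming a purely functional toy environment; toy_step is a stand-in for illustration, not the simulators the authors used. With jax.vmap and jax.jit, thousands of environment copies step in lockstep on one accelerator, which is what makes hundreds of millions of transitions cheap to gather.

```python
import jax
import jax.numpy as jnp

def toy_step(state, action):
    """Stand-in for a functional environment step: next_state = f(state, action)."""
    next_state = state + 0.1 * action
    reward = -jnp.sum(next_state ** 2)
    return next_state, reward

# vectorize across thousands of parallel environment copies, then JIT-compile
batched_step = jax.jit(jax.vmap(toy_step))

num_envs, state_dim = 4096, 8
states = jnp.zeros((num_envs, state_dim))
actions = jnp.ones((num_envs, state_dim))
states, rewards = batched_step(states, actions)  # one call = 4096 transitions
```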

Future Directions

  1. Pushing scaling to its limits: With enough compute, scale depth, width, and batch size simultaneously to test the ceiling of RL capability
  2. Model compression and deployment: Study how techniques such as knowledge distillation can transfer the performance of deep models into more efficient inference models (a distillation sketch follows this list)
  3. Cross-task generalization: Explore how transferable the learned representations are across tasks and domains
  4. Connections to world models: Investigate the theoretical links between this approach and building environment or world models
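For the deep-teacher, shallow-student idea, here is a minimal sketch assuming distillation is done by matching the student's embeddings to a frozen teacher's embeddings on the same inputs; the loss choice, student_apply signature, and the linear student in the toy usage are illustrative assumptions, not the authors' method.

```python
import jax
import jax.numpy as jnp

def distillation_loss(student_params, teacher_out, states, student_apply):
    """Match a shallow student's embeddings to a frozen deep teacher's embeddings.
    student_apply is a hypothetical forward function: (params, states) -> embeddings."""
    student_out = student_apply(student_params, states)
    return jnp.mean((student_out - teacher_out) ** 2)

def distill_step(student_params, teacher_out, states, student_apply, lr=1e-3):
    """One plain gradient step of distillation (optimizer omitted for brevity)."""
    grads = jax.grad(distillation_loss)(student_params, teacher_out, states, student_apply)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, student_params, grads)

# toy usage with a linear student standing in for the shallow network
def student_apply(params, states):
    return states @ params["w"]

params = {"w": jax.random.normal(jax.random.PRNGKey(0), (8, 16))}
states = jax.random.normal(jax.random.PRNGKey(1), (32, 8))
teacher_out = jax.random.normal(jax.random.PRNGKey(2), (32, 16))  # stand-in for deep teacher output
params = distill_step(params, teacher_out, states, student_apply)
```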

This work does more than show that deep networks are feasible in reinforcement learning. It offers a new methodological lens: by redefining the learning objective and pairing it with the right architectural innovations, long-standing limits of a field can be broken and new research directions opened. It reminds researchers that the biggest breakthroughs sometimes come from questioning the most basic assumptions and boldly crossing the boundaries between fields.

