[State of Evals] LMArena's $100M Vision — Anastasios Angelopoulos, LMArena

Latent Space: The AI Engineer Podcast
about 1 month ago · 24m

From building LMArena in a Berkeley basement to raising $100M and becoming the de facto leaderboard for frontier AI, Anastasios Angelopoulos returns to Latent Space to recap 2025 in one of the most influential platforms in AI—trusted by millions of users, every major lab, and the entire industry to answer one question: which model is actually best for real-world use cases?

We caught up with Anastasios live at NeurIPS 2025 to dig into the origin story (spoiler: it started as an academic project incubated by Anjney Midha at a16z, who formed an entity and gave grants before they even committed to starting a company), why they decided to spin out instead of staying academic or nonprofit (the only way to scale was to build a company), how they're spending that $100M (inference costs, React migration off Gradio, and hiring world-class talent across ML, product, and go-to-market), the "Leaderboard Illusion" controversy and why their response demolished the paper's claims (factual errors, misrepresentation of open vs. closed source sampling, and ignoring the transparency of preview testing that the community loves), why platform integrity comes first (the public leaderboard is a charity, not a pay-to-play system—models can't pay to get on, can't pay to get off, and scores reflect millions of real votes), how they're expanding into occupational verticals (medicine, legal, finance, creative marketing) and multimodal arenas (video coming soon), why consumer retention is earned every single day (sign-in and persistent history were the unlock, but users are fickle and can leave at any moment), the Gemini Nano Banana moment that changed Google's market share overnight (and why multimodal models are becoming economically critical for marketing, design, and AI-for-science), how they're thinking about agents and harnesses (Code Arena evaluates models, but maybe it should evaluate full agents like Devin), and his vision for Arena as the central evaluation platform that provides the North Star for the industry—constantly fresh, immune to overfitting, and grounded in millions of real-world conversations from real users.

Episode Content

From a Berkeley Basement to the King of AI Arenas: How Arena Is Reshaping AI Model Evaluation

Overview

This episode features Anastasios Angelopoulos, co-founder of the AI model evaluation platform Arena. He explains how Arena grew from a Berkeley academic project (LMSYS) into an independent company with $100M in funding, and discusses its core mission, operating principles, the challenges it faces, and its influence on the AI evaluation ecosystem.

Key Topics and Discussion

1. Origins and Transformation: From Academic Project to Independent Company

  • Incubation and early days: Arena began as an incubation project by venture investor Anjney Midha, who discovered the team at Berkeley and provided resources and funding before they had even decided to start a company, with the understanding that the team could walk away at any time if they chose not to found one. This was an unusually aggressive investment approach.
  • The decisive moment: The team ultimately chose to incorporate because they realized that only a company could provide the resources, distribution, and platform quality needed to scale Arena and fulfill its mission. An academic project or nonprofit framework could not support that vision.
  • Brand moment: Securing the concise "Arena" handle on X (formerly Twitter) marked an important step toward brand independence.

2. What Is Arena: Core Philosophy and Market Positioning

  • Core value: Arena measures, understands, and advances frontier AI capabilities through organic feedback from real users on real use cases. Users submit their own real prompts and vote on model outputs, producing dynamic, reliable leaderboards.
  • Key numbers:
    • Over 250 million conversations have taken place on the platform.
    • Tens of millions of conversations per month, making it one of the largest consumer LLM platforms.
    • More than 5 million users, roughly 25% of whom write software for a living; the user base is highly diverse.
    • Around half of users are signed in, which helps the platform understand and survey them.
  • How it differs from competitors: The main contrast is with platforms like Artificial Analysis.
    • Arena: built on organic, real-world usage and voting.
    • Artificial Analysis: aggregates and independently re-runs public benchmarks, acting more like a "Gartner of AI" focused on consulting and analyst reports.
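The leaderboard described above is derived from pairwise user votes. LMArena's published methodology is based on Bradley-Terry-style rating models; the sketch below is a minimal illustration of that idea, not their actual pipeline. The function name, the simple `(winner, loser)` vote format, and the fixed iteration count are assumptions for illustration.

```python
from collections import defaultdict

def bradley_terry(votes, iters=100):
    """Fit Bradley-Terry strengths from pairwise (winner, loser) votes.

    Uses the classic MM (minorization-maximization) update:
        p_i <- W_i / sum_{j != i} n_ij / (p_i + p_j)
    where W_i is model i's total wins and n_ij the number of
    comparisons between i and j. Returns model -> strength,
    normalized so strengths sum to 1.
    """
    wins = defaultdict(float)    # total wins per model
    pairs = defaultdict(float)   # comparison count per unordered pair
    models = set()
    for winner, loser in votes:
        wins[winner] += 1
        pairs[frozenset((winner, loser))] += 1
        models.update((winner, loser))

    strengths = {m: 1.0 for m in models}
    for _ in range(iters):
        updated = {}
        for m in models:
            denom = 0.0
            for other in models:
                if other == m:
                    continue
                n = pairs[frozenset((m, other))]
                if n:
                    denom += n / (strengths[m] + strengths[other])
            updated[m] = wins[m] / denom if denom else strengths[m]
        total = sum(updated.values())
        strengths = {m: v / total for m, v in updated.items()}
    return strengths
```

For example, if model A beats model B in 3 of 4 head-to-head votes, the fitted strengths come out 0.75 vs. 0.25, so A's estimated win probability against B matches its observed win rate. A production system would add confidence intervals, style/length controls, and anonymization, as LMArena has discussed publicly.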

3. Operating Principles and Roadmap

  • Non-negotiable principle: Platform integrity comes first. The public leaderboard is a public good; models cannot pay to get on it or pay to get off it. Scores are computed entirely from millions of real user votes, reflecting model performance transparently and fairly.
  • Stack upgrade: For better developer experience, performance, and ease of hiring, Arena is migrating fully from Gradio to React. Gradio deserves credit for carrying the platform to millions of users.
  • Directions:
    • Verticals and multimodality: Launching "expert arenas" that show how models perform in professional domains such as law, medicine, and finance, with plans to expand into multimodal evaluation such as video.
    • Broader evaluation scope: CodeArena has begun supporting evaluation of AI agent frameworks (such as Devin), and may more broadly evaluate frameworks rather than just models.
    • An open API: Under consideration, but as a startup the current priority is focusing on making the core arena experience excellent.

4. Responding to Controversy and Industry Impact

  • "The Leaderboard Illusion" paper: Arena was criticized in a paper titled "The Leaderboard Illusion," which chiefly alleged that undisclosed "private testing" (pre-release model evaluation) made the leaderboard unfair.
  • Arena's response: The team published a public rebuttal identifying a series of factual errors in the paper, such as misstating the sampling of open- vs. closed-source models. Arena emphasized that pre-release testing (under code names like "Nano Banana") is an open, transparent community activity that users love, and does not affect the statistical reliability of the leaderboard for released models.
  • The Nano Banana milestone: The model behind this code name (derived from the nickname of Naina, a Google product manager) caused a sensation in image generation and even shifted the market strategies of giants like Google. It convinced the team that multimodal models (especially image and video generation) may become one of AI's most economically valuable areas, with enormous demand in marketing, design, and content creation.

5. Community Building and Challenges

  • Key to success: Core value features such as persistent chat history are a major driver of user sign-ins and retention.
  • Core insight: In consumer markets, every user is earned and must be re-earned every day. Users are fickle, so the team must continuously think about the value it delivers to them.
  • Hiring: Arena is actively recruiting top experts in consumer product, machine learning, B2B, and marketing to build a high-performing team.

Key Takeaways and Action Items

  1. The value of real feedback: In AI evaluation, organic data from massive volumes of real user scenarios reflects a model's practical capability and user experience better than benchmarks alone.
  2. Integrity is everything: For an evaluation platform, staying neutral and transparent and refusing paid interference is the foundation of credibility and long-term value.
  3. A new incubation model: Investor Anjney Midha's aggressive "support first, decide later" approach gave a promising academic team valuable room to experiment and to transition into a company.
  4. Multimodality is a major frontier: Image and video generation are creating enormous economic value and are among the fastest-growing applications in content creation and marketing.
  5. Community-driven growth: For consumer products, delivering irreplaceable core value (like Arena's model comparison and voting) and deepening stickiness through features such as chat history is key to retention and growth.
  6. The scope of evaluation is expanding: AI evaluation is no longer limited to language models; it is rapidly moving toward coding agents, professional verticals, and multimodality.

Call to action: Anastasios invites top talent across fields to join Arena, and welcomes AI companies like Cognition (the creator of Devin) to plug their agent frameworks into platforms such as CodeArena for public evaluation, jointly shaping how AI capability is measured.
