Meta Poaches 8 OpenAI Researchers in 72-Hour AI Talent War
Meta Platforms poached eight OpenAI researchers in just 72 hours last week, capping a frenetic months-long hiring blitz aimed at building a 50-strong “superintelligence” unit under the direct supervision of chief executive Mark Zuckerberg. The unprecedented wave of defections, combined with Meta’s multibillion-dollar bet on Scale AI, signals a no-holds-barred race to catch up with rivals in artificial general intelligence (AGI).
The scramble comes as Meta’s flagship Llama 4 model falters, insiders say, and as rival OpenAI scrambles to stop further staff losses amid talk of nine-figure signing bonuses.
Zurich Defection Sparks Talent Exodus
Meta’s raid burst into view on June 26, when it poached OpenAI Zurich team members Lucas Beyer, Alexander Kolesnikov, and Xiaohua Zhai. All three had established OpenAI’s Swiss outpost in 2024 and were considered crucial to its European research pipeline.
Two days later, Meta confirmed it was hiring four more OpenAI researchers—Shengjia Zhao, Jiahui Yu, Shuchao Bi, and Hongyu Ren—bringing the week’s haul to seven. A WIRED investigation corroborated the four departures, noting that the researchers’ Slack profiles were deactivated the moment they resigned.
The eighth coup had arrived quietly weeks earlier: on June 1, Meta hired Trapit Bansal, best known for co-developing the o1 reasoning model alongside OpenAI co-founder Ilya Sutskever. Insiders say Bansal now oversees Meta’s reasoning-focused benchmarks.
Meta’s ‘Recruiting Party’ Playbook
The mastermind behind the exodus is Zuckerberg himself, according to people present at his dinners in Lake Tahoe and Palo Alto. The CEO is personally hand-picking approximately 50 top AI researchers and engineers for the new team. Invitations land via a WhatsApp channel cheerfully named “Recruiting Party 🎉,” two attendees said.
Zuckerberg’s proximity tactics extend to office real estate. New hires are seated within earshot of his glass-walled Menlo Park cubicle, emphasizing the project’s stature inside Meta’s famously flat hierarchy, current employees said.
Money Talks: The Bonus Debate
Compensation remains the sharpest flashpoint. On a June 18 podcast hosted by his brother, OpenAI chief Sam Altman said Meta had been making giant offers, including $100 million signing bonuses, to pry staff away. Days later, Meta Chief Technology Officer Andrew Bosworth shot back in a leaked town-hall recording: “Sam is just being dishonest here,” arguing that actual packages hover between $5 million and $20 million in stock and incentives.
Lucas Beyer publicly dismissed the nine-figure number as “fake news,” and recruiters say the eye-watering benchmark serves more as negotiating leverage than a real-world payout.
Why the rush to open the checkbook? Multiple sources cite mounting frustration over Meta’s next-generation model. Performance issues led Meta to postpone the release of Llama 4 after the model scored poorly on internal reasoning and mathematics benchmarks—a setback to Zuckerberg’s pledge to lead in open-source AGI.
Llama 4 Stumbles Drive Urgency
The stalled model looms large over every recruitment conversation, according to three engineers briefed on the effort. Meta’s existing team struggled to surpass 16 percent on the Aider Polyglot coding benchmark, falling behind specialist models from Google and Anthropic. While executive leadership wants to ship an update by early Q4, skeptics note that dataset quality and retrieval infrastructure—not headcount—remain the bottleneck.
Still, personnel matters. Hongyu Ren, for example, led post-training at OpenAI for the o3 and o4-mini models, an area where Meta lacks in-house expertise. Shengjia Zhao, another recruit, is recognized for his deep-learning research and contributed to the startup’s GPT-4 model—knowledge that could translate into more robust pre-training pipelines.
Scale AI Deal Lays Data Groundwork
Headcount alone cannot rescue a faltering model. In mid-June, Meta bolstered its data backbone with a strategic $14.3 billion investment in Scale AI, acquiring a 49 percent stake. The move hands Meta priority access to Scale’s annotation services and a seat next to founder Alexandr Wang, who will co-lead the superintelligence unit while retaining his board post.
Several analysts view the partnership as a way to circumvent the constraints of proprietary datasets. Scale works with OpenAI, Anthropic and Google, offering Meta an invaluable window into competitor data strategies—assuming confidentiality firewalls hold.
What Meta Gains, What OpenAI Loses
Inside OpenAI, morale is bruised but not broken, according to observers. Slack channels registering departures fill with cheering emojis for colleagues moving on—then turn somber as managers distribute urgent retention forms. Senior leadership granted equity refreshes for key staff and previewed a more generous parental leave package, according to an internal memo.
OpenAI still boasts world-class research breadth, but losing two project leads and a veteran engineer in a single week dents its momentum. The Zurich trio led Europe-centric compliance filters, sources say; their departure complicates OpenAI’s planned EU roll-out.
Meta, meanwhile, fills glaring gaps. Zhai’s expertise spans retrieval-augmented generation at scale; Bi specializes in multimodal alignment; Yu once led DeepMind’s perception work. Together, they form the backbone of what one insider calls the “AGI kitchen sink,” a heady mix of model-of-everything ambitions and rapid shipping mandates.
Yet, risk shadows the rewards. Culture-clash veterans warn that Meta’s historically advertising-centric DNA may find pure-research hires restless. And parachuting newcomers into an already crowded hierarchy can breed redundancy—one engineer jokes that Meta now has “three teams rewriting the same data loader in PyTorch.”
Broader Implications for the AI Arms Race
The latest exodus crystallizes two truths. First, the AI labor pool is tightening, driving bid-up dynamics unprecedented even by frothy tech-cycle standards. Second, open-source evangelism—Meta’s calling card—offers a powerful magnet for researchers tired of closed-door secrecy.
Industry watchers expect further consolidation. Amazon, Microsoft, and Apple have all floated eight-figure packages to senior staff this year, recruiters say. Regulators may yet scrutinize bundled stock options that tether talent for a decade, though antitrust tools remain blunt instruments against human mobility.
For now, the center of gravity in AGI shifts by the week. Meta’s raid proves that, in the race to build machines that reason, culture and cash trump loyalty. Whether the strategy yields scientific breakthroughs or expensive churn will shape the narrative heading into 2026.
Key Takeaways:
- Meta hired eight OpenAI researchers in three days, including the entire Zurich office leadership.
- Zuckerberg personally leads a 50-person “superintelligence” recruitment drive, wooing prospects via WhatsApp and dinner parties.
- Rumored $100 million signing bonuses are disputed; Meta’s CTO calls Altman’s claim “dishonest.”
- Llama 4’s underwhelming performance and delayed release fuel Meta’s urgency to import expertise.
- A $14.3 billion stake in Scale AI secures training data and brings founder Alexandr Wang inside Meta’s AGI effort.
- The talent war signals escalating consolidation as tech giants vie for scarce researchers capable of delivering artificial general intelligence.
