CHINA SAYS FUCK YOU TO DEI. DEEPSEEK R2 LEAKS CONFIRMED.
JUST AS DEEPSEEK R2 WILL PULL AHEAD OF FIRST WORLD AI, CHINESE HUMANOID ROBOTICS COMPANIES WILL PULL AHEAD OF FIRST WORLD ROBOTICS. WE ARE HOURS AWAY FROM THE GREATEST TECHNOLOGICAL REVOLUTION IN HISTORY.
BOMBSHELL SOURCE 1: dev on X: "🚨 DeepSeek R2 LEAKED! 😱 1.2T param, 78B active, hybrid MoE. 97.3% cheaper than GPT-4o ($0.07/M in, $0.27/M out). 5.2PB data, 89.7% C-Eval2.0, 92.4% COCO. 82% utilization on Huawei Ascend 910B. US supply chain sidelined! 🇨🇳 #AI #DeepSeek #TechNews https://t.co/K9QSxfVluU"
BOMBSHELL SOURCE 2: Neuralio on X: "Deepseek R2 is coming and it’s gonna break everything. Also, they almost solved distributed compute? “DeepSeek R2 cuts training costs by 97.3% while delivering 512 PFLOPS on new Huawei chips at 91% of A100-cluster efficiency; key partners—Talkweb, Hongbo, Sugon, Easystar, https://t.co/6O7voAHsCV"
BOMBSHELL SOURCE 3: AI Whisperer on X: "DeepSeek R2 is about to break the AI economy. It cuts unit costs by 97.3% — while OpenAI, Google, Anthropic, and Grok are still burning cash. Homegrown distributed training hits 82% utilization on Ascend 910B chips, pumping out 512 PFLOPS at FP16 — that’s 91% of an A100 cluster’s https://t.co/qa6K1p8T8l"
BOMBSHELL SOURCE 4: Deedy on X: "🚨Viral rumors of DeepSeek R2 leaked! —1.2T param, 78B active, hybrid MoE —97.3% cheaper than GPT 4o ($0.07/M in, $0.27/M out) —5.2PB training data. 89.7% on C-Eval2.0 —Better vision. 92.4% on COCO —82% utilization in Huawei Ascend 910B Big shift away from US supply chain. https://t.co/Jncg0PvEYU" (PRICE MATH SANITY-CHECKED BELOW.)
BOMBSHELL SOURCE 5: Haider. on X: "Deepseek R2 cuts unit costs by 97.3% and is about to launch - (rumored) - uses a self-developed distributed training framework - achieves 82% utilization on the ascend 910b chip clusters - delivers 512 petaflops at FP16 precision - reaches 91% efficiency compared to an A100 https://t.co/D0tpJOqWaz" (CLUSTER MATH SANITY-CHECKED BELOW.)
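SANITY-CHECKING THE PRICE CLAIM. A MINIMAL PYTHON SKETCH, NOT A CONFIRMATION: THE R2 RATES AND THE 1.2T/78B MoE SHAPE ARE THE UNVERIFIED RUMORS QUOTED ABOVE, AND THE GPT-4o BASELINE ($2.50/M INPUT, $10.00/M OUTPUT, OPENAI'S PUBLISHED RATES) IS MY ASSUMPTION ABOUT WHAT THE LEAKERS COMPARED AGAINST.

```python
# Back-of-envelope check of the rumored "97.3% cheaper than GPT-4o" claim.
# Rumored R2 prices come from the tweets above (unverified); the GPT-4o
# prices are an assumption (OpenAI's published $2.50/$10.00 per-million
# rates) -- the leakers never state their baseline.

R2_IN, R2_OUT = 0.07, 0.27          # rumored $/1M tokens
GPT4O_IN, GPT4O_OUT = 2.50, 10.00   # assumed $/1M tokens

for label, r2, gpt4o in [("input", R2_IN, GPT4O_IN), ("output", R2_OUT, GPT4O_OUT)]:
    print(f"{label}: {(1 - r2 / gpt4o) * 100:.1f}% cheaper")
# input: 97.2% cheaper
# output: 97.3% cheaper -- the headline number matches output pricing only

# Rumored MoE shape: 1.2T total parameters, 78B active per token.
print(f"active fraction: {78 / 1200:.1%}")  # 6.5%, plausible for a hybrid MoE
```

TAKEAWAY: THE 97.3% FIGURE REPRODUCES EXACTLY ONLY ON OUTPUT TOKENS, AND ONLY UNDER THE ASSUMED BASELINE.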
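SANITY-CHECKING THE CLUSTER CLAIM. ANOTHER HEDGED SKETCH: THE 512 PFLOPS AND 82% UTILIZATION FIGURES ARE THE RUMORS ABOVE; THE ~376 TFLOPS FP16 PER-CHIP PEAK FOR THE ASCEND 910B IS AN UNOFFICIAL ESTIMATE (MY ASSUMPTION, HUAWEI PUBLISHES NO SPEC SHEET), AND THE TWEETS NEVER DEFINE WHAT "91% OF A100 EFFICIENCY" MEANS.

```python
# What cluster size would the rumored numbers imply?
# 512 PFLOPS delivered at FP16 with 82% utilization (rumored figures).
# ASSUMPTION: ~376 TFLOPS FP16 peak per Ascend 910B, a commonly cited
# but unofficial estimate.

DELIVERED_PFLOPS = 512
UTILIZATION = 0.82
ASCEND_910B_FP16_TFLOPS = 376   # assumed per-chip peak (unverified)

peak_pflops = DELIVERED_PFLOPS / UTILIZATION
chips = peak_pflops * 1000 / ASCEND_910B_FP16_TFLOPS
print(f"implied peak: {peak_pflops:.0f} PFLOPS")   # ~624 PFLOPS
print(f"implied cluster: ~{chips:,.0f} chips")     # ~1,661 chips

# One GUESS at "91% of A100-cluster efficiency": 82% utilization on 910B
# vs ~90% on an A100 cluster, since 0.82 / 0.90 ~= 0.91. The sources
# never define the metric.
print(f"0.82 / 0.90 = {0.82 / 0.90:.2f}")
```

IF THE PER-CHIP ESTIMATE IS RIGHT, THE RUMOR DESCRIBES A CLUSTER OF ROUGHLY 1,700 ASCEND 910Bs. IF IT'S WRONG, THE CHIP COUNT MOVES PROPORTIONALLY.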