On DeepSeek

Is it really doomsday for U.S. AI companies? The harbinger of the apocalypse appears to be a blue whale.

Nvidia’s stock is down 12.5%. There’s a broad tech sell-off, and Big Tech seems a little uneasy.

The reason? A Chinese hedge fund built and trained a state-of-the-art LLM to give its spare GPUs something to do.

DeepSeek’s R1 model reportedly performs on par with OpenAI’s cutting-edge o1 model. The twist? DeepSeek claims to have trained it for a fraction of the cost of models like GPT-4 or Claude Sonnet—and did so on GPUs that are 3-4 years old. To top it off, the DeepSeek API is priced significantly lower than OpenAI’s.

Why did this trigger a sell-off of Nvidia (NVDA)?

  • It shows that building cutting-edge models doesn’t require tens of thousands of the latest Nvidia GPUs anymore.
  • DeepSeek’s models run at a fraction of the cost of large LLMs, which could shift demand away from Nvidia’s high-end hardware.

For U.S. companies, this is a wake-up call. The Biden-era export restrictions didn’t have the intended impact. But for anyone building on AI, there’s a silver lining:

  • Building LLMs and reasoning models is no longer limited to companies throwing billions at compute.
  • This will likely kick off an arms race as U.S. companies scramble to optimize costs and stay competitive with DeepSeek.
  • Data sovereignty will still matter—most companies won’t want their data processed by a Chinese-hosted model. If DeepSeek’s approach proves viable, expect U.S. providers to replicate it.
