The Chinese firm has pulled back the curtain on how the top labs may be building their next-generation models. Now things get interesting.
The DeepSeek drama may have been briefly eclipsed by, you know, everything in Washington (which, if you can believe it, got even crazier Wednesday). But rest assured that over in Silicon Valley, there has been nonstop …
Yoshua Bengio, often called the “godfather of AI”, warns that the fierce competition between OpenAI and China’s DeepSeek could compromise AI safety.
DeepSeek—built in two months with a lean team and outdated chips—just dethroned ChatGPT as the #1 app on the US App Store. This isn't just a wake-up call; it's a siren blaring in the ears of every Indian founder …
DeepSeek’s R1 model rivals OpenAI’s o1 reasoning model across math, coding, and science benchmarks at roughly 3% of the cost.
The announcement confirms one of two rumors that circled the internet this week. The other was about superintelligence.
The Chinese startup DeepSeek released an AI reasoning model that appears to rival the abilities of a frontier model from OpenAI, the maker of ChatGPT.
Some AI researchers hailed DeepSeek’s R1 as a breakthrough on the same level as DeepMind’s AlphaZero, a 2017 model that became superhuman at the board games chess and Go purely by playing against itself and improving, rather than observing any human games.
With Meta’s Llama series already making waves in the AI community, the forthcoming Llama 4 promises to push boundaries even further. Zuckerberg outlined ambitious plans for this next-generation model, as well as Meta’s broader vision for personalised AI assistants, multimodal capabilities, and open-source collaboration.
Aravind Srinivas, CEO of Perplexity AI, is shaping AI and tech, pursuing a proposed TikTok merger while facing U.S. Green Card delays.
This new approach, based on natural selection, dramatically improves the reliability of large language models for practical tasks like trip planning. Here's how it works.
OpenAI’s new model, called GPT-4b micro, was trained to suggest ways to re-engineer the protein factors to increase their function. According to OpenAI, researchers used the model’s suggestions to change two of the Yamanaka factors to be more than 50 times as effective—at least according to some preliminary measures.