DeepSeek AI: Key Facts, Figures, and Growth Statistics

DeepSeek AI is a Chinese artificial intelligence startup that burst onto the global tech scene with unprecedented speed and impact.

Founded in 2023 under the umbrella of a Hangzhou-based hedge fund, DeepSeek’s open-source large language models (LLMs) have quickly come to rival those of industry leaders like OpenAI and Google – but at a fraction of the cost.

Within two weeks of releasing its first free chatbot app in early 2025, DeepSeek shot to the top of app store charts in the United States.

Its latest AI model’s debut even triggered a global tech stock sell-off, erasing nearly $1 trillion in market value from companies like Nvidia, Oracle, and Meta.

In short, DeepSeek has rapidly become a name to know in AI, combining massive scale with open accessibility.

Below, we break down all the essential facts, figures, and milestones behind DeepSeek’s meteoric rise.

What Is DeepSeek and Who Is Behind It?

DeepSeek is an independent AI research lab founded in May 2023 by Liang Wenfeng, who is also co-founder of the High-Flyer quantitative hedge fund in China.

High-Flyer fully backs and owns DeepSeek, allowing the startup to focus on pure research and efficiency rather than immediate profits.

DeepSeek has developed a series of advanced LLMs that rival the capabilities of OpenAI’s ChatGPT, Google’s Gemini, and other Western models. Uniquely, DeepSeek makes its model weights “open-source” (openly available) and the service is free to use, contrasting with many U.S. competitors’ closed, paywalled approaches. This openness – combined with novel training techniques – enabled DeepSeek to train enormous models at extremely low cost.

For example, the company claims it trained its 2024 DeepSeek-V3 model (with 671 billion parameters) for only around $6 million, versus an estimated $100 million spent to train OpenAI’s GPT-4.

Such cost-efficiency and transparency have led some observers to hail DeepSeek’s emergence as a “Sputnik moment” for AI in the West, signaling a new wave of international competition in artificial intelligence.

Rapid User Growth and Adoption Milestones

DeepSeek’s growth in users has been explosive. The startup launched its first public chatbot app (powered by the DeepSeek-R1 model) in January 2025, and within two weeks it had rocketed to the #1 free app on Apple’s App Store in the U.S., surpassing OpenAI’s ChatGPT app.

In fact, on January 27, 2025, DeepSeek was the top free iOS app in 52 countries, and similarly climbed high on Google Play rankings.

This surge was driven by the buzz around DeepSeek’s open AI model releases and their remarkable cost-performance ratio – news that even sent Nvidia’s stock price down 17% as investors digested the implications.

  • Fastest to Millions of Users: DeepSeek reached its first 1 million users in about 14 days after launch – slower than ChatGPT’s record of 5 days – but then hit 10 million users in just 20 days, half the time ChatGPT took to reach the same milestone. In other words, once momentum picked up, DeepSeek’s adoption accelerated even faster than ChatGPT’s did. This makes DeepSeek one of the fastest-growing consumer applications ever, rivaling viral apps like Threads in early growth velocity.
  • App Downloads: By late January 2025, the DeepSeek mobile app had been downloaded over 2.6 million times across iOS and Android. Remarkably, the app jumped from about 1 million total downloads to 2.6 million in a single weekend as it trended at #1. More than 80% of DeepSeek’s downloads up to that point occurred within one week of its App Store debut – a testament to viral word-of-mouth growth. Within roughly 15 days of launch, the app surpassed 10 million global downloads, and reports suggest that by early February 2025 it may have exceeded 20 million installs worldwide as its popularity snowballed. By January 2025, DeepSeek was estimated to have over 22 million daily active users globally – an astonishing user base for an app that was brand new that month.

  • Global Reach: Although DeepSeek originated in China, its user adoption has been truly international. China accounted for only about 23% of total app downloads as of late January 2025, with substantial uptake in other countries including the United States (~15%) and Egypt (~6%). In fact, DeepSeek amassed 33.7 million monthly active users by January and became the world’s 4th most popular AI app by user count, with its top markets being China, India, and Indonesia (together ~51% of users), followed by notable usage in the U.S. and Europe. This global traction underscores that demand for powerful free AI tools spans many languages and regions, not just China.

DeepSeek’s AI Models and Release Timeline

A big part of DeepSeek’s success is its series of advanced AI models developed in rapid succession.

In less than two years, the company released multiple LLMs, each pushing the envelope in size or capability.

Here is a brief timeline of DeepSeek’s model releases and their key features:

  • May 2023 – Founding: DeepSeek AI is established as a spin-off from High-Flyer’s AI research division. Equipped with funding from the hedge fund (which reportedly had ¥100 billion in assets under management, ~$15 billion), DeepSeek set out to build large-scale AI models efficiently.
  • Nov 2023 – DeepSeek Coder: Release of the first model, an open-source AI coding assistant. Ranging from 1B to 33B parameters, DeepSeek Coder was trained mostly on code (87% code, 13% natural language) to help with programming tasks. It was released under a very permissive license for free research and commercial use.
  • Dec 2023 – DeepSeek LLM (67B): The lab’s first general-purpose language model, with 67 billion parameters, reportedly outperforming Llama 2 70B and approaching GPT-3.5-level performance on many benchmarks. This proved DeepSeek could compete with established AI players in broad natural language understanding.
  • May 2024 – DeepSeek-V2 (236B): A major upgrade introducing a 236 billion-parameter model (with 21B active parameters) utilizing an innovative Mixture-of-Experts (MoE) architecture and Multi-head Latent Attention. DeepSeek-V2 achieved much higher inference efficiency and more economical training, demonstrating state-of-the-art results at lower cost.
  • July 2024 – DeepSeek-Coder V2: An improved coding model with 236B parameters, featuring an extended 128,000-token context window and support for 300+ programming languages. This model tackled more complex coding and math reasoning problems by handling very large inputs (such as entire codebases).
  • Dec 2024 – DeepSeek-V3 (671B): The third-generation flagship model, with a staggering 671 billion parameters (37B active). DeepSeek-V3 employed advanced MoE layers and FP8 mixed-precision training, achieving new benchmarks in multi-domain language performance while keeping training costs low. It marked DeepSeek’s entry into the ultra-large model club, yet still with an open MIT license release.
  • Jan 2025 – DeepSeek-R1: Debut of DeepSeek’s most advanced model to date, focused on reasoning and problem-solving. DeepSeek-R1 also totals ~671B parameters and was trained primarily via reinforcement learning for complex reasoning tasks (its precursor, DeepSeek-R1-Zero, used pure RL with no supervised fine-tuning). Upon its release (alongside the DeepSeek chatbot app), R1 was made fully open-source. It is considered competitive with OpenAI’s top models in many areas – for example, matching or beating them on math and coding benchmarks – while remaining free to use. DeepSeek-R1’s open release is what spurred the industry “shock waves,” as it proved that a lean Chinese startup could produce a GPT-4 class AI model at a tiny fraction of the cost, and give it away openly.
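The “active parameters” figures in the timeline above (e.g. 671B total but only 37B active) come from Mixture-of-Experts routing: a router sends each token to only a few expert sub-networks, so only a small fraction of the total weights are computed per token. The toy sketch below illustrates the idea with top-2 routing over 8 small experts – it is purely illustrative and not DeepSeek’s implementation, which adds shared experts, load balancing, and Multi-head Latent Attention on top:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Mixture-of-Experts layer: 8 experts, each token routed to the top 2.
# Only the selected experts' weights run per token, so the "active"
# parameter count is a small fraction of the total.
n_experts, d_model, top_k = 8, 16, 2
experts = [rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts)) * 0.1

def moe_forward(x):
    """x: (d_model,) vector for one token. Returns the mixed expert output."""
    logits = x @ router                       # router score for each expert
    top = np.argsort(logits)[-top_k:]         # indices of the top-k experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                  # softmax over the selected experts only
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(d_model)
out = moe_forward(token)                      # shape (16,), same as a dense layer

# Active vs. total expert parameters, analogous to "671B total / 37B active":
total = n_experts * d_model * d_model
active = top_k * d_model * d_model
print(active / total)  # 0.25 – only a quarter of the weights run per token
```

The output looks like a dense layer’s, but the compute cost scales with `top_k`, not `n_experts` – which is how a 671B-parameter model can be served with roughly the inference cost of a 37B one.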

These models illustrate DeepSeek’s rapid innovation cycle. In roughly one year, the team progressed from a niche code model to a full-fledged GPT-4 competitor.

Notably, all of DeepSeek’s major models have been released with open licenses (mostly MIT license) for anyone to use or build upon.

This is a stark contrast to most proprietary models and has helped fuel a community of developers – over 5 million downloads of DeepSeek model files on Hugging Face were reported, with hundreds of derivative models already spawned by the open-source community.

Performance vs. Competitors

Despite being newer to the scene, DeepSeek’s AI models hold their own against those from OpenAI, Google, and others:

  • Benchmark Results: DeepSeek-R1 in particular has demonstrated world-class performance on several AI benchmarks. For instance, on a challenging math reasoning exam (AIME 2024), R1 scored 79.8%, slightly edging out OpenAI’s o1 model (79.2%). On a software engineering benchmark (SWE-bench Verified), DeepSeek-R1 also narrowly beat OpenAI’s model (49.2% vs 48.9%). In general knowledge tests, OpenAI kept a small lead (e.g. 91.8% vs 90.8% on MMLU), but the gap is minor. The takeaway is that DeepSeek’s latest model is on par with the best in many domains – a remarkable achievement for a newcomer.
  • Cost and Pricing Advantages: DeepSeek’s strategy emphasizes efficiency and low cost, and this shows in its operations. According to reports, training DeepSeek-V3, the base model underlying R1, cost only $5.58 million, versus the tens or hundreds of millions spent by Western labs on comparable models. OpenAI CEO Sam Altman has indicated GPT-4 cost upwards of $100 million to build. DeepSeek achieved its low costs through clever techniques like mixture-of-experts layers (which reduce active parameters) and by making do with less powerful hardware under U.S. export restrictions. This efficiency carries over to user pricing as well – DeepSeek’s API is dramatically cheaper than OpenAI’s. For example, 1 million output tokens on DeepSeek’s model cost about $2.19, versus $60 for the equivalent from OpenAI’s o1 model. Input token costs are similarly lower by an order of magnitude. Moreover, the DeepSeek chatbot is currently free for consumers, whereas OpenAI’s ChatGPT has a paid premium tier. This pricing gap has certainly helped attract users and businesses to DeepSeek.
  • Industry Impact: The arrival of DeepSeek’s high-performance, low-cost AI has sent shockwaves through the tech industry. When DeepSeek-R1 launched and its app went viral in January 2025, it prompted many to question the competitive moat of established players. Investors saw DeepSeek’s success as a sign that AI advancements could come more cheaply than expected, which in turn might reduce the dominance (and spending power) of incumbents. This sentiment contributed to a sharp sell-off in tech stocks: for example, Nvidia’s stock plunged ~17% in one day on fears that demand for its high-end AI chips might slow if companies follow DeepSeek’s efficient approach. Across big tech, about $1 trillion in combined market cap was wiped out in the aftermath of DeepSeek’s debut. Clearly, DeepSeek has upended assumptions in the AI sector, proving that a small, focused team can challenge the giants. Major firms from Microsoft to Google are now on alert, and traders speak of a “DeepSeek effect” influencing AI investments globally.
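The per-token pricing gap described above is easy to quantify. Here is a quick back-of-the-envelope comparison using the figures cited in this article ($2.19 vs $60 per 1 million output tokens); actual API pricing varies by model and tier and changes over time, so treat these as snapshot numbers:

```python
# Cost comparison using the per-1M-output-token prices cited above
# ($2.19 for DeepSeek-R1 vs $60 for OpenAI's o1). Snapshot figures only;
# real pricing changes over time.
PRICE_PER_M = {"deepseek-r1": 2.19, "openai-o1": 60.00}

def output_cost(model: str, tokens: int) -> float:
    """Cost in USD to generate `tokens` output tokens on `model`."""
    return PRICE_PER_M[model] * tokens / 1_000_000

# Example: a workload generating 50 million output tokens per month.
for model in PRICE_PER_M:
    print(f"{model}: ${output_cost(model, 50_000_000):,.2f}/month")
# deepseek-r1: $109.50/month
# openai-o1: $3,000.00/month

ratio = PRICE_PER_M["openai-o1"] / PRICE_PER_M["deepseek-r1"]
print(round(ratio, 1))  # 27.4 – roughly a 27x gap per output token
```

At this scale the cited prices imply a difference of thousands of dollars per month, which helps explain why the pricing gap has been a major draw for businesses.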

Challenges and Future Outlook

While DeepSeek’s rise has been extraordinary, it also faces some challenges and questions going forward:

  • Content Safety and Moderation: Because DeepSeek’s models are open and not heavily gated, some tests have found they may fail to block harmful or disallowed content. In fact, one evaluation noted DeepSeek-R1 did not filter any “harmful prompts” in the test, indicating a 100% failure rate in that safety check. This contrasts with models like OpenAI’s which have more refined moderation (albeit at the cost of being closed-source). DeepSeek will need to improve guardrails as its user base grows to avoid misuse of its AI.
  • Regulatory and Privacy Concerns: Being a Chinese company, DeepSeek operates under China’s AI regulations and censorship rules. Observers have raised concerns that DeepSeek’s compliance with Chinese government content policies could limit its responses on sensitive topics. Additionally, DeepSeek’s privacy policy states that user data is stored on servers in China, which has prompted some wariness about data security and potential government access. As DeepSeek expands globally, such issues may invite scrutiny from foreign regulators. Already, some U.S. officials and analysts have speculated about whether apps like DeepSeek might face bans or restrictions due to national security considerations, similar to past debates around Chinese tech apps.
  • Competition and Innovation: The AI field moves quickly, and DeepSeek will have to continuously innovate to maintain its edge. Competitors like OpenAI, Google, Meta, and new startups are not standing still – for instance, new versions of Google’s Gemini and other frontier models are arriving in 2025. DeepSeek’s open-source approach means it doesn’t directly profit from users, so its business model relies on indirect avenues (enterprise services, API usage, or simply the backing of High-Flyer’s fund). It remains to be seen how sustainable this model is long-term and whether DeepSeek will pursue commercial avenues like paid products or partnerships.

Conclusion

In summary, DeepSeek has emerged as a powerful disruptor in the AI landscape, combining massive-scale models, fast growth, and open accessibility.

It has garnered tens of millions of users in a matter of weeks and demonstrated that cutting-edge AI need not come with exorbitant price tags.

For AI enthusiasts, researchers, and businesses, DeepSeek’s evolution will be important to watch.

It represents not just a single product, but a broader shift toward more open, affordable AI innovation.

By filling gaps left by more closed-off rivals, DeepSeek has quickly earned a place among the top AI model providers in the world – and it may well push the entire industry in new directions.

Ibrahim Khuzam

Ibrahim Khuzam is a technology writer and founder of several platforms focused on AI, SEO, and open-source models. He writes in-depth articles about LLM performance, integrations, and multilingual capabilities, helping developers and businesses navigate AI adoption.
