
DeepSeek Reveals a "Theoretical" Profit Margin Exceeding 500%


Chinese AI startup DeepSeek disclosed some financial figures on Saturday, stating that its "theoretical" profit could exceed five times its costs, a rare glimpse into business models in an AI industry that usually keeps them secret.

The company, which has shaken Silicon Valley with its innovative and cost-effective approach to building AI models, announced on the platform "X" (formerly Twitter) that comparing the inference cost of its "V3" and "R1" models against sales over a 24-hour period on the last day of February yields a theoretical profit margin of 545%.
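To make the arithmetic behind a cost-relative margin concrete, here is a minimal sketch. The dollar figures below are hypothetical placeholders chosen to produce a 545% margin; the article does not state DeepSeek's actual cost or revenue numbers.

```python
# Cost-based profit margin: profit expressed as a fraction of cost.
# Both figures are illustrative assumptions, not DeepSeek's disclosed numbers.
daily_inference_cost = 100_000.0        # assumed 24-hour serving cost, USD
daily_theoretical_revenue = 645_000.0   # assumed revenue if all usage were billed

profit = daily_theoretical_revenue - daily_inference_cost
margin_vs_cost = profit / daily_inference_cost

print(f"{margin_vs_cost:.0%}")  # prints "545%"
```

Note that a margin "compared to cost" of 545% means revenue is roughly 6.45 times cost, which matches the article's "more than five times its costs" framing for profit.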

DeepSeek Costs

Inference refers to the computing power, electricity, data storage, and other resources required to run AI models in real time, according to a Bloomberg report reviewed by "Al-Arabiya Business." However, DeepSeek clarified that its actual revenue is significantly lower for several reasons: only a small portion of its services is monetized, it offers discounts during peak hours, and the figures exclude the research, development, and training costs of building its models.

While these impressive profit margins are theoretical, the disclosure comes at a time when the profitability of AI startups and their models is a major topic of interest among tech investors.

Companies ranging from industry leader OpenAI to Anthropic are experimenting with different revenue models, from subscriptions to usage fees and licensing charges, as they race to develop more advanced AI products.

However, investors are skeptical about these business models and their return on investment, sparking a debate about the feasibility of achieving profitability in the near future.

On Saturday, DeepSeek stated that its online service has a "cost-profit margin of 545%," and provided an overview of its operations, including how it makes better use of computing power through load balancing, which manages traffic so that work is distributed evenly across multiple servers and data centers. The company also said it has worked on innovations to increase the amount of data its AI models process within a given time frame and to reduce latency, the wait time between a user submitting a query and receiving a response.
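The article does not describe DeepSeek's actual routing scheme, but the idea of load balancing can be sketched with the simplest policy, round robin, where each incoming request goes to the next server in rotation. The server names and the policy here are illustrative assumptions only.

```python
from itertools import cycle

# Hypothetical pool of inference servers; names are made up for illustration.
servers = ["gpu-node-a", "gpu-node-b", "gpu-node-c"]
rotation = cycle(servers)

def route(request_id: int) -> str:
    """Assign the next incoming request to the next server in rotation."""
    return next(rotation)

# Six requests spread evenly: each server handles exactly two.
assignments = [route(i) for i in range(6)]
print(assignments)
```

Real systems typically weight this by current server load rather than rotating blindly, which is closer to the traffic management the company describes.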
