DeepSeek and state or military use: What concerns are being raised?

Artificial intelligence is no longer just a fascinating tech experiment—it’s now a powerful strategic tool. And few companies have made as much noise in such a short time as DeepSeek, a Chinese AI startup that’s quickly becoming a central figure in global conversations about open-source large language models (LLMs). While DeepSeek has gained praise for making advanced AI models openly accessible, it has also raised alarm bells—especially around how its technologies might be used by governments or militaries.

In this article, we’ll explore why DeepSeek’s open-weight LLMs are raising red flags, what their potential uses are in state or military contexts, and how experts and regulators are responding. Let’s unpack the story behind the headlines.

Understanding DeepSeek: A quick introduction

Founded in 2023, DeepSeek made an immediate impression by releasing powerful open-weight LLMs such as DeepSeek-Coder and DeepSeek-MoE. Unlike many Western AI models that are tightly controlled behind APIs, DeepSeek’s models can be downloaded, modified, and run locally under permissive licenses.

Some key features include:

  • Strong multilingual capabilities
  • High performance in code generation and reasoning
  • Efficient operation on consumer or mid-range hardware
  • Open-weight distribution with permissive licenses

These features have attracted a wide range of users, from developers and startups to researchers. But they have also drawn the attention of national security analysts, who are concerned about how these tools might be used in more covert or strategic ways.

The geopolitical backdrop: AI as a national asset

AI has become more than just a tool for businesses or scientists—it’s now seen as a pillar of national power. Governments are investing heavily in AI development, knowing that dominance in this area could mean economic, military, and even ideological influence.

DeepSeek’s rapid growth is seen by many as a milestone in China’s AI trajectory. While U.S. and EU companies tend to put ethical guidelines and usage restrictions on their models, DeepSeek’s open design allows:

  • Customization for domestic or military purposes
  • Use in closed, unmonitored systems
  • Adoption by institutions that wouldn’t otherwise have access to such tech

In this light, DeepSeek is both an innovation and a risk—especially when considering how easily its models can be deployed beyond the developer community.

How could states or militaries use DeepSeek?

Let’s break down the potential uses of large language models like DeepSeek’s in state or military contexts:

Information warfare and disinformation

LLMs can create persuasive, human-like content at scale. This makes them ideal for:

  • Spreading fake news across multiple languages
  • Automating social media manipulation
  • Generating deepfake scripts
  • Enhancing psychological operations (PSYOPS)

Cyber operations

LLMs can assist in both cyber defense and offense:

  • Writing or debugging malicious code
  • Scanning software for vulnerabilities
  • Translating technical documentation
  • Simulating human activity in cyber operations

Strategic analysis and simulations

Governments and militaries could use DeepSeek to:

  • Simulate geopolitical scenarios or war games
  • Analyze foreign policy texts and military documents
  • Summarize or translate intercepted communications
  • Provide decision support in crisis situations

Surveillance and social control

Combined with biometric and surveillance tools, DeepSeek models could be used to:

  • Analyze large volumes of personal communications
  • Predict behavior using social media or messaging patterns
  • Enhance risk profiling in population monitoring systems

These applications aren’t merely theoretical: similar tools are already being explored for such purposes around the world. DeepSeek’s open distribution simply lowers the barrier to doing the same locally and without oversight.

The open-weight dilemma: Innovation vs. security

One of DeepSeek’s defining qualities is openness. Its models are open-weight, meaning anyone can download the parameters and run them locally: no gated API, no subscription, no approval process. That’s great for accessibility, but it creates serious risks.
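To see how low the barrier is, here is a minimal sketch of local inference using the Hugging Face transformers library. The checkpoint ID is illustrative, and the same few lines work for any open-weight model:

```python
# Local inference with an open-weight model: no API key, no provider-side
# logging or filtering; everything runs on the user's own hardware.
# The checkpoint ID below is illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/deepseek-llm-7b-chat"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain mixture-of-experts models in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Once the weights are cached, the machine can be taken offline entirely and the model keeps working, which is exactly what makes downstream use so hard to monitor.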

What happens when powerful AI models are available to anyone?

  • There’s no monitoring of how the model is used.
  • Bad actors can run models offline, making them harder to trace.
  • Fine-tuning can be done for political, extremist, or military agendas.

In contrast, companies like OpenAI or Anthropic gate their models through APIs, apply filters, and monitor usage. This doesn’t eliminate risk, but it allows for intervention.
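For contrast, a gated model is reachable only through the provider’s authenticated endpoint. Here is a sketch using OpenAI’s Python client; the model name is just an example:

```python
# Every request passes through the provider's servers, where it can be
# logged, filtered, rate-limited, or blocked outright.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o",  # example model name
    messages=[{"role": "user", "content": "Summarize this policy memo."}],
)
print(response.choices[0].message.content)
```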

Looking back: AI and the military have always been close

The idea of using AI in military contexts isn’t new. Since the 1980s, militaries have worked with expert systems to automate logistics, threat detection, and strategy simulations. Over time, these systems evolved into more advanced tools:

  • Autonomous drones
  • Real-time satellite image analysis
  • Predictive analytics for troop movements
  • Natural language processing for intelligence gathering

What DeepSeek brings to the table is unrestricted access to advanced language capabilities for a far broader group of users, including state actors who may not play by the same ethical rules.

What makes DeepSeek models technically useful for militaries?

DeepSeek-MoE uses a Mixture-of-Experts (MoE) architecture, in which a routing network activates only a small subset of the model’s parameters for each input token. This means:

  • Massive models can be run more efficiently
  • Specific modules can be fine-tuned for certain topics (e.g., weapons, politics)
  • Deployment on mid-tier hardware becomes realistic

So a nation-state could, in theory, fine-tune a DeepSeek model on classified documents, run it offline, and integrate it into defense infrastructure without ever touching a Western cloud service.
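To ground the MoE point, here is a toy top-k routing layer in PyTorch. It is a minimal sketch of the general Mixture-of-Experts mechanism, not DeepSeek’s actual implementation; the layer sizes, expert count, and names are all illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoELayer(nn.Module):
    """Toy MoE layer: a router scores every expert for each token, and only
    the top-k experts run, so most parameters stay idle on any given input."""

    def __init__(self, dim: int, num_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(dim, num_experts)  # gating network
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, dim)
        scores = self.router(x)                     # (num_tokens, num_experts)
        weights, idx = scores.topk(self.k, dim=-1)  # pick the k best experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e in idx[:, slot].unique().tolist():
                mask = idx[:, slot] == e            # tokens routed to expert e in this slot
                out[mask] += weights[mask, slot].unsqueeze(-1) * self.experts[e](x[mask])
        return out

layer = TopKMoELayer(dim=64, num_experts=8, k=2)
tokens = torch.randn(10, 64)
print(layer(tokens).shape)  # torch.Size([10, 64]); only 2 of 8 experts ran per token
```

The key property is that compute per token scales with k rather than with the total number of experts, which is why very large MoE models can run on comparatively modest hardware.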

What the experts are saying

Security professionals, AI ethicists, and even tech company executives have voiced concern:

“We need a better framework for dual-use AI. Openness without accountability is not sustainable.” — Yann LeCun, Meta

“DeepSeek is a wake-up call. The age of centrally controlled AI is over.” — Raja Koduri, ex-Intel

“Europe must act now. We can’t regulate after these models are already embedded in sensitive systems.” — Marietje Schaake, former EU Parliament member

These quotes highlight the tension: openness drives innovation but also creates unmonitored risk.

How are governments responding?

Governments and multilateral bodies, including the U.S., the EU, and NATO, are beginning to address this issue. Some of the key moves include:

  • Exploring export controls on AI models and GPUs
  • Drafting new laws around AI misuse and foreign deployment
  • Considering international treaties for AI safety

The EU’s AI Act may even create a classification for open models that exceed a certain size or capability threshold, especially if they are considered high-risk or dual-use.

Where do we go from here?

Managing the risks of open-weight AI like DeepSeek’s will require a mix of strategies:

  • International collaboration on AI safety norms
  • Ethical licensing for open-source releases
  • Monitoring and auditing of large-scale training runs
  • Responsible deployment practices from developers

Ultimately, it comes down to balance: keeping AI accessible for good, while minimizing its potential for harm.

DeepSeek has undeniably pushed the frontier of open AI—but with great power comes great responsibility. Its models offer incredible value to developers and researchers, but they also raise difficult questions about safety, control, and misuse.

As the world wrestles with these questions, DeepSeek will continue to be part of the conversation—whether as a pioneer of openness, a cautionary tale, or both.


