
Users Shift From ChatGPT to Claude Amid Defense Contract Controversy

4 Min Read · Updated on Mar 3, 2026

Written by Suraj Malik · Published in AI News

A growing number of AI users are migrating from ChatGPT to Claude following renewed controversy over U.S. defense contracts and AI ethics.

The shift comes after Anthropic declined to proceed with a Department of Defense contract that reportedly involved expanded use cases around mass domestic surveillance and fully autonomous weapons. The company exited the agreement, citing conflicts with its safety commitments.

Hours later, OpenAI announced that it would take on a Pentagon agreement of its own, stating that safeguards would be in place. The decision sparked backlash from some users who viewed the move as inconsistent with earlier safety positioning.

Political Escalation and Industry Fallout

The dispute quickly escalated into a broader political confrontation.

President Donald Trump reportedly directed federal agencies to stop using Anthropic’s products. Meanwhile, Defense Secretary Pete Hegseth moved to classify Anthropic as a potential “supply-chain threat,” a designation that could have significant commercial consequences.

The episode has intensified public debate over how AI companies should engage with defense agencies, and who should ultimately control the deployment of powerful AI systems.

For some users, the controversy became a tipping point.

Claude Sees Surge in Sign-Ups

Following the Pentagon deal announcement, Claude climbed to the top of the free apps chart in the U.S. App Store. Anthropic reported record daily sign-ups, with free users increasing by more than 60% since January and paid subscriptions more than doubling in 2026.

Many users say they view Claude as the more ethically cautious alternative, particularly in light of Anthropic’s decision to walk away from the defense contract.

How Users Are Exporting Their ChatGPT Data

As interest in switching grows, users are taking steps to preserve their data before leaving ChatGPT.

Inside ChatGPT, users can navigate to:

Settings → Personalization → Memory

From there, they can review stored information, update it, and manually copy key details they wish to retain.

For a complete export, users can go to:

Settings → Data Controls → Export Data

ChatGPT then emails a download link to an archive containing the chat history as HTML and JSON files. The process may take time, especially for accounts with extensive conversation records.

Some users also manually copy important conversations or ask ChatGPT to summarize their key preferences, frequent topics, and custom instructions before transitioning.
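For users comfortable with a script, the exported archive can be inspected programmatically. The sketch below lists conversation titles from the export's JSON file; it assumes the archive unzips to a file named `conversations.json` containing an array of conversation objects with a `"title"` field, which matches recent exports but is not an officially documented format.

```python
import json
from pathlib import Path

# Hypothetical location of the unzipped export; adjust to your download.
EXPORT_FILE = Path("chatgpt-export/conversations.json")

def list_conversations(path: Path) -> list[str]:
    """Return conversation titles from a ChatGPT JSON export.

    Assumes the file is a JSON array of conversation objects, each
    with a "title" field (as seen in recent exports; the format is
    undocumented and may change).
    """
    conversations = json.loads(path.read_text(encoding="utf-8"))
    return [c.get("title", "(untitled)") for c in conversations]

if EXPORT_FILE.exists():
    for title in list_conversations(EXPORT_FILE):
        print(title)
```

This gives a quick inventory of what the export actually contains before deciding which conversations are worth carrying over.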

Importing Context Into Claude

Claude includes a memory feature that allows users to store persistent context. To prepare it:

  1. Go to Settings → Capabilities and ensure Memory is enabled.
  2. Start a new chat and provide a structured summary of personal preferences and context.
  3. Use a prompt such as:
    “Here’s important context I’d like you to remember. Update your memory about me with this.”

Experts recommend summarizing exported ChatGPT data rather than pasting raw logs. Users can ask Claude to analyze exported text and extract key preferences before storing the condensed summary.
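One way to condense an export before handing it to Claude is to pull out only the user-authored messages. The sketch below assumes the undocumented structure seen in recent ChatGPT exports, where each conversation carries a `"mapping"` of nodes whose `"message"` entries hold an author role and content parts; adjust the field names if your export differs.

```python
import json
from pathlib import Path

def extract_user_messages(path: Path, limit: int = 200) -> str:
    """Collect user-authored messages from a ChatGPT export.

    Assumes each conversation object has a "mapping" of nodes whose
    "message" entries look like {"author": {"role": ...},
    "content": {"parts": [...]}} (format observed in recent exports).
    """
    lines = []
    for convo in json.loads(path.read_text(encoding="utf-8")):
        for node in convo.get("mapping", {}).values():
            msg = node.get("message") or {}
            if (msg.get("author") or {}).get("role") != "user":
                continue
            parts = (msg.get("content") or {}).get("parts") or []
            text = " ".join(p for p in parts if isinstance(p, str)).strip()
            if text:
                lines.append(text)
    # Cap the output so the paste stays compact enough for one message.
    return "\n".join(lines[:limit])
```

The resulting text can then be pasted into Claude with a request to extract key preferences, rather than pasting raw logs wholesale.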

Finally, users can verify the import by asking Claude to restate what it has learned and correcting any inaccuracies.

Anthropic clarified that Claude’s memory feature is available to both free and paid users.

Deleting a ChatGPT Account

For users seeking a complete exit, account deletion requires more than canceling a subscription.

Steps typically include:

  • Returning to Settings → Personalization → Memory and deleting stored memories
  • Optionally sending a message such as “Delete all my memory and personalized data”
  • Using account management settings to permanently delete the account

Simply canceling a paid plan does not erase stored data.

A Broader Debate Over AI Governance

The migration reflects more than just feature comparison. It highlights a deeper public debate about the relationship between AI companies and governments.

As frontier AI models become embedded in defense, surveillance, and national security contexts, users are increasingly weighing ethical positioning alongside functionality.

The current wave of switching suggests that for many users, questions about governance and corporate alignment now matter as much as model performance.

Whether this shift proves temporary or marks a longer-term realignment in the AI assistant market remains to be seen.
