Microsoft has officially banned its employees from using the DeepSeek app, an increasingly popular AI assistant, citing concerns over data privacy and potential propaganda risks. In a Senate hearing this week, Microsoft vice chair and president Brad Smith made the company's stance clear: “At Microsoft, we don’t allow our employees to use the DeepSeek app.”
DeepSeek, available on both desktop and mobile, has drawn growing scrutiny from Western governments and organizations, but this marks the first time a U.S. tech titan has publicly confirmed internal restrictions on the app. Smith explained that the ban stems from the risk that user data is stored on servers in China, placing it within reach of Chinese intelligence agencies under the country's sweeping data laws. “DeepSeek’s answers could be influenced by Chinese propaganda,” Smith added.
Despite the ban on the app, Microsoft did roll out DeepSeek’s R1 model on its Azure cloud platform earlier this year. The move stirred controversy, but the company defended its decision by pointing to a key distinction: the R1 model itself is open-source. That means developers can download the weights and host the model on their own infrastructure, bypassing China’s data pipelines entirely. Still, even a self-hosted version doesn’t eliminate every concern, particularly around algorithmic bias and content security.
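For developers, that distinction is concrete: the released checkpoints can run entirely on local hardware. Below is a minimal sketch using the Hugging Face transformers library; the model ID is illustrative, assuming one of the publicly released R1 distilled checkpoints, and nothing in the exchange passes through DeepSeek’s servers.

```python
# Minimal sketch: self-hosting a DeepSeek R1 distilled checkpoint with
# Hugging Face transformers. The model ID below is an assumption, naming
# one of the publicly released distill weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed public checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Inference runs entirely on local hardware: prompts and outputs never
# reach DeepSeek's servers.
prompt = "Summarize the trade-offs of self-hosting an open-weight model."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Self-hosting resolves the data-residency question, but, as noted above, it does not by itself address what the model learned during training or any filtering baked into its weights.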
DeepSeek’s privacy policy confirms that it stores user data on servers in China and complies with Chinese law, which requires companies to cooperate with state intelligence agencies on request. The app also filters sensitive topics, including mentions of political dissent, Tibet, and other subjects that trigger automatic censorship in Chinese systems.
Although Microsoft does not permit the DeepSeek app on its platforms, it is not launching a blanket crackdown on rival AI chat apps. Perplexity, for instance, remains available in the Microsoft Store, while Google’s Gemini chatbot and Chrome browser do not appear in store searches, whether by policy or simple rivalry.
Notably, Microsoft claims it took active steps to sanitize DeepSeek’s model before offering it on Azure. “We’ve gone inside the model and changed it to remove harmful side effects,” said Smith, though he declined to provide technical details, referring inquiries back to his Senate remarks.
Microsoft’s move reflects a growing shift in how Western tech companies handle the geopolitics of AI. As generative AI tools become more integrated into professional workflows, the question of where models are trained, hosted, and controlled is no longer just technical—it’s political. And for Microsoft, the line between competition and national security has never been thinner.