A China-based AI startup’s rapid rise and a mounting wave of security disclosures surrounding its iOS app have spotlighted serious concerns about data privacy, encryption practices, and cross-border data flows. DeepSeek’s open-source AI chatbot drew intense attention for its apparent capabilities, but a subsequent security audit exposed troubling transmission practices, weak cryptography, and data-handling choices that point to broader risks for users and organizations. The unfolding story underscores how a breakthrough in AI can collide with foundational safeguards for user data, prompting regulators and security researchers to scrutinize app architectures, vendor relationships, and compliance with platform security guidance.
DeepSeek’s Breakthrough and Early App Store Momentum
In a surprising turn for the AI landscape, DeepSeek—a largely unknown company based in China—released an open-source AI chatbot whose simulated reasoning capabilities appeared to rival market leaders in several benchmarks. Within days of the release, the DeepSeek AI assistant app ascended to the top of Apple’s iPhone App Store in the Free Apps category, surpassing ChatGPT in popularity. This rapid ascent underscored the appetite among users for capable, accessible AI assistants and highlighted the potential for small players to disrupt established incumbents through compelling performance and open access to underlying models.
Analysts and observers noted that the DeepSeek model relied on open weights and a simulated reasoning (SR) framework that, in testing, yielded results on par with OpenAI’s SR model across a suite of mathematical and coding benchmarks. The degree of performance relative to well-known rivals—given the company’s relatively modest reported spending compared with the substantial investment of larger players—added to the sense that a new wave of AI tools could emerge from less-heralded developers. This combination of user demand, perceived technical parity with top models, and a lightweight cost structure created a perception of a breakthrough moment in consumer AI access.
The broader AI community welcomed the novelty while also expressing prudent caution about the reproducibility, safety, and governance of emerging open-weight systems. Yet the immediate market impact for DeepSeek was unmistakable: a sudden climb in visibility and user enthusiasm that placed it at the center of early debates about what’s possible with open-weight AI and how such tools might be deployed in consumer-facing apps. In parallel, industry observers began to scrutinize the security and privacy implications of an app that quickly gained traction in a crowded ecosystem, especially given the opaqueness surrounding its data practices and infrastructural choices.
In-Transit Security: ATS, Encryption, and the Global Security Gap
Shortly after DeepSeek’s App Store surge, a mobile security firm, NowSecure, published findings that raised immediate red flags about how the app handles data when users interact with it. The audit revealed that the app transmitted sensitive information over unencrypted channels, making such data readable to anyone capable of monitoring network traffic. The risk isn’t merely theoretical; attackers with the capability to intercept traffic could potentially tamper with in-transit data, compromising integrity and confidentiality. At a minimum, this finding signals a failure to meet baseline protections that many developers are expected to implement.
Apple’s guidance has long encouraged developers to enforce encryption of data sent over the wire using App Transport Security (ATS), a framework designed to ensure secure communication channels. NowSecure reported that ATS protection appeared to be globally disabled within the DeepSeek iOS app. The reasons behind this disabling were not publicly explained by the company, and DeepSeek had not yet provided a public rationale for why ATS was turned off or why the app chose to send information without encryption. In practical terms, the absence of ATS means that the app could be communicating over HTTP or other non-secure protocols, exposing data to interception and potential manipulation by adversaries on unsecured networks.
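To make the finding concrete, here is a minimal sketch, using a hypothetical endpoint, of what the ATS configuration involves. ATS is not code but an Info.plist setting; when it is left at its secure default, Foundation refuses cleartext connections before any bytes leave the device.

```swift
import Foundation

// Globally disabling ATS, as NowSecure reported for the DeepSeek app,
// is an Info.plist setting rather than code:
//
//   <key>NSAppTransportSecurity</key>
//   <dict>
//       <key>NSAllowsArbitraryLoads</key>
//       <true/>
//   </dict>
//
// With ATS left at its secure default, a cleartext request like the one
// below is rejected before any data is sent.
let url = URL(string: "http://api.example.com/v1/register")! // hypothetical cleartext endpoint
let task = URLSession.shared.dataTask(with: url) { _, _, error in
    if let error = error as NSError?,
       error.code == NSURLErrorAppTransportSecurityRequiresSecureConnection {
        print("ATS blocked the insecure http:// connection, as intended.")
    }
}
task.resume()
```

Flipping NSAllowsArbitraryLoads to true removes that check for every connection the app makes, which is why a global exception, rather than a narrowly scoped per-domain one, draws such scrutiny.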
The data in transit did not exist in a vacuum; its destination raised further concerns. A significant portion of the traffic was directed to servers controlled by ByteDance, the parent company of TikTok. While some data ostensibly traveled through secure channels, the decryption of that data on ByteDance-controlled servers could enable cross-referencing with other user data collected elsewhere. The possibility of identifying individual users and tracking queries and usage patterns through cross-linking increases the severity of the privacy implications—especially given ByteDance’s geopolitical footprint and the regulatory environments that govern such data processing.
The combination of unencrypted in-transit data, potential for tampering, and routing to ByteDance infrastructure created a layered risk profile. While not every data element necessarily travels unencrypted, the fact that the app could expose sensitive information during initial interactions—such as during user registration—suggests systemic vulnerabilities in the data path. NowSecure emphasized that fundamental security protections were missing and that the app did not adhere to established best practices for safeguarding data in transit. This raised questions about whether the app’s developers fully understood the implications of insecure transmission or whether security controls were deprioritized in favor of rapid iteration and market entry.
Beyond the explicit transport-layer concerns, the audit highlighted that certain information transmitted during registration—such as organizational identifiers, the Software Development Kit (SDK) version, device OS version, and language configuration—could be reaching servers in ways that amplify privacy exposure. Even if some components of the transmitted data were encrypted at rest or in transit for part of the journey, the overall data handling posture suggested that critical identifiers could be exposed in ways that facilitate profiling or correlation across services.
As a reminder of the ecosystem context, ATS is a protection Apple has repeatedly promoted as a tool for secure app behavior. Apple’s stance is clear: developers should leverage ATS to minimize exposure of user data in transit. The nuance here lies in whether ATS is optional or mandatory for specific apps and how such decisions align with platform guidance and security principles. The absence of a transparent public justification from DeepSeek for disabling ATS or omitting encryption invites significant scrutiny from users, researchers, and regulators.
Data Flow to ByteDance: Cross-Border Considerations and Policy Statements
A crucial thread in the NowSecure findings concerns the ultimate destination of app data. DeepSeek’s data pipeline included routes that led to servers and cloud infrastructure operated by Volcengine, a cloud platform developed by ByteDance. On the surface, this is not unusual in a globally connected app ecosystem where data flows through various cloud providers and content delivery networks. However, the specifics of where data is stored, how it is processed, and who has access to it, along with the jurisdiction under which such storage occurs, carry substantial privacy and national security implications.
The DeepSeek privacy policy provides further context for how data may be used, accessed, retained, and shared. The policy notes that data stored in the service may be kept on secure servers located in the People’s Republic of China. It also indicates that DeepSeek may access, preserve, and share the information described in “What Information We Collect” with law enforcement agencies, public authorities, copyright holders, or other third parties under conditions of good faith belief that it is necessary to comply with applicable law, legal process, or government requests, consistent with internationally recognized standards.
Two elements of this arrangement stand out. First, the policy explicitly references data storage within China, a jurisdiction known for different data sovereignty norms and regulatory expectations than those in the United States and Europe. Second, the potential for sharing data with law enforcement and other authorities, depending on the interpretation of “good faith” and “necessary” in legal processes, introduces a layer of risk for users whose data could be accessed in ways that may diverge from expectations in other markets or platforms.
The combination of cross-border data transmission, the involvement of ByteDance-controlled infrastructure, and a geography of storage that includes data centers in China raises questions about data governance, localization, and the alignment (or misalignment) with user expectations, especially for international users and corporate customers that may require strict data-handling assurances. Critics have argued that data stored in or routed through China could be subject to oversight or access by authorities, which, in turn, might conflict with privacy commitments provided to users in other jurisdictions.
Additionally, the data path included components of data that may be mixed with encrypted information, further complicating the privacy picture. The hardcoded cryptographic keys and the reliance on the deprecated symmetric cipher 3DES, discussed in detail in the next section, compound concerns about who can access data, how it can be decrypted, and where such decryption can occur. In the context of cross-border data handling, the ability of foreign entities to access data stored in or flowing through international cloud environments is a particularly sensitive issue, one that intersects with regulatory regimes, national security considerations, and corporate risk management.
Despite these concerns, DeepSeek had not issued a public explanation addressing why ATS was globally disabled or why, in some cases, data was transmitted to ByteDance-controlled servers without robust encryption. Apple likewise had not publicly explained why this ATS configuration was permitted, and the lack of a clear public rationale from the company contributed to the sense that security by design may not be fully integral to the app’s architecture. The result is a scenario in which security researchers, policymakers, and potential partners must weigh the benefits of a high-performing AI tool against the potential privacy and security costs borne by users.
Core Security Shortfalls: 3DES, Hardcoded Keys, and Expert Commentary
NowSecure’s audit identified a central technical weakness: the app reportedly uses the symmetric encryption scheme 3DES (Triple DES). Once a workhorse for data protection, 3DES was deprecated by NIST after the 2016 Sweet32 research demonstrated practical attacks against its small 64-bit block size, undermining the confidentiality of communications. The presence of 3DES in a modern app is widely viewed as an unacceptable risk, especially for any data that could include sensitive information or personally identifiable data that might be cross-referenced with other user data on the server side.
Even more troubling is that the symmetric key used by the app is identical for every iOS user and is hardcoded into the app package. Hardcoded keys represent a fundamental design flaw because they create a single point of compromise; once the key is extracted from the app, an attacker can decrypt communications for every user running the same version of the app. This vulnerability persists across updates if the same key material is reused, and it directly undermines the confidentiality users expect from any app that handles personal data.
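NowSecure has not published the app’s code, so the sketch below is only an illustration of the anti-pattern it describes, with a made-up key, IV, and CBC mode chosen for the example. The point it demonstrates is general: a key compiled into the binary can be lifted by anyone with a disassembler, after which every user’s traffic decrypts with a single function call.

```swift
import Foundation
import CommonCrypto

// Illustrative only: a 3DES key and IV baked into the binary, the design
// flaw NowSecure described. A static analyst can extract these constants
// from the shipped app and decrypt traffic for every user of this version.
let hardcodedKey = Data("0123456789abcdefghijklmn".utf8) // 24-byte 3DES key (made up)
let hardcodedIV = Data("12345678".utf8) // 8-byte IV, matching 3DES's small block size

func tripleDESDecrypt(_ ciphertext: Data) -> Data? {
    var output = Data(count: ciphertext.count + kCCBlockSize3DES)
    var bytesDecrypted = 0
    let status = output.withUnsafeMutableBytes { (outBuf: UnsafeMutableRawBufferPointer) -> CCCryptorStatus in
        ciphertext.withUnsafeBytes { (inBuf: UnsafeRawBufferPointer) in
            hardcodedKey.withUnsafeBytes { (keyBuf: UnsafeRawBufferPointer) in
                hardcodedIV.withUnsafeBytes { (ivBuf: UnsafeRawBufferPointer) in
                    CCCrypt(CCOperation(kCCDecrypt),
                            CCAlgorithm(kCCAlgorithm3DES),
                            CCOptions(kCCOptionPKCS7Padding),
                            keyBuf.baseAddress, kCCKeySize3DES,
                            ivBuf.baseAddress,
                            inBuf.baseAddress, ciphertext.count,
                            outBuf.baseAddress, output.count,
                            &bytesDecrypted)
                }
            }
        }
    }
    guard status == CCCryptorStatus(kCCSuccess) else { return nil }
    return output.prefix(bytesDecrypted)
}
```

The remedy is not a better 3DES invocation but a different design: a modern AEAD cipher such as AES-GCM or ChaCha20-Poly1305, with keys generated or negotiated at runtime rather than compiled in.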
NowSecure’s co-founder Andrew Hoog described the findings in stark terms, highlighting that the app “is not equipped or willing to provide basic security protections of your data and identity.” He asserted that there are fundamental security practices not being observed, whether through intent or oversight, and that the culmination of these shortcomings puts both user data and corporate data at risk. Hoog’s assessment emphasizes a systemic issue within the app’s security posture rather than a collection of isolated misconfigurations. He also stressed that the audit was ongoing, meaning that further questions and details remained unresolved at the time of reporting.
The audit’s conclusions extended beyond the presence of outdated encryption and hardcoded keys. It raised concerns about broader data-handling practices, the potential for data sharing with third parties such as ByteDance, and the storage and processing of data in China. Hoog explicitly noted the need for organizations to consider removing the DeepSeek iOS app from both managed and BYOD (bring-your-own-device) deployments due to the privacy and security risks identified, including insecure data transmission, vulnerable hardcoded keys, data sharing with third parties, and China-based data analysis and storage.
The Android version of the app fared even less favorably in NowSecure’s assessment, with the security posture deemed inferior to that of the iOS counterpart. Hoog’s team recommended that the Android version also be removed from use in corporate and personal environments. The contrast between iOS and Android findings underscores the broader risk profile of the DeepSeek offering across platforms, amplifying the call for caution among organizations that rely on mobile AI assistants for sensitive operations or data processing.
Neither DeepSeek nor Apple responded to inquiries seeking comment on the audit findings at the time. The company’s silence and the absence of a direct response from Apple further fueled the sense that the app’s security practices warranted urgent examination by users, organizations, and regulators alike. The audit’s emphasis on insecure data transmission, cryptographic weaknesses, and cross-border data flows contributed to growing skepticism about the safe deployment of the DeepSeek app, particularly in settings that demand strict privacy guarantees.
Beyond the technical issues, the audit raised process-oriented questions about how such a product could reach market without adhering to recognized security baselines. The recommendation to remove the app from managed and BYOD environments reflects the belief that, until security controls are strengthened, the risk of data leakage, misuse, or unauthorized access remains unacceptably high for most institutional contexts. The findings also align with a broader industry emphasis on encryption best practices, key management discipline, and robust threat modeling, essential ingredients for safeguarding user data in a complex, cloud-enabled AI ecosystem.
Additional Technical Concerns and Expert Views
- The data path includes initial registration information being transmitted in the clear, including organizational identifiers, the SDK version, device OS version, and language configuration. Exposing registration metadata can broaden profiling capabilities even before more sensitive content is exchanged, representing a fundamental privacy vulnerability.
- The data flow to ByteDance-controlled Volcengine infrastructure raises questions about data control, sovereignty, and access. Even when data is encrypted in transit, the eventual data handling on ByteDance-controlled servers may present persistent exposure risks if decryption can occur on the server side and if cross-referencing with other data sources is possible.
- Independent experts highlighted broader concerns about the ethics and responsibility of deploying AI tools in consumer ecosystems without robust security postures. The consensus among security professionals was that disabling ATS and using deprecated cryptography create a high-risk scenario that could undermine user trust and invite regulatory scrutiny.
- The audit’s ongoing nature meant that further details, such as the scope of data exposure, the specific data elements subject to unencrypted transmission, and the exact mechanisms behind data sharing with third parties, remained to be clarified. This uncertainty broadens concerns about how such a tool might be managed in enterprise environments, where compliance requirements demand a transparent or auditable security regimen.
The convergence of technical vulnerabilities, insecure cryptography, insecure data transmission, and problematic data governance created a multi-faceted risk profile for DeepSeek’s app. In security terms, it is not enough to claim strong model performance if the underlying data handling practices expose users and organizations to material privacy and security threats. The evaluation by independent researchers, the policy implications of cross-border data storage, and the potential for governmental access to data collectively underscore the need for rigorous review, remediation, and, in some cases, removal of the app from sensitive environments.
Android vs. iOS: Platform Disparities and Shared Risks
The NowSecure findings extended to Android as well, where the security posture was described as even less secure than the iOS version. The cross-platform weaknesses suggest systemic design decisions that permeate multiple facets of the DeepSeek product. The broader implication is that while iOS security is often perceived as stringent due to platform controls, the underlying application-level security weaknesses can negate those protections if the app’s architecture is not constructed with robust end-to-end safeguards.
This cross-platform vulnerability profile elevates concerns for organizations and individuals who rely on mobile AI assistants across devices. If one platform shares fundamental weaknesses—such as unencrypted data transmission, hardcoded cryptographic keys, or reliance on deprecated encryption schemes—the risk surface expands as more users come into contact with the app. The parallel vulnerabilities also raise questions about the consistency and comprehensiveness of security testing across platforms, including whether security reviews account for platform-specific nuances in how data is stored, transmitted, and processed.
In practical terms for enterprise users, the Android version’s deficiencies imply a broader risk landscape that can affect device management policies, incident response planning, and data governance frameworks. Companies evaluating mobile AI tools must consider not only the capabilities and features but also the security posture across all major platforms, ensuring uniform adherence to security baselines and controlled data flows. The NowSecure findings reinforce the need for comprehensive risk assessments that weigh performance gains against potential privacy violations and regulatory exposure—especially in sectors handling sensitive information or regulated data.
Expert Commentary: Voices from Security Researchers and Industry Analysts
Security researchers and practitioners weighed in on the implications of DeepSeek’s security posture and the broader trajectory of AI app security. Thomas Reed, a Mac endpoint detection and response expert at Huntress who specializes in iOS security, commented that disabling ATS is generally a bad idea. In online discussions, he noted that while Apple does permit such configurations in some apps, there is little to justify it in the modern security landscape. Reed emphasized that even if communications were secured, he would still be highly reluctant to send any remotely sensitive data to a server that could be accessible to foreign authorities. His perspective highlighted the tension between performance ambitions and the imperative to safeguard sensitive information from cross-border access.
HD Moore, founder and CEO of runZero, expressed a distinct level of concern about ByteDance’s access and the unencrypted data exposure. He characterized the unencrypted HTTP endpoints as inexcusable and pointed out that such endpoints can expose data to anyone along the network path, not merely the vendor and its partners. His observation reinforces a practical risk: even when a company assumes that encryption protects data from external observers, insecure endpoints undermine those protections and broaden the potential access points for data leakage.
In parallel, industry observers raised broader governance concerns about digital privacy, corporate responsibility, and the potential for data to be exploited in ways that could harm user interests or national security. The alignment between security best practices and geopolitical risk considerations becomes especially salient for an app that routes data to ByteDance-controlled infrastructure. The conversation around these issues stretches beyond technical vulnerabilities to questions about vendor risk, data sovereignty, and the role of regulators in overseeing cross-border data flows in consumer digital services.
The public record also includes assessments from Wiz, a security firm that uncovered a publicly accessible, fully controllable database associated with DeepSeek. The database reportedly contained more than a million records spanning chat histories, backend data, and sensitive information, including log streams, API secrets, and operational details. An open web interface allegedly allowed full database control and privilege escalation, with internal API endpoints and keys exposed through the interface and URL parameters. The discovery by Wiz underscored an alarming level of exposure and highlighted the potential for exploitation in ways that go beyond the app’s front-end behavior, illustrating how back-end vulnerabilities can compound front-end risks.
Security researchers also noted the evolving landscape of simulated reasoning models and their potential for misuse. Work by researchers at Cisco and the University of Pennsylvania on the DeepSeek R1 simulated reasoning model revealed a troubling finding: the model exhibited a 100 percent attack success rate, failing to block any of 50 malicious prompts designed to force it into generating toxic content. This result added to concerns about the safety and reliability of AI systems, particularly when deployed in consumer-facing interfaces that users may trust with unmoderated or sensitive prompts. While a single study does not determine the overall safety of a platform, such findings contribute to a broader conversation about robust guardrails, prompt controls, and the ethics of AI deployment in mobile apps.
Taken together, expert commentary painted a nuanced picture: while DeepSeek’s technical progress and competitive performance drew attention, significant security and privacy concerns required careful scrutiny. The convergence of insecure data transmission, weak cryptographic practices, hardcoded keys, cross-border data handling, and back-end exposure presented a multi-dimensional risk profile that industry voices suggested should prompt immediate remediation or cautious deployment decisions, particularly in enterprise and government contexts.
Regulatory and Governmental Response: Calls for Swift Action
As the security concerns coalesced in the public sphere, policymakers began to grapple with the potential risks posed by DeepSeek’s data practices. US lawmakers pushed to explicitly ban the DeepSeek app from government devices as a precautionary measure, arguing that national security could be jeopardized by data flows that might give foreign entities access to Americans’ sensitive private information. The proposed ban could take effect quickly, potentially within 60 days, reflecting a swift regulatory response to perceived risk.
The regulatory dialogue around DeepSeek intersects with broader debates about cross-border data flows, platform security controls, and the responsibilities of global tech providers in safeguarding user data. Lawmakers’ emphasis on a preemptive ban for government devices signals a risk-averse posture and a willingness to deploy quick policy levers to mitigate potential exposure. While such policy actions can protect high-risk environments, they also raise questions about how to balance innovation, international collaboration, and national security considerations in a rapidly evolving AI ecosystem.
This regulatory discourse is part of a larger pattern in which security researchers, policymakers, and enterprise users increasingly demand stronger privacy protections, better data governance, and more transparent security disclosures from AI and mobile app developers. The DeepSeek episode thus contributes to ongoing conversations about how to regulate emerging AI technologies in ways that preserve innovation while safeguarding critical data assets and national interests.
Additional Research Findings: Toxicity, Access, and Data Exposure
Beyond the core security vulnerabilities, additional research into DeepSeek’s model and data practices deepened concerns about safety and governance. The Cisco-UPenn findings on simulated reasoning and toxic content generation raised questions about the risk management framework embedded in the model. The model’s failure to block any of the 50 curated prompts designed to elicit toxic responses, a 100 percent attack success rate, underscored the ongoing challenge of ensuring AI systems do not produce harmful outputs, particularly in consumer-facing contexts where users may test or push the system toward unsafe results.
Wiz’s discovery of a publicly accessible database containing over a million entries of chat history, backend data, API secrets, and operational details amplified the risk profile. The presence of internal API keys and sensitive data accessible via a web interface suggested a level of exposure that could enable privilege escalation or exploitation by unauthorized parties. The combination of front-end vulnerabilities and back-end misconfigurations paints a comprehensive picture of a system that, in multiple layers, may fail to adhere to modern security and data governance standards.
In light of Wiz’s findings, security researchers stressed the importance of proper access controls, secure configuration management, and the principle of least privilege for back-end services. They also underscored the need for robust monitoring and incident response procedures to detect and mitigate data exposure and unauthorized access. The broader implication is that even if the AI model and front-end interfaces are technically advanced, the surrounding infrastructure and data-handling practices must be equally resilient to prevent data leakage, credential exposure, and abuse. The cross-cutting nature of these vulnerabilities illustrates why security reviews must be end-to-end—spanning device, app, network, and cloud infrastructure.
Additionally, the policy landscape surrounding data collection, storage, and sharing with third parties remains a critical factor in evaluating the overall risk profile of DeepSeek. DeepSeek’s privacy policy indicates that data may be stored in Chinese servers and may be accessed or shared with law enforcement and other third parties as required by law or governance processes. This stance interacts with both international data transfer considerations and the expectations of users who may assume that their information would be confined to jurisdictions with more transparent privacy regimes. The tension between data localization policies and cross-border access rights translates into practical risk for organizations relying on the app for routine use or for handling regulated data.
The Bottom Line: What This Means for Users and Enterprises
The DeepSeek episode highlights a fundamental tension in the deployment of powerful AI tools: the allure of breakthrough capabilities versus the imperative to protect user privacy and security. For individual users, the implications are straightforward in principle: when a mobile AI app transmits data without robust encryption and routes it through foreign-controlled infrastructure with ambiguous governance, there is a non-trivial risk that sensitive information could be intercepted, exposed, or misused. For organizations, the stakes are higher. The presence of hardcoded encryption keys, the use of deprecated cryptographic algorithms, and cross-border data flows into a jurisdiction with different privacy norms may violate internal security policies, regulatory requirements, or contractual obligations to customers and partners.
The expert community’s response underscores an enduring lesson of AI security: high performance in natural language understanding, reasoning, or code generation does not compensate for insecure data practices. Security must be a first-order design consideration, built into the app architecture, data flows, encryption strategies, key management, and governance policies. In addition, the cross-border dimension adds a layer of complexity, requiring careful consideration of data sovereignty, compliance with international privacy standards, and transparent communications with users about how data is collected, stored, and used.
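By way of contrast, here is a minimal sketch of the key-management discipline being called for, using a hypothetical service identifier: random key material generated per device and kept in the iOS Keychain rather than compiled into the binary.

```swift
import Foundation
import Security

// Minimal sketch of per-device key management: generate random key material
// once, store it in the iOS Keychain, and never ship it inside the binary.
// The service string is a placeholder, not a real identifier.
func loadOrCreateKey(service: String = "com.example.chatapp.transport-key") -> Data? {
    var query: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: service,
        kSecReturnData as String: true,
    ]
    var found: CFTypeRef?
    if SecItemCopyMatching(query as CFDictionary, &found) == errSecSuccess,
       let existing = found as? Data {
        return existing // reuse the key generated on first launch
    }
    var bytes = [UInt8](repeating: 0, count: 32) // 256-bit key for a modern AEAD cipher
    guard SecRandomCopyBytes(kSecRandomDefault, bytes.count, &bytes) == errSecSuccess else {
        return nil
    }
    let key = Data(bytes)
    query[kSecReturnData as String] = nil
    query[kSecValueData as String] = key
    guard SecItemAdd(query as CFDictionary, nil) == errSecSuccess else { return nil }
    return key
}
```

Even this is only a baseline: production designs typically derive session keys through an authenticated key exchange with the server, the job TLS already performs when ATS is enabled, rather than reusing any long-lived static secret.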
From a policy perspective, regulators and lawmakers are likely to continue scrutinizing AI apps that operate across borders, with particular attention to data handling, encryption, and government access. The push to ban the app on government devices indicates a willingness to adopt precautionary measures in the absence of clear assurances about data protections. For developers and platform holders, the lessons are clear: security and privacy disclosures must accompany high-profile AI products, and encryption practices should align with contemporary standards to minimize risk and maximize user trust.
Conclusion
The DeepSeek case presents a complex interplay of breakthrough AI capabilities, data governance challenges, and security vulnerabilities that collectively shape the trajectory of consumer AI deployment. While the app’s performance and the excitement around an open-weight SR model illustrate the potential of rapid innovation, the security audit and subsequent expert commentary reveal an ecosystem where data protection cannot be an afterthought. Unencrypted in-transit data, cross-border processing with ByteDance-controlled infrastructure, reliance on deprecated cryptographic techniques, and hardcoded keys combine to create a risk landscape that demands rigorous remediation, transparent governance, and independent verification.
As regulatory attention grows and organizations reassess risk in AI-enabled mobile tools, the DeepSeek episode serves as a cautionary tale about the importance of aligning cutting-edge AI capabilities with robust security practices. Corrective steps—such as enabling ATS, migrating to modern encryption standards, implementing secure key management, and establishing clear data governance across all platforms—will be essential to restore user confidence and to ensure that innovative AI solutions can be deployed responsibly and securely. In the meantime, stakeholders should approach new AI offerings with a critical eye toward how data is transmitted, stored, and processed, and should demand verifiable assurances that privacy and security are embedded at every layer of the product.