Published On: May 8, 2026
Authored By: Manan Jhamb
Chandigarh University
Abstract
Trade secret law, originally designed for an era of physical vaults and internal servers, is straining under the weight of artificial intelligence. The conviction of a former Google engineer for AI trade secret theft, the passage of California’s landmark AI transparency law, and litigation by xAI challenging mandatory training data disclosure have together created an unprecedented collision between the legal imperatives of secrecy and transparency. This article argues that trade secret law is not obsolete; it is incomplete. A revised framework, incorporating tiered classification, cross-border enforcement, and harmonized transparency regimes, is urgently needed to protect innovation while serving the public interest in an AI-driven economy.
I. Introduction: When AI Collides with Trade Secret Law
In early 2026, a former Google engineer was convicted of theft of AI trade secrets and economic espionage in a case that sent shockwaves through Silicon Valley. Around the same time, content creator MrBeast resolved a trade secret suit he had filed against a former employee. These cases are not isolated incidents; they are symptoms of a broader legal crisis in which laws designed before the advent of the internet and AI are being asked to govern technologies their drafters could never have anticipated.
The tension is perhaps most visible in California, where a new AI transparency law took effect on January 1, 2026, requiring AI developers to publicly disclose high-level information regarding their training data. xAI, Elon Musk’s AI company, is currently challenging this law in federal court, asserting that mandatory disclosure of training data constitutes an unconstitutional taking of trade secrets. This dispute has created an unavoidable collision between the public interest in transparency and the commercial imperative of secrecy, and existing law is ill-equipped to resolve it.
II. Why Trade Secret Law Needs to Be Updated
Trade secret law was originally built upon two foundational concepts: “reasonable measures” to maintain secrecy, and “non-ascertainability,” meaning that the information cannot be readily obtained by proper means. Under this framework, a company’s source code, algorithms, and training methodology could be safely protected by storing them on internal servers, restricting access, and requiring employees to sign non-disclosure agreements.
The development of AI systems has fundamentally altered this landscape. AI development requires vast computational resources, massive training datasets, and complex algorithms refined over years of research. Yet the very sophistication that makes these systems valuable also makes them newly vulnerable. Today, adversaries may exploit prompt injection attacks, data scraping, and AI-enabled inference technologies to reverse-engineer proprietary processes directly from public-facing systems, without ever accessing the underlying source code. Courts are only beginning to grapple with these realities, and the law has not kept pace.
III. Transparency and Secrecy: The New Disclosure Rules in California
California’s AI transparency law exemplifies the conflict between disclosure and secrecy at its sharpest. Companies are required to make publicly available high-level information about how their AI systems have been trained. Businesses, however, argue that disclosing details about their training programs hands competitors a significant advantage. Regulators, by contrast, contend that the public has a legitimate interest in understanding how AI systems are built, particularly regarding safety, fairness, and the risk of embedded bias.
This conflict is not unique to California; it reflects a global regulatory trend toward greater AI transparency. The legal system has not yet developed a principled framework for determining when the public interest in disclosure overrides a company’s interest in secrecy, and the costs of that uncertainty are mounting.
IV. The “What” vs. “How” Problem
On January 20, 2026, the Seventh Circuit issued a ruling establishing a critical distinction in trade secret law. The court held that the “what” (a functional description of what software does) is not protectable as a trade secret, while the “how” (the specific algorithms, source code, and architecture) is protectable.
This distinction is collapsing in the age of AI. When a model’s outputs can be reverse-engineered simply by interacting with it, and a sophisticated user can infer the underlying logic of a system by asking the right questions, the boundary between “what” and “how” dissolves. The trade secret, in effect, becomes visible through the product itself. Courts have not yet articulated a coherent doctrine for this reality.
V. The International Response
United States: Congress is considering comprehensive trade secret legislation to address the global AI arms race. Several bills currently before Congress address trade secret theft, the enforcement of trade secret rights, and the recognized inadequacy of existing mechanisms (the Defend Trade Secrets Act and the Economic Espionage Act) to address cross-border theft and modern forms of misappropriation.
China: In February 2026, China’s State Administration for Market Regulation published five significant cases of unfair competition involving AI, establishing that algorithms can qualify as trade secrets and that violations may attract fines of up to RMB 360,000. The message is unambiguous: innovation must be protected, even as AI transforms the competitive landscape.
European Union: The EU is attempting to maintain the human authorship requirement for intellectual property while simultaneously increasing transparency obligations for AI systems. In doing so, it is pursuing two goals that can appear contradictory, promoting innovation through protection, and promoting public safety through disclosure.
Global Trends: In December 2025 and January 2026, intellectual property experts worldwide observed that trade secrets are under greater pressure than at any prior point in history, owing to heightened employee mobility, escalating cyber-attacks, and intensifying global competition in AI development.
VI. The Uncomfortable Truth: Current Trade Secret Law Is Partly Obsolete
Trade secret law needs to be updated, but not in the direction many assume. The problem is not that we are protecting too much; it is that we are protecting too little, while simultaneously demanding a degree of secrecy that is no longer achievable. The current framework is incomplete for four reasons:
1. Reverse Engineering Is Easier Than Ever Before
AI systems operate in public. A competitor can test their boundaries, probe their outputs, and infer their underlying logic without ever accessing the underlying source code. Traditional trade secret law assumes that keeping source code confidential is sufficient protection. In an AI context, this assumption is no longer valid. The current standards for “reasonable measures” are inadequate to protect AI trade secrets from inference attacks conducted through publicly available interfaces.
2. The Paradox of Transparency
Regulators increasingly demand transparency from AI systems to ensure safety, fairness, and accountability. Intellectual property law, however, rewards secrecy. These two paradigms are in direct conflict, creating legal uncertainty that discourages investment and stifles innovation. Without a principled framework for reconciling them, companies face an impossible choice between regulatory compliance and competitive survival.
3. Globalization Outpaces Enforcement
Trade secret theft now occurs instantaneously across borders. The existing legal framework (U.S. courts applying U.S. remedies) cannot effectively reach actors in jurisdictions such as China or Russia, where cooperation on intellectual property enforcement is limited. A domestic statute without international reach is, in many cases, a right without a remedy.
4. Employee Mobility in Hyper-Competitive Markets
When an AI specialist moves from Google to OpenAI, the questions of what they know and what they may legitimately use become acutely important. While the Google case demonstrates that trade secret law can address the most egregious violations, the standards for determining the permissible scope of an employee’s knowledge remain ambiguous in the context of the AI industry, where human expertise is the most valuable trade secret of all.
VII. The Solution: A Revised Framework for AI-Era Trade Secrets
Seven reforms are needed to update trade secret law for the AI era:
1. Create a Tiered Classification System for AI Trade Secrets
Rather than binary protection (either a trade secret or not), legislation should establish three tiers:
Tier 1: Core Algorithmic Secrets. Provide the strongest protection for the foundational mathematical operations, architecture, and training methodology that are unique to a given AI system, with correspondingly serious consequences for misappropriation.
Tier 2: Data and Training Inputs. Require documentation of reasonable safeguards (encryption, access controls, audit logs) calibrated to the sensitivity of the training data.
Tier 3: Performance Specifications. Permit companies to disclose what a system does and how it performs, while protecting the underlying mechanisms that produce those results.
2. Harmonize Trade Secret Protection with Mandatory Transparency Regimes
Rather than choosing between trade secrets and transparency, legislation should:
– Permit companies to invoke trade secret exemptions from transparency requirements, modeled on legislative proposals addressing AI training data and copyright.
– Grant courts authority to issue protective orders preventing disclosure of sensitive information during regulatory proceedings.
– Develop redacted disclosure regulations permitting companies to provide risk mitigation information without revealing underlying algorithms.
3. Enhance Cross-Border Enforcement
While the Defend Trade Secrets Act provides domestic remedies, a revised framework must also include:
– Mutual Legal Assistance Treaties (MLATs) providing international cooperation in trade secret enforcement for AI systems.
– International data-sharing agreements facilitating coordination between governments on economic espionage prosecutions.
– Sanction regimes that impose costs on governments that knowingly shelter AI trade secret thieves.
4. Update the Standard of “Reasonable Measures” for AI Systems
The Google case demonstrates that companies can prevail in trade secret litigation, but only when they have meticulously documented their security efforts. New standards for reasonable measures should account for context:
– Public-facing AI systems: Reasonable measures include robust access controls, behavioral monitoring, and regular auditing of model interactions.
– Internal AI systems: Existing measures (firewalls, non-disclosure agreements, clearance protocols) remain adequate.
– Licensed AI models: Anti-reverse-engineering provisions and non-compete clauses in licensing agreements must be explicitly and consistently enforceable.
5. Create an Optional AI Trade Secret Registry
Establish a voluntary, non-public registry permitting companies to formally record and date-stamp their trade secret assertions. Like copyright registration, this would:
– Shift the burden to alleged infringers to demonstrate independent development or lawful reverse engineering.
– Provide definitive proof of ownership in misappropriation actions.
– Reduce litigation costs by creating an evidentiary baseline for trade secret ownership disputes.
6. Establish Guidelines on Prompt Injection Attacks and AI Reverse Engineering
Courts and regulators need clear guidance on what constitutes improper means of obtaining trade secrets in an AI context. Specifically:
– Prohibit prompt injection attacks designed to extract proprietary methods from AI systems.
– Clarify that data scraping from public APIs violates trade secret law when the underlying data is confidential.
– Require disclosure of AI-enabled reverse engineering techniques used in the development or competitive analysis of AI systems.
7. Establish a Middle Ground Between Innovation Incentive and Public Interest
The most important reform is conceptual: trade secret protection need not mean total secrecy. A modern framework should:
– Mandate licensing for safety-critical AI systems (those with potential for catastrophic failure), ensuring public access to safety-relevant information.
– Require algorithmic audits by independent third-party auditors, with trade secret protections maintained through confidentiality agreements.
– Develop “public interest exceptions” permitting disclosure in litigation involving AI systems implicated in bias, discrimination, or safety failures.
Conclusion: A Framework for the Future, Not the Past
Trade secret law is not obsolete; it is incomplete. The real problem is that we are attempting to fit AI, an entirely new category of technology, into a legal framework designed for software, manufacturing, and traditional forms of innovation. The fit is poor, and the gaps are becoming dangerous.
What is needed is a faster legislative response: most pending AI bills are already twelve or more months behind the technology they purport to govern. International coordination is equally essential, as trade secrets stolen abroad are of limited legal consequence unless enforcement can reach across oceans. And perhaps most importantly, the apparent opposition between transparency and trade secret protection must be reframed as a design problem, not an irreconcilable conflict.
The conviction of the Google engineer demonstrates that trade secret law can function when violations are clear and evidence is available. The California transparency law demonstrates that the public has legitimate and enforceable interests in disclosure. The xAI lawsuit demonstrates that the current legal framework generates conflicting obligations that neither innovators nor regulators can easily navigate.
The path forward is not to eliminate trade secrets or to resist transparency; it is to build a better system: one in which innovation is protected, the public is meaningfully informed, and the legal certainty necessary to invest in AI development is restored.


