Nabanita De isn’t building another AI company. She’s attempting something far more radical: a system that decides who gets to use data—and on whose terms.
The modern internet runs on an unspoken bargain: everything you create, publish, or upload can—and likely will—be used in ways you never explicitly agreed to. For years, that tradeoff powered growth. In the age of artificial intelligence, it is becoming untenable.
De believes the system itself is broken—and that fixing it requires more than regulation, policy debates, or after-the-fact enforcement. It requires infrastructure.
As founder and CEO of PrivacyLicense.ai, De is working on what she describes as a “privacy operating system” for the AI era, anchored by a concept that sounds deceptively simple: an AI Privacy License. The idea is to transform permission into something programmable—embedding rights, restrictions, and economic terms directly into how data is accessed and used.
If it works, it would mark a fundamental shift in how the internet operates—not as an open extraction layer, but as a negotiated ecosystem where usage is defined upfront, not disputed later.
From Information Chaos to Control Systems
Long before the current wave of generative AI, De was already confronting the consequences of an unstructured internet. During her time at the University of Massachusetts Amherst, she led the development of a browser extension designed to detect misinformation on social platforms. The project spread rapidly, drawing global attention and surfacing a deeper realization: the problem wasn’t just false content—it was the absence of systems governing how information moves and evolves.
That insight now sits at the core of her work. Where earlier efforts attempted to correct the outputs of the internet, De is focused on redesigning its inputs—establishing rules at the point where data enters and interacts with intelligent systems.
The Missing Layer in the AI Economy
Artificial intelligence has advanced faster than the frameworks meant to contain it. Models are trained on vast datasets, often without clear attribution, compensation, or consent. The legal system, built for a slower era, is struggling to keep pace.
De’s approach sidesteps that lag. Instead of relying on enforcement after violations occur, she is attempting to encode compliance into the system itself. In her view, privacy is not a constraint on innovation but the mechanism that makes scalable, trustworthy innovation possible.
It is a contrarian stance in an industry still optimized for speed over structure—but one that is gaining urgency as disputes over ownership and usage intensify.
Scaling Trust in a System Built for Extraction
Before founding her current venture, De worked within large-scale technology environments where data flows are measured not in megabytes but in exabytes, and where small inefficiencies can cascade across billions of users. That experience shapes her conviction that any meaningful solution must operate at the level of infrastructure, not policy.
The challenge, however, is not just technical. Systems that redefine control over data inevitably collide with existing incentives. The companies that benefit most from unrestricted access are unlikely to adopt limitations without clear economic upside. At the same time, fragmented global regulations make universal standards difficult to implement.
What De is attempting, then, is not simply a product—but a shift in alignment, where respecting data ownership becomes more valuable than exploiting its ambiguity.
Rewriting the Terms of Participation
There is a quiet but significant transition underway in the digital economy. Creators are questioning how their work is used. Enterprises are reassessing risk. Governments are exploring new regulatory boundaries. What has been missing is a unifying system that translates these pressures into something actionable.
De’s vision positions privacy as that system—a shared language between humans and machines, where permissions are not buried in legal documents but expressed in formats technology can interpret and enforce in real time.
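To make the idea concrete: a permission expressed "in formats technology can interpret and enforce" might look something like the sketch below. This is purely illustrative—every field name and rule here is an assumption for the sake of example, not PrivacyLicense.ai's actual format or API.

```python
# Hypothetical sketch of a machine-readable data-usage license.
# All field names and logic are illustrative assumptions, not the
# actual PrivacyLicense.ai format.
from dataclasses import dataclass


@dataclass
class UsageLicense:
    owner: str
    allowed_uses: set[str]           # uses the owner has explicitly granted
    requires_attribution: bool = True
    fee_per_use_usd: float = 0.0     # economic terms travel with the data

    def permits(self, use: str) -> bool:
        """A use is allowed only if it was granted upfront—never by default."""
        return use in self.allowed_uses


license_record = UsageLicense(
    owner="example-author",
    allowed_uses={"search-indexing", "archival"},
)

# Anything not explicitly granted is denied—inverting the web's
# open-by-default assumption.
print(license_record.permits("archival"))     # True
print(license_record.permits("ai-training"))  # False
```

The key design choice the sketch illustrates is the default: rather than permitting everything and litigating exceptions later, nothing is allowed unless the owner granted it at the point the data enters the system.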
It is an ambitious bet, and far from guaranteed. Standards only take hold when they become unavoidable, and the path from idea to adoption is often longer than expected.
Yet the direction of pressure is clear. As AI continues to expand, the cost of ambiguity around data usage is rising. The internet’s original model—open by default, governed later—is beginning to fracture.
What Comes Next
If the last two decades of the internet were defined by access and scale, the next may be defined by boundaries—who can use what, how, and why. That transition will not be driven by technology alone, but by the systems that determine how technology is allowed to operate.
Nabanita De is attempting to build one of those systems.
And if she’s right, the future of the internet won’t be decided by the most powerful models—but by the rules that govern them.
