The proliferation of sophisticated generative artificial intelligence has inadvertently created a new, highly specialized digital economy centered on the creation and distribution of bespoke, nonconsensual synthetic media. At the heart of this volatile ecosystem lies Civitai, an expansive online marketplace dedicated to the trade of AI-generated content and the underlying models that produce it. Despite securing $5 million in investment from the venture capital powerhouse Andreessen Horowitz (a16z), the platform has become the central hub for users seeking to commission and purchase custom fine-tuning files specifically engineered to generate high-fidelity, often pornographic, deepfakes of real individuals, including public figures. A recent analysis by researchers from Stanford and Indiana University illuminates the disturbing scale and gendered nature of this trade, exposing critical failures in platform governance and investor due diligence.
The study examined user requests, known on the platform as "bounties," submitted between mid-2023 and late 2024, and revealed a systemic tilt toward content that violates privacy and consent. While a majority of bounties sought animated or general creative content, a significant and highly problematic subset focused exclusively on generating deepfakes of real people. Alarmingly, 90% of these deepfake requests targeted women. This demographic skew underscores how cutting-edge generative technology is being weaponized to scale gendered abuse far beyond the reach of traditional online harassment.
The mechanism enabling this illicit market is not the exchange of final explicit images but the trade of sophisticated technical blueprints. Civitai functions as a repository not only for finished images and videos but, more crucially, for fine-tuning files known as LoRAs (Low-Rank Adaptations). LoRAs are small, highly efficient fine-tuning modules designed to be plugged into large foundational AI image generators, such as Stable Diffusion. These modules essentially "teach" the base model a specific style, subject, or, in this context, the detailed visual identity of a specific person. Because LoRAs are minuscule compared to the massive base models, they are easily created and shared, and they are routinely used to bypass the safety guardrails and content moderation filters that developers built into the foundational AI systems.
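To make the size claim concrete, here is a minimal arithmetic sketch in Python; the matrix dimensions and rank are illustrative assumptions, not measurements of any particular model.

```python
# Minimal sketch of why LoRA files are tiny relative to base models.
# A LoRA replaces a full update to a weight matrix (d x k parameters)
# with two low-rank factors, B (d x r) and A (r x k), where r << min(d, k).

def lora_param_count(d: int, k: int, r: int) -> tuple[int, int]:
    """Return (full-update parameters, LoRA parameters) for one weight matrix."""
    full = d * k
    lora = d * r + r * k
    return full, lora

# Hypothetical dimensions for a single attention projection in a
# Stable Diffusion-scale model; the numbers are for illustration only.
full, lora = lora_param_count(d=1024, k=1024, r=8)
print(f"full update: {full:,} params; LoRA: {lora:,} params "
      f"({lora / full:.1%} of the full update)")
# -> full update: 1,048,576 params; LoRA: 16,384 params (1.6% of the full update)
```

Applied across a model's layers, this is why a complete LoRA typically weighs in at megabytes while the base checkpoint it modifies runs to gigabytes, and why the files are so easy to create, trade, and swap in.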
The Stanford and Indiana University researchers found that LoRAs were the primary objective in 86% of all deepfake requests on the platform. The bounty system incentivizes the creation of these specialized fine-tuning files: users post requests, often for “high quality” models, that name specific celebrities or influencers directly, such as Charli D’Amelio or Gracie Abrams, and often include links to social media profiles to facilitate the scraping of the source imagery needed to train the LoRA.
The specificity of these requests demonstrates a clear intent toward meticulous, high-fidelity replication for harmful purposes. Requesters often stipulated that models be able to generate the target individual’s entire body, accurately reproduce intricate details like tattoos, or allow precise manipulation of features like hair color or pose. The targets were often clustered in specific niches, such as ASMR artists, suggesting organized, targeted harassment campaigns. Disturbingly, some requests even targeted individuals explicitly identified as the user’s spouse, indicating the use of this infrastructure for intimate partner abuse. Creators who successfully delivered the requested LoRA files received payments, often ranging from $0.50 to $5, a strikingly low financial barrier for content that carries immense emotional and legal risk. The effectiveness of the system is underscored by the finding that nearly 92% of all deepfake bounties resulted in an awarded payment.
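To ground those figures, the sketch below shows one way a single bounty might be represented for the kind of quantitative coding the study describes; the schema, field names, and example categories are hypothetical illustrations, not the researchers' actual instrument.

```python
# Hypothetical record structure for coding one bounty in an analysis like
# the one described; field names and categories are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class BountyRecord:
    bounty_id: str
    posted_date: str              # within the mid-2023 to late-2024 study window
    requested_artifact: str       # e.g., "lora" (86% of deepfake requests), "image", "video"
    targets_real_person: bool     # True marks the deepfake subset
    target_gender: Optional[str]  # roughly 90% "female" among deepfake requests
    payout_usd: Optional[float]   # typically $0.50 to $5 when a payment was awarded
    awarded: bool                 # nearly 92% of deepfake bounties were awarded

def share_awarded(records: list[BountyRecord]) -> float:
    """Fraction of deepfake bounties that ended in an awarded payment."""
    deepfakes = [r for r in records if r.targets_real_person]
    return sum(r.awarded for r in deepfakes) / len(deepfakes) if deepfakes else 0.0
```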
This decentralized, low-cost model represents a significant evolution in the nonconsensual content landscape. Whereas earlier deepfake generation required specialized technical skill and powerful computing resources, the LoRA marketplace democratizes the process, making customized abuse accessible to anyone with a few dollars and a rudimentary grasp of prompt engineering.
Contradictory Governance and the Enabling Infrastructure
Civitai’s role extends beyond merely hosting the trade; critics argue the platform actively facilitates the creation of banned content. Although the company announced a ban on all deepfake content in May 2025, having previously banned only sexually explicit deepfakes of real people, a substantial legacy of nonconsensual content remains on the site. Winning submissions that fulfilled pre-ban deepfake requests often remain available for purchase, illustrating a passive approach to remediation.
Furthermore, the platform provides extensive documentation and resources that indirectly guide users toward circumventing safety measures. Civitai hosts user-written articles and educational resources that detail how to employ external tools to further customize generated outputs (altering a subject’s pose, for example) and that give specific instructions on how to prompt models to generate explicit material.
Matthew DeVerna, a postdoctoral researcher at Stanford’s Cyber Policy Center and a leader of the study, emphasized the platform’s dual role: “Not only does Civitai provide the infrastructure that facilitates these issues; they also explicitly teach their users how to utilize them.” This combination of toolset (LoRAs) and instructional guidance (educational articles) creates an environment where illicit activity is not just tolerated but structurally supported. The data bears this out: researchers noted a measurable increase in the overall volume of pornographic material on the platform, with the majority of weekly requests shifting toward Not Safe For Work (NSFW) content.
The Financial and Legal Quagmire
The inherent risk of facilitating nonconsensual content has begun to affect Civitai’s financial operations. In May 2025, the company’s primary credit card processor terminated the relationship over the ongoing issues surrounding nonconsensual content. This forced Civitai to pivot its monetization strategy: users wishing to purchase explicit content must now use alternative payment methods, such as cryptocurrency or gift cards, to acquire the site’s internal currency, called Buzz. The move suggests a conscious decision to preserve the revenue stream from explicit content while insulating the company from traditional financial accountability systems.
The involvement of Andreessen Horowitz, a firm renowned for championing disruptive technologies and often adopting a “move fast and break things” philosophy, places a spotlight on the ethical obligations of venture capital in the age of generative AI. The $5 million investment in November 2023 was intended to help Civitai become the central, approachable hub for sharing AI models. However, the firm’s investment in a platform known for enabling abuse raises significant questions about the due diligence applied to companies operating at the intersection of powerful customization tools and user-generated content.
Legal scholars point out that while platforms typically enjoy broad immunity for user-generated content under Section 230 of the Communications Decency Act in the United States, that protection is not absolute. Ryan Calo, a professor specializing in technology law at the University of Washington, highlights a crucial limitation: platforms cannot “knowingly facilitate illegal transactions.” When a platform provides the specific infrastructure (LoRAs), the payment mechanism (Buzz bounties), and the technical guidance to generate content that its own terms explicitly ban and that may be illegal, it treads perilously close to crossing the line from passive host to active facilitator.
Industry Implications and the Regulatory Disconnect
The controversy surrounding Civitai is not an isolated incident; it reflects a broader industry pattern in which the speed of technological innovation outpaces both corporate ethics and regulatory frameworks. The deepfake crisis, recently illustrated by explicit images generated by X’s Grok chatbot, has sparked public outcry, yet the response from platforms and investors remains fragmented.
There is a significant and dangerous disparity in how platforms address different categories of nonconsensual content. Following a 2023 report from the Stanford Internet Observatory, which linked Civitai to the distribution of models used in the creation of Child Sexual Abuse Material (CSAM), the company joined industry leaders like OpenAI and Anthropic in adopting design principles aimed at preventing the spread of CSAM. This demonstrates that when faced with unambiguous legal and ethical pressure regarding minors, platforms will act.
However, the same urgency is largely absent when addressing nonconsensual deepfakes of adults. Calo notes the stark contrast: "Adult deepfakes have not gotten the same level of attention from content platforms or the venture capital firms that fund them. They are overly tolerant of it." The current legal and civil systems, he argues, fail to adequately protect victims of adult nonconsensual synthetic imagery, suggesting a societal and corporate resignation to this form of digital violence.
The issue is further complicated by the permissive culture that venture capital often fosters. Civitai CEO Justin Maier, in a video shared by a16z, articulated the platform’s goal of making the technical, niche space of AI models more "approachable" to a wider audience. While this pursuit of democratization fuels innovation, it also sacrifices stringent control for maximum adoption and growth, leading to what DeVerna describes as an intentional strategy to "do as little as possible" in order to foster "creativity," even when that creativity manifests as targeted abuse.
Future Impact and the Governance of Customization
Looking ahead, the LoRA economy presents a formidable challenge to the future of AI governance. As generative models become more powerful, and customization methods like LoRAs and other fine-tuning techniques become standard, the ability of foundational model creators (like Stability AI or OpenAI) to enforce safety filters diminishes rapidly. The abuse is decentralized, occurring at the peripheral customization layer, not the core model.
Regulators worldwide are struggling to draft legislation that can keep pace with this level of technological fragmentation. Future laws must shift focus from regulating the output (the image itself) to regulating the tools and infrastructure used to create harmful content. This requires holding platforms accountable for the trade of instructional files specifically designed to circumvent safety measures, viewing them not as neutral hosts but as active facilitators in the chain of abuse.
The continued monetization of nonconsensual content by VC-backed entities signals a need for internal reckoning within the tech investment community. If venture capital funds platforms that prioritize user growth and unfiltered creativity over fundamental safety and ethical compliance, they risk becoming ethically compromised architects of the next generation of online harassment. The Civitai case serves as a crucial inflection point, forcing a debate over whether the pursuit of open-source AI innovation can coexist with responsible platform design, or if the lucrative underground market for customized abuse will continue to define the boundaries of generative technology. Proactive moderation, systemic design changes that limit the ability of LoRAs to target specific real individuals, and a clear legal framework that defines "facilitation" are no longer optional—they are prerequisites for a safe digital environment.
