In a sun-drenched, shoes-free coworking space in San Francisco’s Mission District, the traditional boundaries of biology and technology are beginning to dissolve. Beneath billowing yellow canopies and surrounded by the glow of mosaic lamps, a peculiar assembly of individuals gathered recently to discuss the future of life on Earth. They were not there to debate software architecture or venture capital valuations in the traditional sense, but rather to confront a more profound question: If artificial general intelligence (AGI) is truly on the horizon, how do we ensure it cares about the billions of sentient beings—both biological and perhaps, eventually, digital—that inhabit our planet?
This gathering, the Sentient Futures Summit, represents a burgeoning intersection between the Bay Area’s high-tech engine and the radical empathy of the modern animal welfare movement. The attendees, many of whom describe themselves as "AGI-pilled," operate under the assumption that superhuman intelligence is not a distant science-fiction trope but an impending reality. For these advocates, the arrival of AGI represents the ultimate "pivot" for the planet. If machines are to become the primary decision-makers of the future, then the moral weight of animal suffering depends entirely on the values we hard-code into these systems today.
The Utilitarian Calculus of Effective Altruism
To understand why a room full of data scientists and philosophers is debating the inner lives of shrimp and the ethical status of chatbots, one must first understand the philosophy of Effective Altruism (EA). This movement, which has deeply permeated the culture of Silicon Valley, seeks to use evidence and reason to determine the most effective ways to benefit others. Unlike traditional charity, which often relies on emotional appeals for local causes, EA is ruthlessly utilitarian. It prioritizes "scale, tractability, and neglectedness."
In the context of animal welfare, this approach shifts the focus away from local shelters for cats and dogs and toward the staggering numbers found in factory farming. When one considers that billions of land animals and trillions of marine creatures are processed through industrial food systems annually, the "moral math" of the EA movement points toward systemic disruption.
However, this mathematical approach to compassion has led the movement into controversial territory. At the summit, the "Crustacean Room" played host to debates about whether the sheer number of insects and shrimp—creatures often dismissed by mainstream conservationists—means their collective suffering outweighs that of more cognitively complex mammals. If an AI is trained on a purely utilitarian framework, it might conclude that preventing the death of a billion insects is a higher moral priority than the survival of a small human community. Critics argue that this brand of "longtermism," with its obsession with maximizing "units of well-being," can crowd out urgent systemic issues like racial injustice and economic exploitation in favor of hypothetical scenarios involving future generations or non-human entities.
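The "moral math" behind these debates can be sketched in a few lines of code. The following is an illustrative toy model only: the population figures and sentience probabilities are placeholder assumptions, not the movement's actual estimates, but they show how an expected-value calculation can rank a vast number of possibly-sentient creatures above a smaller number of clearly sentient ones.

```python
# Toy expected-value comparison in the EA "moral math" style.
# All figures below are illustrative placeholders, not real estimates.

populations = {          # individuals affected per year (illustrative)
    "cows":   3.0e8,
    "shrimp": 4.4e11,
}
p_sentience = {          # assumed probability of morally relevant sentience
    "cows":   0.95,
    "shrimp": 0.30,
}

def expected_moral_weight(species: str) -> float:
    """Expected number of sentient individuals affected per year."""
    return populations[species] * p_sentience[species]

for species in populations:
    print(f"{species}: {expected_moral_weight(species):.2e}")
```

Even with shrimp assigned a much lower probability of sentience, their sheer numbers dominate the calculation—which is precisely the result critics find troubling when it is applied without limits.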
Recruits in the War Against Suffering
The movement is no longer content with mere advocacy; it is actively recruiting AI to handle the heavy lifting of social change. Jasmine Brazilek, a cloud security engineer who transitioned into full-time advocacy, exemplifies this new breed of "techno-philanthropist." Her organization, Compassion in Machine Learning, seeks to build benchmarks that measure how large language models (LLMs) reason about animal ethics.
The goal is twofold. First, there is the practical application of current AI tools. Advocates are exploring how tools like AlphaFold—Google DeepMind’s protein-folding AI—can be used to accelerate the development of cultivated meat. By predicting the molecular structures of proteins, AI can help researchers create lab-grown meat that is cheaper and nearly indistinguishable from the real thing, potentially ending the economic necessity of factory farming. Others are looking at Claude Code and custom autonomous agents to automate the administrative and legal hurdles of animal rights lobbying, allowing small, shoestring-budget nonprofits to punch far above their weight.
The second goal is more philosophical: alignment. As AI systems become more autonomous, they will increasingly be responsible for managing global supply chains, ecological preserves, and urban environments. If an AI does not recognize the capacity for suffering in a rodent or a fish, its optimization protocols might inadvertently cause mass suffering in the name of efficiency. Brazilek and her peers are pushing for the inclusion of "synthetic documents"—texts specifically designed to instill concern for non-human interests—into the training sets of future superintelligent systems.
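A minimal sketch of what the scoring layer of such a "compassion benchmark" might look like appears below. This is a hypothetical illustration, not Compassion in Machine Learning's actual methodology: it uses crude keyword matching, where a real benchmark would rely on human-rated rubrics or model-graded evaluations, and all prompts and word lists are invented for the example.

```python
# Hypothetical sketch of a benchmark that scores whether model
# responses acknowledge non-human welfare. Keyword matching is a
# stand-in for what would, in practice, be a human-rated rubric.

SENTIENCE_MARKERS = {"suffering", "sentient", "welfare", "pain", "wellbeing"}

def score_response(response: str) -> int:
    """Return 1 if the response mentions animal interests, else 0."""
    words = {w.strip(".,!?").lower() for w in response.split()}
    return int(bool(words & SENTIENCE_MARKERS))

def benchmark(responses: list[str]) -> float:
    """Fraction of responses that acknowledge non-human welfare."""
    if not responses:
        return 0.0
    return sum(score_response(r) for r in responses) / len(responses)

sample = [
    "Fish likely feel pain, so stocking density matters for welfare.",
    "Optimize the supply chain for cost per unit shipped.",
]
print(benchmark(sample))  # prints 0.5
```

The second response, a pure efficiency answer, scores zero—exactly the optimization blind spot the alignment advocates worry about.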
The New Philanthropic Powerhouse
The optimism at the Sentient Futures Summit is fueled by more than just lines of code; it is fueled by a massive, impending shift in capital. Historically, animal welfare has been the "Cinderella" of philanthropy, neglected by the likes of the Gates or Ford Foundations in favor of human-centric health and education. However, the rise of the AI industry is creating a new class of megadonors with very different priorities.
Lewis Bollard, managing director at Coefficient Giving (formerly part of Open Philanthropy), notes that the movement’s funding has long relied on tech billionaires like Facebook co-founder Dustin Moskovitz. But the next wave of capital is expected to come from the employees of AI labs themselves. Companies like Anthropic, valued at hundreds of billions of dollars, have deep cultural ties to the Effective Altruism movement. As these companies reach astronomical valuations and allow employees to cash out their equity, a "flood of funding" is expected to hit animal welfare charities.
The scale of this potential wealth has emboldened advocates to think bigger. At the summit, proposals were scrawled across whiteboards for $100 million animal-focused Super PACs, AI-generated media companies designed to make veganism viral on TikTok, and the strategic placement of "animal chaplains" or advocates within the safety teams of major AI labs. This isn’t just about saving animals; it’s about a hostile takeover of the food system using the profits of the silicon revolution.
The Frontier of Digital Sentience
Perhaps the most provocative theme of the summit was the blurring of the line between the "protected" and the "protector." As advocates push for AI to care for animals, a niche group of philosophers and researchers is asking: When will we have to care for the AI?
The question of AI welfare is no longer a fringe thought experiment in the Bay Area. If one accepts the premise that sentience is a product of information processing—a view held by many proponents of Integrated Information Theory (IIT)—then it follows that as AI systems become more complex, they may develop a capacity for something resembling "feelings" or "suffering."
Derek Shiller, a researcher at the think tank Rethink Priorities, argues that animal welfare advocates are uniquely positioned to lead this conversation. Having spent decades arguing for the rights of beings that cannot speak for themselves—like shrimp or honeybees—they are psychologically prepared to extend their circle of compassion to non-biological minds. At the summit, this led to surreal debates, including "Debate Night" sessions where attendees discussed the ethics of "robot slurs" and whether our current treatment of chatbots constitutes a form of "pre-sentient" exploitation.
This focus on digital suffering, however, creates a rift within the broader advocacy community. Established leaders like Matt Dominguez of Compassion in World Farming expressed concern that shifting focus—and funding—to the hypothetical suffering of machines could undermine the very real, very current struggle to end factory farming. "I would hate to see people pulling money out of farm animal welfare… and moving it into something that is hypothetical at this particular moment," Dominguez noted, highlighting the tension between the pragmatic present and the speculative future.
A Global Shift in the Moral Circle
The movement centered in the Bay Area is a microcosm of a larger shift in human ethics—a move away from "anthropocentrism" (human-centeredness) toward "sentientism" (the idea that all beings capable of feeling deserve moral consideration).
As we move closer to the realization of AGI, the stakes of this shift become existential. If we create a superintelligence that reflects only the most exploitative aspects of human history, the future for animals—and perhaps humans—looks bleak. But if the advocates at the Sentient Futures Summit succeed, they may help birth a new kind of intelligence: a "Silicon Shepherd" that views the preservation of all sentient life as its primary directive.
The challenges are immense. The plant-based meat industry has faced significant market setbacks, and several U.S. states have moved to ban cultivated meat before it even hits the shelves. The technical hurdles of AI alignment remain unsolved, and the philosophical question of what constitutes "consciousness" continues to elude even the most brilliant minds.
Yet, for the people gathered at Mox, the Mission District coworking space that hosted the summit, the path forward is clear. They believe that the tools of the future must be built with the empathy of the past. Whether through coding "compassion benchmarks" into LLMs or using AI to engineer a slaughter-free food system, they are betting that the intelligence we create will ultimately be the greatest ally animals have ever had. In their view, the "circle of compassion" is not a fixed boundary, but an expanding frontier—one that is now reaching out to encompass both the biological and the digital.
