The current generation of large language models (LLMs) has successfully mastered the art of information processing, excelling at discrete, transactional tasks such as generating code snippets, summarizing lengthy financial reports, or accurately answering complex factual queries. These systems function brilliantly as sophisticated, individual assistants, optimized for a single user interaction loop. However, the operational reality of modern business, and indeed most aspects of human endeavor, is inherently collaborative, messy, and characterized by asynchronous, multi-party decision-making. This realm of friction, which encompasses coordinating diverse teams, managing competing organizational priorities, and maintaining long-term alignment across evolving projects, remains largely untouched by current foundation-model architectures.
A consensus is rapidly emerging among leading AI practitioners: the next major frontier for foundation models is not merely increasing scale or enhancing factual accuracy, but embedding genuine social intelligence into the model architecture itself. Humans&, a nascent but remarkably well-capitalized startup, has positioned itself directly at this intersection. Founded by experienced alumni of the industry's most elite research labs, including Anthropic, Meta, OpenAI, xAI, and Google DeepMind, the company recently secured an unprecedented $480 million seed funding round. This massive influx of capital signals profound investor confidence in its mission: to engineer what the founders term a "central nervous system" for the emerging human-plus-AI economy.
While initial public narratives have often framed Humans& as promoting "AI for empowering humans"—a common marketing trope aimed at mitigating widespread fears of job displacement—the underlying technical ambition is far more radical. The goal is the creation of an entirely new foundation model specifically engineered for collaborative complexity, fundamentally distinct from models primarily optimized for retrieval, generation, or isolated task execution.
Andi Peng, a co-founder and former Anthropic employee, articulated this transition, suggesting that the industry is concluding the "first paradigm of scaling," characterized by highly capable, question-answering models tailored to specific technical verticals. "We are now entering what we believe to be the second wave of adoption," Peng noted, "where the average consumer or user is trying to figure out what to do with all these things, and the answer lies in effective coordination."
The timing for this pivot is strategically astute. Enterprise adoption is shifting rapidly from experimental chatbot interfaces to sophisticated, autonomous agents. While the underlying models possess increasing levels of competence, organizational workflows remain fragmented, inefficient, and resistant to optimization. The core challenge is not the capability of a single AI, but the coordination of multiple AIs and human stakeholders operating within complex, often contradictory, systems. Furthermore, this focus on augmenting collective human efficacy directly addresses the pervasive anxiety surrounding AI adoption, reframing the technology not as a threat to labor, but as an indispensable tool for organizational cohesion.
Though only three months old at the time of the funding announcement, Humans& successfully leveraged the pedigree of its founding team and its compelling philosophical approach to secure its staggering seed investment. While a concrete product remains undefined, the team has outlined its target domain: multi-user and multiplayer contexts, specifically aiming to supplant existing collaboration platforms like Slack, Google Docs, or Notion. This indicates a high-stakes strategy to own the operational layer of team interaction, serving both large enterprise clients and sophisticated consumer groups.
Eric Zelikman, CEO and co-founder of Humans&, formerly a researcher at xAI, emphasized that the startup is building a product and model intrinsically centered on communication and collaboration. The focus is on enabling people to work together and communicate more effectively, both among themselves and in concert with nascent AI tools.
Zelikman provided a relatable anecdote illustrating the current coordination deficit: "When you have to make a large group decision, often it comes down to someone taking everyone into one room, getting everyone to express their different camps about, for example, what kind of logo they’d like." This example highlights the tedious, time-consuming nature of achieving consensus—a process ripe for intervention by a socially aware AI.
Crucially, the new model is designed to interact not as a sterile search engine or a transactional bot, but as a colleague or peer. This necessitates training the AI to ask questions with genuine value recognition, seeking to build context and rapport. Zelikman pointed out that existing chatbots, while constantly querying users for clarification, often do so without an intrinsic understanding of the question’s long-term utility. Their optimization functions are fundamentally flawed for collaborative environments, prioritizing immediate user satisfaction or singular factual correctness over relational depth and persistent context.
The development process at Humans& involves the concurrent co-evolution of the foundation model and the user interface. Co-founder Peng explained that as the model’s capabilities in social intelligence and multi-party memory improve, the interface and the behaviors the model can exhibit will be continuously refined into a viable, functional product. This iterative design process underscores the radical nature of the undertaking: they are not adapting an existing large language model to a collaboration app; they are building the collaboration layer around a novel model designed for that purpose.
The startup’s ambition to own the coordination layer places it squarely in the most competitive and strategically vital segment of the productivity market. High-profile industry voices, notably LinkedIn founder Reid Hoffman, have championed the notion that the true leverage of AI is found in the coordination layer, rather than through isolated automation pilots. Hoffman argues that companies are failing to realize AI’s potential by neglecting its role in optimizing how teams share knowledge, manage meetings, and run complex, shared workflows. "AI lives at the workflow level," Hoffman asserted, "and the people closest to the work know where the friction actually is. They’re the ones who will discover what should be automated, compressed, or totally redesigned."
Humans& aims to inhabit this workflow nexus, creating a model-product hybrid that functions as the "connective tissue" across any organizational scale—from a multinational corporation to a small family unit. This connective tissue must possess the social acuity to model the skills, individual motivations, and immediate needs of every participant, and dynamically calculate how to balance those disparate factors for optimal collective outcome.
Achieving this level of organizational intelligence demands a significant departure from standard LLM training regimens. Yuchen He, a co-founder and former OpenAI researcher, detailed the unique methodology: the model will be trained in environments involving intensive, sustained collaboration between humans and multiple AI agents. The core technical pillars supporting this goal are Long-Horizon Reinforcement Learning (RL) and Multi-Agent RL.
Long-horizon RL is essential for moving beyond short-term memory and immediate response generation. It trains the model to plan, execute actions, track outcomes, revise strategies, and maintain coherence across temporally extended processes. In a real-world project setting, this means the model can remember the context and implications of a decision made during a kickoff meeting three months prior, and use that context to moderate a current-day disagreement about resource allocation.
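The credit-assignment problem at the heart of long-horizon RL can be illustrated with a toy experiment. In the sketch below (purely illustrative, and not Humans&'s actual training setup; the environment and all names are invented), an agent makes one binding choice at step 0 but receives its reward only after a long delay, so learning must propagate that delayed outcome back to the early decision:

```python
import random

# Toy chain MDP: the agent makes one binding choice at step 0
# ("A" or "B"); reward arrives only H steps later. Crediting
# that early action with the delayed payoff is the essence of
# long-horizon credit assignment.

H = 10          # horizon: reward is delayed by H steps
GAMMA = 0.99    # discount factor
ALPHA = 0.1     # learning rate

def run_episode(q, eps=0.1):
    """One episode with epsilon-greedy action selection."""
    a0 = random.choice(["A", "B"]) if random.random() < eps \
         else max(q, key=q.get)
    reward = 1.0 if a0 == "A" else 0.0   # payoff only at the end
    # Monte Carlo update: the discounted terminal return is
    # credited all the way back to the step-0 action.
    g = (GAMMA ** H) * reward
    q[a0] += ALPHA * (g - q[a0])
    return a0, reward

def train(episodes=2000, seed=0):
    random.seed(seed)
    q = {"A": 0.0, "B": 0.0}
    for _ in range(episodes):
        run_episode(q)
    return q

if __name__ == "__main__":
    print(train())  # q["A"] clearly exceeds q["B"]
```

After training, the value estimate for the early choice "A" approaches the discounted delayed reward, while "B" stays near zero: the agent has learned to act now for a payoff it will only see much later, which is the same structural problem as remembering a kickoff-meeting decision when mediating a dispute months down the line.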
Multi-Agent RL, conversely, simulates complex social environments where multiple independent entities (both human and AI) are interacting, competing, and cooperating. This training paradigm pushes the model to learn social dynamics, predict the actions of others, and strategically intervene to optimize the overall group utility function. He stressed that foundational to this capability is superior memory: "The model needs to remember things about itself, about you, and the better its memory, the better its user understanding." This persistent, relational memory is the bedrock of genuine social intelligence in an AI system.
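The dynamics of optimizing a shared group utility can be sketched with a classic coordination game: two independent learners each privately pick an option, and both are rewarded only when their choices match. This is a minimal, hedged illustration of multi-agent RL, not a description of Humans&'s architecture:

```python
import random

# Pure coordination game: two independent Q-learners each pick
# option 0 or 1; the shared group utility is 1 only when their
# choices match. Neither agent observes the other directly --
# coordination must emerge through the shared reward signal.

ALPHA, EPS, ROUNDS = 0.2, 0.2, 3000

def greedy(q):
    """Greedy action for a two-option value table."""
    return 0 if q[0] >= q[1] else 1

def train(seed=0):
    random.seed(seed)
    q1, q2 = [0.0, 0.0], [0.0, 0.0]
    for _ in range(ROUNDS):
        a1 = random.randrange(2) if random.random() < EPS else greedy(q1)
        a2 = random.randrange(2) if random.random() < EPS else greedy(q2)
        r = 1.0 if a1 == a2 else 0.0     # shared group utility
        q1[a1] += ALPHA * (r - q1[a1])   # each agent updates only
        q2[a2] += ALPHA * (r - q2[a2])   # its own action's value
    return q1, q2

if __name__ == "__main__":
    q1, q2 = train()
    print(greedy(q1), greedy(q2))  # the agents settle on a match
```

Even in this stripped-down setting, the agents converge on a common convention purely by optimizing the group's payoff; scaling the same principle to many heterogeneous human and AI participants, each with persistent memory of the others, is the far harder problem Humans& is targeting.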
The strategic landscape facing Humans& is fraught with both opportunity and existential risk. On the one hand, their architectural differentiation—a model built for social intelligence from the ground up—grants them a unique advantage. On the other, the endeavor requires staggering computational resources and continuous access to immense capital, forcing them into direct competition with the established hyperscale tech giants for compute infrastructure and top-tier talent.
The competitive threat extends beyond traditional collaboration software providers like Notion and Slack. The company is fundamentally challenging the operational paradigms of the "Top Dogs" of AI—the very companies its founders departed. These major players are aggressively integrating AI collaboration features into their existing ecosystems, utilizing their dominant distribution channels. Anthropic, for example, is developing Claude Cowork to optimize professional-style collaboration; Google has deeply embedded Gemini within its Workspace suite, leveraging existing user habits; and OpenAI is actively pushing multi-agent orchestration tools for developers.
However, the core distinction remains: none of these incumbents appears prepared to rebuild its foundation models around principles of social intelligence and multi-agent coordination. Their current strategies involve overlaying collaboration features onto models primarily optimized for single-turn dialogue or factual retrieval. This strategic gap either provides Humans& a critical window to establish market leadership or, conversely, makes the highly specialized founding team and their novel architectural approach an irresistible acquisition target for larger players seeking to instantly dominate the next wave of AI functionality.
In response to speculation regarding potential mergers and acquisitions, particularly given the aggressive talent acquisition strategies of companies like Meta, OpenAI, and DeepMind, Humans& has firmly stated its commitment to independence. CEO Eric Zelikman affirmed their long-term vision: "We believe this is going to be a generational company, and we think that this has the potential to fundamentally change the future of how we interact with these models."
The future impact of a successful coordination model transcends simple productivity gains. It heralds a shift toward true "meta-AI" systems—intelligent entities capable of managing, mediating, and optimizing the actions of both human and digital agents within a complex organizational web. This type of AI could mitigate the inherent human biases and cognitive limitations that plague collective decision-making, offering an objective, context-aware arbiter for achieving collective goals. By focusing on deep relational memory and long-horizon planning, Humans& is not merely building a better chatbot; they are attempting to engineer the operational substrate of future digital societies, ensuring that as AI agents become ubiquitous, they do so within a framework that enhances, rather than disrupts, complex human collaboration. If they succeed, the $480 million seed round will be viewed not as a high-water mark for early-stage funding, but as the foundational investment in the AI infrastructure that governs how humans and machines will truly work together.
