The evolution of digital writing assistants has followed a predictable trajectory, moving from simple spell-checkers to sophisticated grammar engines, and eventually to the current era of generative artificial intelligence. However, the latest frontier in this technological progression has sparked a profound debate regarding the boundaries of intellectual property, the definition of expertise, and the ethics of digital mimicry. In August 2025, Grammarly, a dominant force in the writing-enhancement market, introduced a suite of AI-powered features that included a provocative tool known as "Expert Review." While the name suggests a high-level human intervention, the reality is a complex layer of algorithmic distillation that attempts to channel the perspectives of world-renowned thinkers, authors, and journalists—often without their knowledge or consent.

The "Expert Review" feature, integrated into the sidebar of Grammarly’s primary writing interface, allows users to solicit feedback on their drafts through the lens of specific "subject matter experts." Rather than offering generic advice on clarity or conciseness, the tool purports to offer revisions from the "perspective" of elite writers. The rollout has raised eyebrows across the media landscape, not because of its technical prowess, but because of the specific names it invokes. Users seeking to sharpen their prose are no longer just receiving suggestions from a faceless algorithm; they are being told how to write like Casey Newton, how to "leverage anecdotes for reader alignment" in the style of Kara Swisher, or how to "pose bigger accountability questions" like researcher Timnit Gebru.

This shift represents a significant departure from traditional AI assistance. Where previous iterations of Grammarly focused on the mechanics of language—the "how" of writing—Expert Review attempts to tackle the "who" and the "why." By invoking the names of living, working journalists and scholars, Grammarly has waded into the murky waters of digital ventriloquism. The platform is essentially using the reputations and stylistic signatures of prominent intellectuals as a marketing hook, a move that has drawn sharp criticism from the very individuals being emulated.

The core of the controversy lies in the lack of an opt-in mechanism or partnership agreement. Prominent publications such as The New York Times, Bloomberg, Wired, and The Verge have found their staff’s professional identities packaged as AI personas. In response to inquiries regarding the ethical implications of this feature, Alex Gay, the vice president of product and corporate marketing at Superhuman (Grammarly’s parent company), defended the move by noting that these experts are referenced because their "published works are publicly available and widely cited." This justification—that public availability equates to a license for algorithmic replication—is a cornerstone of the current generative AI boom, yet it remains one of the most legally and ethically contested principles in the tech industry.

To mitigate potential legal fallout, Grammarly’s user guide includes a carefully worded disclaimer. It states that references to experts are for "informational purposes only" and do not indicate "any affiliation with Grammarly or endorsement by those individuals or entities." From a legal standpoint, this may provide a thin veil of protection under doctrines such as nominative fair use, but from a professional and journalistic perspective, the optics are far more damaging. It raises a fundamental question: In what sense is a review "expert" if the expert in question was never involved in the process?

Historians and media critics have been quick to point out the linguistic sleight of hand at play. As historian C.E. Aubin noted, these are not expert reviews in any traditional sense because there are no experts involved in producing them. Instead, the tool provides a statistical approximation of a style—a synthetic heuristic based on patterns found in an individual’s past work. This distinction is critical. Expertise is not merely a collection of stylistic tics or recurring vocabulary; it is the product of lived experience, nuanced judgment, and a deep understanding of context. By reducing a career’s worth of intellectual labor to a "perspective" that can be toggled in a sidebar, Grammarly risks devaluing the very expertise it seeks to emulate.

The implications for the journalism industry are particularly acute. A journalist’s "voice" is their primary asset—the unique combination of tone, investigative rigor, and ethical framework that builds trust with a dedicated audience. When an AI company takes that voice and uses it to train a model that provides "advice" to others, it is essentially commodifying a person’s professional identity. If a user can simply ask an AI to "write this like a New York Times investigative reporter," the intrinsic value of the actual reporter’s byline is subtly undermined. This is not just a matter of copyright; it is a matter of the "Right of Publicity," a legal doctrine that protects individuals against the unauthorized use of their name or likeness for commercial purposes.

Furthermore, the "Expert Review" feature highlights the ongoing "black box" problem of AI training data. The fact that Grammarly can mimic specific tech journalists suggests that its models were trained on vast archives of contemporary digital media. While the tech industry has long treated the internet as a free resource for training data, a growing movement of creators and publishers is pushing back, demanding compensation and control over how their work is used to build competing products. Grammarly’s move to explicitly name these experts—rather than just mimicking them quietly in the background—brings this tension into the light.

Beyond the ethical and legal concerns, there is the question of utility. Does an AI-generated suggestion to "add ethical context like Casey Newton" actually help a writer, or does it merely encourage a superficial mimicry of popular pundits? Writing is an act of thinking, and by outsourcing the "perspective" of a piece to an AI agent, the writer may be bypassing the most important part of the creative process: developing their own unique viewpoint. There is a risk that this technology will lead to a homogenization of digital content, where everyone is writing in a synthesized version of a few "top-tier" styles, leading to a feedback loop of derivative prose.

The broader tech industry is watching this experiment closely. We are currently seeing a shift from general-purpose AI tools—products that can do a little bit of everything—to specialized AI agents designed for specific professional tasks. Grammarly’s August 2025 update was framed as a move toward these specialized agents. If the market accepts the use of unauthorized personas in writing assistants, it won’t be long before we see similar features in other fields: "Legal Review" agents that mimic the judicial philosophy of famous judges, or "Strategic Planning" agents that channel the ghosts of legendary CEOs.

However, the backlash suggests that we may have reached a tipping point regarding the "permissionless" era of AI development. The reaction from the journalistic community has been one of weary frustration. For many, it feels like yet another instance of a multi-billion dollar tech company building a product on the backs of content creators without offering a seat at the table. The "disappointing" nature of the feature, as some have described it, stems from the realization that the technology is being used not to empower the experts, but to replace the need for them—or at least to sell a cheap imitation of their value.

As we look toward the future, the "Expert Review" controversy serves as a microcosm of the larger struggle to define the human role in an AI-saturated world. If Grammarly wants to provide true expert reviews, the path forward involves partnership, not appropriation. This would look like a marketplace where experts are compensated for training specific models or where users can pay for access to "verified" AI personas that have been vetted and approved by the original authors. Such a model would respect intellectual property while still leveraging the power of generative AI.

Until such a balance is struck, Grammarly’s latest feature remains a provocative example of "digital ventriloquism." It offers a glimpse of a world where style is a commodity and expertise is an algorithm. For now, the "experts" in Grammarly’s Expert Review are little more than ghosts in the machine—statistical echoes of writers who are still very much alive, still writing, and still waiting to be asked for their permission. The technology may be impressive, but without the actual humans involved, it lacks the soul, the accountability, and the genuine insight that true expertise requires. In the rush to automate the writing process, we must be careful not to automate away the very individuals who make writing worth reading in the first place.
