Grammarly, the popular writing-assistance tool, is facing a class-action lawsuit over allegedly attributing AI-generated editing suggestions to prominent writers without their consent. The suit, filed in U.S. District Court, alleges that Superhuman, Grammarly's parent company, violated publicity rights by creating "deepfake" editor personas, including those of journalist Julia Angwin (the lead plaintiff), author Stephen King, and the late bell hooks.
How the System Worked
Grammarly's paid service allowed users to receive editing advice supposedly from well-known figures. For instance, a user might see a suggestion attributed to "Julia Angwin" recommending specific writing techniques. The lawsuit alleges that these suggestions were generated by AI, not the actual writers, and were used to enhance the tool's credibility. According to the complaint, the company never sought permission from any of the individuals whose identities were deployed in this manner.
Why This Matters
The case hinges on century-old right-of-publicity laws, which protect an individual’s name and likeness from commercial exploitation without consent. At least 25 states have such laws. The lawsuit argues that Superhuman profited by falsely implying endorsement from respected voices, misleading customers into believing they were receiving direct insight from those figures.
The case highlights how AI-driven platforms can blur the line between genuine expertise and synthetic attribution. It also underscores a notable feature of the legal landscape: although AI poses new challenges, existing laws can still address unauthorized commercial use of personal identity. The plaintiffs are not calling for new legislation; they are leveraging established rights to hold the company accountable.
The Bottom Line
The lawsuit could become an early legal benchmark for AI-related identity misappropriation. If the plaintiffs prevail, the case would demonstrate that even without AI-specific regulations, existing right-of-publicity laws can protect individuals from corporate misuse of their public personas. The outcome will likely influence how tech companies obtain consent and disclose AI-generated content in the future.