The FTC has finalized a rule prohibiting fake reviews, including those generated by AI. The rule, effective 60 days after its Federal Register publication, targets deceptive practices such as fake endorsements, AI-generated reviews, and review suppression. Violations could result in fines of up to $51,744 per incident.
The U.S. Federal Trade Commission (FTC) has finalized a rule to combat fake reviews, including those generated by AI. This rule, approved by a unanimous 5-to-0 vote, will take effect 60 days after being published in the Federal Register. The rule aims to tackle deceptive practices like fake reviews and endorsements, which have become widespread in online marketplaces.
The new rule bans fake reviews, including AI-generated ones, paid reviews, and any review written by someone without genuine experience with the product. Businesses are also prohibited from buying or selling reviews, manipulating review platforms, and using legal or physical threats to suppress negative reviews. The rule further restricts company insiders from giving undisclosed testimonials.
Violations of the rule can result in fines of up to $51,744 per fake review, though courts will determine penalties based on the specifics of each case. Enforcement is expected to improve the reliability of online reviews by making it harder for companies to engage in deceptive practices.
The FTC’s journey to this rule began with an advance notice of proposed rulemaking in November 2022. The rule was formally proposed on June 30, 2023, and has now been finalized. This effort is part of the FTC’s broader mission to protect consumers from deceptive practices in the digital age, particularly amid the rise of generative AI.
Do you think these penalties are enough to deter fake reviews?
Each week we select the most important sector news and statistics so that you can stay up to speed.