A January 2026 study from Omnisend reports that consumer trust in online reviews remains high, even as separate research details the widespread presence of fake and AI-generated content, often termed 'AI slop'. This analysis of recent reports highlights a complex dynamic in consumer sentiment: reliance on user-generated feedback persists alongside growing skepticism about the authenticity of online content.
The core issue revolves around a reported disconnect between consumer trust and the reality of the digital marketplace. While a significant majority of shoppers depend on reviews to make purchasing decisions, platforms are simultaneously battling a large volume of inauthentic content. According to an article from digitalcommerce360.com, this creates "a kind of loop where people are overwhelmingly skeptical of AI, yet still depend on content that AI can easily manipulate." This situation presents an ongoing challenge for both e-commerce platforms and consumers navigating the product discovery process.
What We Know So Far
- A study from Omnisend conducted in January 2026 found that 84% of Americans trust online product reviews, with 33% reporting they trust them more than they did two years prior.
- Separate research from Capital One, published in March, indicates that 30% of all online reviews are considered 'fake or ungenuine', according to digitalcommerce360.com.
- The same Capital One report found that 82% of consumers had encountered what they believed to be a fake review at least once during the past year.
- E-commerce platform Amazon reported that it blocked or removed more than 275 million fake reviews from its site in 2024 alone.
- To combat inauthentic reviews, Amazon also reported spending over $500 million and employing 8,000 people in 2024.
Recent Reports on AI-Generated Content and Consumer Sentiment
Recent data presents a nuanced picture of consumer attitudes toward online reviews. The Omnisend study from January 2026 is a key data point, reporting that 84% of American consumers trust online product reviews. The study further specified that a third of these consumers feel their trust in reviews has actually increased over the last two years. This suggests a continued, and in some cases growing, reliance on peer feedback as a crucial part of the online shopping experience. One source quoted by digitalcommerce360.com suggests a potential reason: "In the age of AI, people are naturally turning to other people for reassurance."
However, this high level of reported trust exists alongside data indicating a significant problem with review authenticity. Research from Capital One, as reported by digitalcommerce360.com, found that an average of 30% of online reviews are 'fake or ungenuine'. That research also noted that the volume of fake reviews is reportedly growing 12.1% faster than the total number of all online reviews. This highlights the scale and growth rate of the challenge that platforms and consumers face when trying to distinguish authentic feedback from manufactured content, including AI-generated 'slop'.
What is 'AI Slop' and Its Effect on Review Authenticity?
While the term 'AI slop' refers generally to low-quality, mass-produced AI content, its effect on review authenticity is seen in the large-scale efforts required to combat it. The actions of major e-commerce platforms provide a clear metric for the scope of the problem. For example, Amazon's efforts in 2024 underscore the resources being dedicated to maintaining the integrity of review systems. The company stated it blocked or removed over 275 million fake reviews during that year.
This initiative was backed by a substantial investment of money and personnel. Amazon reported spending more than $500 million and dedicating a team of 8,000 employees specifically to fighting fake reviews and other forms of fraud. These figures, reported by digitalcommerce360.com, illustrate the operational scale required to police user-generated content and preserve a trustworthy environment for consumers. The sheer volume of removed reviews also demonstrates the persistence of bad actors seeking to manipulate consumer perception through inauthentic feedback, a challenge that platforms must continuously address.