Showing posts with label reviews. Show all posts

Tuesday, 1 October 2024

FTC Reaches Final Ruling on Fake Reviews and Spam Social Proof


After what at least feels like an eternity of wondering why it hadn’t already happened, the Federal Trade Commission (FTC) has finally clamped down on false feedback. Specifically, new rules are set to be introduced later this year that will make fake customer reviews and testimonials illegal.

This includes (but isn’t limited to) common practices such as:

· Buying Reviews – Something most self-respecting businesses wouldn’t even consider, but which research suggests is rife.

· Misrepresenting Company-Controlled Review Websites as Independent – An all-too-common practice that misleads people into buying into highly biased, one-sided and ultimately false information.

· Fake Indicators of Social Media Influence – This includes the sale, purchase or distribution of anything that could mislead the public as to your status on social media.

According to the FTC, civil penalties will be imposed on anyone found to be in breach of any of the above.

But given the potential scope of the issue as it exists today, it seems almost impossible that every brand engaged in practices like these will implement compliant policies within a matter of weeks.

What Does ‘Fake Indicators of Social Media Influence’ Mean?

This is perhaps going to be the most challenging issue to police properly. While paying for positive reviews is fairly commonplace, the number of social media accounts using artificial inflation to boost their profiles is incalculable.

From the smallest brands to the biggest businesses to influencers and even politicians, it’s no secret that much (if not most) of their influence is often attributable to bots.

And it’s not difficult to understand why. A quick online search returns hundreds (if not thousands) of sketchy sellers from around the world, offering everything from Facebook followers to TikTok views to custom-written reviews.

According to the FTC, the official rules will apply to “any metrics used by the public to make assessments of an individual’s or entity’s social media influence, such as followers, friends, connections, subscribers, views, plays, likes, reposts and comments” which are not genuine reflections of the opinions and experiences of real people.

Influencers Under Increased Scrutiny

All of which represents yet another attempt to crack down on fake engagement and artificial inflation of key social media metrics. The message for brands and businesses is clear – don’t attempt to illegally ‘buy’ your way to social media fame and fortune.

But it’s not quite as simple as this – at least not for brands that work closely with high-profile influencers. As a general rule of thumb, the larger an influencer’s audience, the higher the likelihood that a proportion (potentially large) of their follower base is made up of bots. And by associating yourself with them (and perhaps having their followers directly or indirectly endorse you), any dubious dealings on their part could reflect badly on you.

For the time being, no such rules or regulations exist in most other major markets – short of the policies of the platforms themselves. Either way, it should be seen as an important wake-up call for any businesses still relying on purchased social proof.

You might be getting away with it for now, but you’ll eventually find yourself in the regulatory crosshairs.

Friday, 20 October 2023

Google Takes Aim at AI-Generated Reviews


In a recent announcement via its Merchant Centre, Google has made one thing clear: it is tightening its grip on AI-generated content. In this instance, by targeting reviews produced by artificial intelligence.

Under the newly introduced section in Google's Product Ratings policies named "Automated Content," a statement reads, "We don't allow reviews that are primarily generated by an automated program or artificial intelligence application. If you have identified such content, it should be marked as spam in your feed using the <is_spam> attribute."
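In practice, that means setting the boolean `<is_spam>` attribute on the offending entry in your product ratings feed. The fragment below is purely illustrative: only `<is_spam>` comes from Google's statement, and the surrounding element names are assumptions rather than Google's exact schema.

```xml
<!-- Hypothetical product-ratings feed entry; element names other than
     <is_spam> are illustrative, not Google's exact schema. -->
<review>
  <review_id>review-12345</review_id>
  <content>Amazing product, five stars, best purchase ever!!!</content>
  <!-- Marks this review as automated/AI-generated spam under the new policy -->
  <is_spam>true</is_spam>
</review>
```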

While this move emphasises the importance of authentic user-generated content, it also places a new responsibility on businesses and website owners to identify and flag AI-generated reviews.

The catch? Google doesn't provide explicit guidelines on detecting AI-generated content. This lack of clarity is significant, considering that even the most advanced AI content detectors have their limitations.

Understanding the Challenge

Identifying AI-generated content can be tricky because AI algorithms are becoming increasingly capable of mimicking human language and behaviour. They can instantly create reviews that look and sound genuine at first glance.

Google's decision to place the responsibility on website owners stems from the fact that AI-generated content can distort consumers' perceptions and trust in reviews, ultimately impacting purchasing decisions.

The Challenge for Website Owners

The absence of specific guidelines from Google means that website owners must rely on their judgment to spot AI-generated reviews.

While there's no foolproof method, certain indicators can help you separate the genuine from the artificial:

1. Check for Overly Positive or Negative Language: AI-generated reviews often exaggerate emotions. Look for extreme positivity or negativity that seems out of place. Genuine reviews tend to be more balanced.

2. Language Quality: AI-generated content may contain subtle grammar errors or unnatural phrasing. Pay attention to the overall language quality of the review.

3. Inconsistent Reviewer Profiles: If a reviewer has a suspiciously high number of reviews or if their profile is incomplete, it might be a red flag. Genuine reviewers typically have a more varied history.

4. Duplicate Content: AI-generated reviews may appear across different products or websites with minor variations. Perform a quick search to see if the same review appears elsewhere.

5. Review Timing: AI-generated reviews might be posted in quick succession or during non-peak hours when real users are less active. Look for unusual posting patterns.

6. Review Length and Detail: Genuine reviews often provide specific details about the product or service. Beware of overly brief, vague, or overly detailed reviews that don't seem natural.

7. Inconsistent Reviewer Behaviour: Watch for reviews that contradict a reviewer's previous sentiments or those that seem disconnected from their previous reviewing history.

8. Engage with Reviewers: Responding to reviews can help you gauge their authenticity. AI-generated reviewers are unlikely to engage in meaningful conversations.
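Some of these checks lend themselves to simple automation. The sketch below – assuming reviews arrive as dicts with hypothetical `text` and `posted_at` fields – implements checks 4 and 5 (duplicate content and unusual posting bursts); the window and threshold values are illustrative, not recommendations.

```python
from collections import Counter
from datetime import datetime, timedelta

def normalise(text):
    """Lowercase and strip punctuation so near-identical reviews compare equal."""
    cleaned = "".join(ch for ch in text.lower() if ch.isalnum() or ch.isspace())
    return tuple(cleaned.split())

def find_duplicates(reviews):
    """Check 4: flag reviews whose normalised text appears more than once."""
    counts = Counter(normalise(r["text"]) for r in reviews)
    return [r for r in reviews if counts[normalise(r["text"])] > 1]

def find_bursts(reviews, window=timedelta(minutes=10), threshold=3):
    """Check 5: flag runs of `threshold` or more reviews posted within `window`."""
    times = sorted(datetime.fromisoformat(r["posted_at"]) for r in reviews)
    bursts = []
    for i in range(len(times) - threshold + 1):
        if times[i + threshold - 1] - times[i] <= window:
            bursts.append((times[i], times[i + threshold - 1]))
    return bursts
```

Neither check is conclusive on its own – they are triage tools for deciding which reviews deserve a closer human look.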

Google's move to clamp down on AI-generated reviews reflects the world’s collective concerns about the authenticity and reliability of online content. Fake reviews are in the firing line today, but it is entirely likely that a more extensive policy covering other types of AI-generated content will follow in the not-too-distant future.