

    YouTube Systemically Harvesting Views with Junk Content, Largest-Ever Crowdsourced Study Reveals

    Everyone has been served weird recommendations by YouTube’s algorithm on occasion, but just how bad can it get? Much worse than expected, new crowdsourced research published by Mozilla reveals, calling the platform’s credibility into question as it continues to feed users hate speech, political extremism and conspiracy-laden disinformation.

    The crowdsourced study — the largest ever into YouTube’s recommender algorithm — drew on data from more than 37,000 YouTube users who installed a browser extension (called RegretsReporter) that lets users self-report YouTube videos they “regret” watching. A subset of 1,162 volunteers from 91 countries submitted reports between July 2020 and May 2021, flagging 3,362 regrettable videos that the report draws on directly. Curious about the key insights from the study? Let’s dive in:

    • Triggering users to keep eyeballs stuck to ad inventory. As per the report, YouTube’s AI has powered piles of “bottom-feeding”, low-grade, divisive and disinforming content. This includes videos spreading COVID-19 fear-mongering, political misinformation and “wildly inappropriate” children’s cartoons. The research finds that by triggering people’s sense of outrage, sowing division and polarization, or spreading baseless and harmful disinformation, the platform is systemically harvesting views.
    • A dysfunctional algorithm violating its own rules. A substantial majority (71%) of the regret reports came from videos that had been recommended by YouTube’s algorithm itself, underscoring the AI’s starring role in pushing junk in front of people’s eyeballs. Recommended videos were 40% more likely to be reported by the volunteers than videos they’d searched for themselves, as per the report. Mozilla even found “several” instances in which the recommender algorithm put content in front of users that violated YouTube’s own community guidelines and/or was unrelated to the previous video watched.
    • “Falling down the YouTube rabbit hole.” The report invokes this familiar metaphor, used to describe unsuspecting internet users being dragged into the darkest and nastiest corners of the web. Back in 2017 — when concern was riding high about online terrorism and the proliferation of ISIS content on social media — politicians in Europe were accusing YouTube’s algorithm of automating radicalization. However, it has remained difficult to find hard data to back up anecdotal reports of individual YouTube users being “radicalized” after viewing hours of extremist content or conspiracy theory junk on Google’s platform.
    • Bring in the clicks. A particularly stark metric: reported regrets acquired a full 70% more views per day than other videos watched by the volunteers on the platform. This finding lends weight to the argument that YouTube’s engagement-optimising algorithms disproportionately select for triggering or misinforming content over quality (thoughtful, informative) material, simply because the former tends to perform better on the platform.
    • Problematic lack of transparency around how YouTube functions. Mozilla found that around 9% of recommended regrets (almost 200 videos) had since been taken down — for a variety of not always clear reasons (sometimes, presumably, after the content was reported and judged by YouTube to have violated its guidelines). Collectively, just this subset of videos had racked up a total of 160 million views prior to removal.
    • Google meeting YouTube criticism with “inertia and opacity.” Critics have long accused YouTube’s ad giant parent of profiting off engagement generated by hateful outrage and harmful disinformation. The claims include Google allowing “AI-generated bubbles of hate” to expose unsuspecting YouTube users to increasingly unpleasant and extremist views, even as Google shields its low-grade content business under a user-generated-content umbrella.

    In response to the report, Google said it welcomes research into YouTube — and suggested it is exploring options to bring in external researchers to study the platform, without offering anything concrete on that front. Watch this space for further updates on the subject and keep following our Marketing Bites for the latest industry trends.