When I sat down to watch Bad Influence: The Dark Side of Kidfluencing on Netflix, I didn’t expect to be shocked. Most of the issues it raised—exploitative parents, the pressure on child creators, the mental toll of algorithm-chasing—were things many of us have known about for years. But one detail hit differently: the calculated use of bot views to manipulate YouTube’s algorithm and drive organic traffic.

This wasn’t just a shady backdoor tactic by desperate creators. According to the series, it’s a deliberate, strategic play used by major kidfluencer channels. And here’s the twist: YouTube knows it’s happening.

A decade ago, when I was experimenting with my own YouTube channel, I naively assumed the system was more intelligent than me. I ran some test uploads, toyed with boosting views artificially, and was quickly met with YouTube’s detection systems, which clearly indicated what percentage of my views were organic. That moment cemented the idea in my head: YouTube had this under control. It was a fair playing field.

I couldn’t have been more wrong.

From Simplicity to Strategy: How Content Creation Has Changed

Back then, content creation felt intuitive. You had an idea, filmed a video, uploaded it, and hoped it resonated. Today, the ecosystem is a labyrinth of SEO tricks, retention-rate tactics, cross-platform promotions, thumbnail optimisation, and now—strategic fake views. As someone from a tech background, I never imagined I’d feel overwhelmed by making content. But the game has evolved far beyond creativity. It’s a system of engineered manipulation.

The most disturbing part of this evolution isn’t just how creators are gaming the system. It’s how YouTube appears to tolerate it. Creators purchase bot views to make a video appear popular, which in turn nudges the algorithm to serve it to real users. These genuine views are monetised. The bots are simply the bait. And the platform, rather than shutting this behaviour down entirely, seems content to look the other way.

YouTube Knows – And That’s the Real Issue

Let’s be clear: YouTube isn’t unaware. Its detection tools are highly sophisticated, and its analytics offer detailed breakdowns of traffic sources. The platform can identify fake engagement. So the question isn’t “Can they stop it?” It’s “Why don’t they?”
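To see how basic the underlying signal is, here is a toy sketch of the kind of check a platform could run over a traffic-source breakdown. To be clear, this is a purely hypothetical illustration: the field names, categories, and threshold are mine, not YouTube's, and real detection systems are vastly more sophisticated. The point is only that the raw signal, a video whose views come overwhelmingly from non-organic sources, is trivially easy to spot.

```python
# Toy sketch of flagging suspicious view inflation from a traffic-source
# breakdown. All labels and thresholds are hypothetical illustrations,
# NOT YouTube's actual detection logic.

def organic_ratio(traffic_sources: dict[str, int]) -> float:
    """Fraction of views from sources a platform would typically
    treat as organic (search, suggested videos, browse features)."""
    organic_keys = {"search", "suggested", "browse"}  # hypothetical labels
    total = sum(traffic_sources.values())
    if total == 0:
        return 0.0
    organic = sum(v for k, v in traffic_sources.items() if k in organic_keys)
    return organic / total

def looks_inflated(traffic_sources: dict[str, int],
                   min_organic: float = 0.3) -> bool:
    """Flag a video whose organic share falls below a crude threshold --
    a stand-in for far more sophisticated platform-side checks."""
    return organic_ratio(traffic_sources) < min_organic

# A video dominated by 'external' hits (the pattern bot-driven traffic
# tends to leave) trips the flag; a normally distributed one does not.
botted = {"external": 90_000, "search": 5_000, "suggested": 5_000}
normal = {"search": 40_000, "suggested": 35_000,
          "browse": 15_000, "external": 10_000}
```

If a hobbyist can write this in twenty lines, a platform with detailed per-video analytics plainly has the capability; which is exactly why the question shifts from ability to willingness.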

There are a few uncomfortable possibilities:

  • Advertising Revenue Rules All: As long as real people eventually watch and adverts are served, the origin of the views doesn’t affect YouTube’s profit margins.
  • Maintaining the Illusion of Viral Culture: Viral videos keep the platform engaging. Whether the popularity is organic or manufactured is secondary.
  • Too Big to Moderate Properly: Monitoring every view count manually is neither scalable nor financially appealing.
  • Creator Mythology: YouTube thrives on the idea that anyone can “make it”. Fake views help sustain that fantasy.

Are We Watching What We Think We’re Watching?

This isn’t merely a technical loophole. It’s an ethical concern. Especially when children are involved, and when family channels effectively turn childhood into a monetisable brand, we must question how such content gains visibility.

If YouTube’s algorithm rewards manipulation, we’re not being served the best content. We’re being served the most calculated. And often, the most exploitative. That has serious implications for media literacy, consumer trust, and the wellbeing of young creators.

The Myth of a Level Playing Field

Ten years ago, I genuinely believed that quality content would rise on its own merits. Now, it’s painfully obvious that meritocracy on platforms like YouTube is largely an illusion. Success isn’t born from creativity alone—it comes from knowing how to work the system without getting caught.

And if the system is quietly enabling that behaviour, maybe it’s time we stop pretending it’s on our side.
