JUST LOOK AT WHAT IT CAN DO:
According to Google, this thing can "understand" short story prompts and bring them to life. You write a paragraph, and it directs a believable scene. It even lets you control the camera angles, add or remove objects, and style-match reference photos. This isn’t just content generation - it’s automated filmmaking with a side of Black Mirror.
And yeah, I get the upside. There are tons of exciting creative applications here. Indie filmmakers, animators, ad agencies, YouTubers: they all just got handed a cheat code. Google even built a tool called Flow, designed for creatives to manage characters, locations, and styles like ingredients in a narrative kitchen.
That’s all great in theory. But here’s the thing: I’ve seen what the internet does with power. And I’ve seen what the internet believes. Y’all already get fooled by the most obvious AI trash. I’ve watched people argue over whether a photo of “a five-year-old who made a perfect statue of Jesus out of asparagus” is real. I’ve seen AI-generated cooking videos of cats in chef hats that somehow rack up millions of views from people who think it’s wholesome and not horrifying.
So now we’re giving those same people a tool that can create fake news videos? Deepfakes with ambient sound and lip-synced dialogue? Hyper-realistic simulations of politicians, celebrities, or even just regular people saying things they never said?
This is what AI video looked like in 2023:
Two years ago, we were all laughing at Will Smith eating spaghetti in AI-generated purgatory. It was glitchy, gross, and hilarious. Now we’re generating convincing digital humans with expressions, timing, and shadows that obey real-world physics. I don’t know how to say this gently, but this is not normal.
And it’s not just about fakes. It’s about volume. These tools don’t just make one deepfake; they enable a torrent of them. Think misinformation, but on an industrial scale. Think scams, hoaxes, fake apologies, fake endorsements, fake evidence. Think a future where your friend sends you a video of Beyoncé endorsing their new skincare brand and you have to squint and say, “Wait, is that real?” And then spend ten minutes Googling. And then still not be sure.
Even worse? Our brains aren’t built for this. Evolution didn’t prep us to watch a video of someone speaking clearly and then say, “Hmm, that’s probably AI.” We’re hardwired to trust our eyes and ears. That’s what makes video so powerful, and so dangerous. A video isn’t just content. It’s evidence. At least, it used to be.
And look, I’m not trying to be anti-tech here. I love technology. I’m the guy who gets excited about robot dog races and new OLED displays and smart fridges that judge your snack habits. But I’m also the guy who used to believe that behind-the-scenes DVD commentaries were sacred, and now we live in a world where characters can be AI-resurrected for a sequel they never filmed. There’s a difference between progress and chaos. And we’re teetering.
I'm more worried than impressed, and I'm extremely impressed
What scares me most isn’t the tool - it’s the culture. We’re not ready. We haven’t built the guardrails. We don’t have good media literacy. We don’t even have moderators who can keep up with TikTok comment sections, let alone AI-generated video floods. And yet, here we are, pressing the gas pedal because the tech is “cool.”
It is cool. It’s magic. It’s miraculous. But so is nuclear fusion - and we don’t hand that out with a Gemini Ultra subscription.
So yeah. I’m panicking a little. Not because Veo 3 is evil, but because the world it’s entering is incredibly unprepared. If we're not careful, the silent film era of AI is going to look like a utopian golden age compared to what comes next.
And honestly? I’m not ready.