
via @crystalynn_8

There are details about this video that inspire confusion and awe. Her pained yet bored facial expressions, the way she says “ooh, burns” after applying hand sanitizer. Then there’s the ASMR of her metal jewelry bumping against the glass, her long, beautiful nails, and her complete command over her material environment. Every time the video circulates on Twitter, which it has done regularly since it was first posted in 2023, users are unusually moved by this simple drink-mixing clip. @nxd1979 writes, “when she opens a redbull with another redbull....that's art.” @chrismunetonn writes, “I want to log this on Letterboxd and give it 5 stars.” We can’t put our finger on why it’s so captivating; we just know that it is. This is the beauty of human creation. There’s a human element of absurdity and randomness that keeps us on the edge of our seats.

Sure, there is plenty of absurdity and randomness in AI-generated content, like when a hand suspiciously has two extra fingers or words look almost decipherable but turn out to be gibberish on closer inspection. But we know why these mistakes happen. They’re a misfire in the program, not a genuine outpouring of one woman’s idiosyncrasies. While AI videos can grab our attention, they will never replicate the human element that makes something worth watching: the imperfections, the absurdity, and the knowledge that it comes from somewhere real.

via @houstonbone

After watching Crystalynn’s video, we’re left with so many questions. Where is she? Who is she? Why is she adding so many syrups to a drink that is already pretty sweet? What was in the cup before she drank the last of it? What does that drink taste like? Does she do this every day? Why did she open one can of Red Bull with another can of Red Bull? Should I be doing that? When you know something is AI-generated, the “why” behind the creation is nullified. The intention of the “artist” can usually be boiled down to, “I wrote a prompt and the program gave me this video.” The intention behind the prompt may vary (incite discourse, test the limits of the technology, generate engagement), but the result is the same flattened content. We know that though it may have started with a human idea, the result is something that looks flashy but holds no real meaning and betrays no greater context.

via @BrianRoemmele

Take, for example, this recent video, which garnered over 14 million views on Twitter and features an AI model named “Michelle.” We’re introduced to her in a series of vignettes replicating the average celebrity’s press tour. She’s interviewed, models for “photos,” and giggles to the camera, all the while making quippy remarks about her “creator.” The “why” of this video is unclear, beyond showing off the technology and perhaps imagining a future where the celebrities on our screens are actually stored on a server somewhere. But the only thing that’s compelling about this “woman” is the horrifying thought that a man, alone with his computer, dreamed her up to his exact specifications. Comments ranged from “Oh man this is so cringe” to “This should terrify everyone who sees it.” You stick around to watch the video the way you might rubberneck at a highway accident. But it isn’t compelling. You don’t wish for more. This is why, despite having 14 million views, it has only 7,900 likes. As impressive as it is, we have a marked distaste for it.

Part of the reason we’re compelled by Crystalynn’s video (which has almost 200k likes on TikTok) is that she is a real woman with a real backstory. Not one we may ever know the details of, but one that we know exists. She has had a whole life that has led up to this moment, with heartbreak and joy and mundanity, just like us. We can sense the warmth of another person through the screen. Sure, what we’re seeing is only a representation of her in pixels and sound, but we know that the genuine article is somewhere out there. This matters. It matters that we know what we’re seeing is an expression of the human experience, not a manufactured hallucination. Programs can’t make mistakes. There might be human errors in code or hardware malfunctions that result in mistake-like outcomes, but those are caused by external factors, not by a machine following some errant whim of its motherboard. Humans make mistakes because we can’t help it. This might seem like a slight semantic difference, but it’s a difference that’s going to matter more and more as these programs become increasingly “intelligent.”

We might see a world (likely sooner than we expect) where AI will be able to convincingly generate a video just like Crystalynn’s. It will hit all the same notes, inspire the same questions, and stop you mid-scroll. You’ll be 100% fooled and wonder, “Who is this woman and what is her story?”, never knowing that she’s not actually a woman but lines of code stored on a server. But so far it seems like we’ll always be able to tell the difference. Or at the very least we’ll remember the source material, and we’ll return to watch her open a can of Red Bull with another Red Bull one more time.
