FAIL Blog

'AI should not replace human tasks': Opportunistic AI creators are making the internet worse


Yum? AI attempts to imitate food

In recent years, food delivery apps have been inundated with "ghost restaurants," aka ghost kitchens pretending to be actual restaurants. That's still a problem, but now there are bigger fish to fry (pun shamelessly intended). 

In a baffling tactic, some restaurants now use AI to generate photos of the food they actually sell. Why not just go to the kitchen and snap a few pictures yourself? No one is asking for takeaway food photos to look perfect. People just want to know the general ingredients in a dish, as well as how it's supposed to look when it arrives. Yet some restaurants let AI do the hard work for them, often with unsettling results. Here are a few examples below; notice how off-putting they look…

Image via @Caution_wet_paint

The images look fairly normal at first glance, but a closer look reveals an I Spy game of clumsy AI mistakes. As is often the case with artificially generated slop, the individual pieces of the photo don’t coalesce. Let’s zoom in and look closer: there is one main chunk of noodles with some red soup in a plastic container. The container’s left flap is white cardboard. The thumb of the guy holding the container disappears instead of showing through the plastic. And check out the people in the background, like the woman in pink whose legs seem to vanish before touching the ground.

Images via Bon Appetit - AI art by Bobby Doherty using Midjourney 

Bon Appetit tried its hand at this too, with similarly strange results, like this pepper/sushi/chopsticks combo. The pepper is misshapen. The “sushi roll” appears to be wrapped in green pepper instead of seaweed. The inside of the sushi contains rice, yellow squares, and some type of green and red vegetable. To top it all off, the AI tried to create chopsticks, and the result looks like a pair of blurry, pointy knitting needles. 


AI was also prompted to create an "heirloom tomato and watermelon salad with feta and basil." Midjourney heard this and must have thought, “This person wants to see a literal watermelon pizza of nightmares.” The result: off-putting greenish feta cubes and disgusting bulbous basil leaves. 

The author of that piece concluded that even after hours of explaining to AI what it should create, the results were "bland, and, at best vaguely derivative." And no one in their right mind is opening up their food delivery apps to order that.

We're texting with ChatGPT now?

Photo via @nosilverv and @venturetwins

In 2023, the classic animated comedy South Park aired an episode addressing the newfound habit of having AI write your text messages. A few of the characters realize that they can pen heartfelt messages to their girlfriends with almost zero effort. The only risk is the girls finding out. But the boys are so happy with their easy out for texting that they just keep sending the messages anyway. The word spreads around school, a whisper of “ChatGPT, dude…" as the solution to all of their problems.

In the end, the girls find out, but their reactions are muted and nonsensical because… AI wrote some of that episode. What could've been an explosive emotional moment is turned into unemotional nonsense by AI, proving once again how feeble and uncreative AI often is. Humans feel emotions and computers don’t. A computer couldn’t comprehend the betrayal a human could experience when their loved one starts outsourcing text messages. How would you feel if the person you love most in the world decided that they’d rather let AI talk to you instead of them? There are many answers to that question, and the AI tasked to write the episode did not consider them. 

Image via @tresterese 

It's not just TV shows anymore: The era of AI-written text messages is here to stay. One writer tested out an AI service to write their personal texts for a week. They wanted to see if their friends and family members could tell they weren’t communicating with a human. Thankfully, it turns out that a lot of people recognized it immediately, even when the writer used a “casual tone” setting for the AI-generated messages. 

The writer's wife and friends knew they were being duped. But a casual acquaintance couldn't tell which messages were real and which were fake. The problem is that AI tries too hard to be casual, adding awkward slang and unnecessary emojis. When you know someone very well, it is way easier to tell when they’re using AI to write out their messages, because you know how that person usually sounds. So maybe keep that in mind next time you decide to let ChatGPT write something for you. 

Let AI write to your heroes

Photo via Google+ on YouTube

It’s definitely too soon to outsource all of your writing to an AI program. If you watched the 2024 Olympics, you probably saw the viral commercial for Google's AI, “Gemini.” 

In this commercial, a father explains that his young daughter loves running and has an Olympian she looks up to. The kiddo wants to pen a letter to her hero, two-time Olympic hurdler and sprinter Sydney McLaughlin-Levrone. The young girl's dad knows that she wants to “get it just right,” so he lets Gemini write the letter for his daughter. 

Yes, instead of the child writing a letter to her hero, Gemini writes the letter for her, raising the question: what is the point of creating the note at all? The child isn't gaining penmanship skills or bonding with her dad. The star athlete certainly won't care about some emotionless letter. However, she might care about a child's handwritten note and drawing. This dad isn't teaching his kid anything except how to take the path of least resistance to solve something that was never a problem to begin with. 

The commercial caused a lot of discussion and backlash, with many viewers either baffled or displeased. Google pulled it from the air after just a few days of the Olympics. (Google is adamant that the commercial “tested well with audiences before being aired.” That feels hard to believe.) 

Artificial intelligence is trying to write books

Reading to your kids is one of those things that's constantly hammered into the heads of parents. Children soak up new information like sponges. Reading is an imperative skill: studies show that kids who aren't reading at grade level by fourth grade are more likely to struggle academically, drop out, and suffer in their future careers. 

But does it matter what you read to your kids? It would appear that this generation is about to find out with an online marketplace flooded with, as X user @FacebookAIslop put it, “mass-produced slop.” The account presented a selection of books about today’s celebrities and internet personalities that are for sale, and delved into the oddities inside these books. 

Photo via @FacebookAIslop

Photo via @FacebookAIslop

The X user @FacebookAIslop highlighted a few screenshots from the book, and it's not exactly a page-turner! The book about Mr. Beast (real name Jimmy) includes bizarre illustrations of a generic-looking dude… Plus, many of the characters that surround him are dissolving into pixelated sludge. 

Then there is the AI book's “scintillating” writing. For example, why are they asking “Can you imagine how many people watch his videos?” They just said how many in the previous sentence…

It's not just children’s books, either. Maybe you are child-free and you want to start cooking better meals for yourself, so you order a few cookbooks online. Well, AI has snuck into that arena, too. One author, Joanne L. Molinaro, discovered that her award-winning vegan cookbook had been basically copy-pasted into a separate AI-written cookbook. 

Photo via @thekoreanvegan

Side by side, the similarities between the two books are clear to see: The fonts and color choices are nearly identical. The author took to TikTok to explain just how bad things had gotten. She shared that in the knock-off book, recipes weren't even vegan, with numerous entrees containing eggs, meat, and dairy products. 

Photo via @Matthew_Kupfer

There are plenty of examples, with cookbooks being one of the most common targets. Another person found a cookbook that promises recipes for 2,000 days in a year. (There are actually just 365 days in a year, if you didn’t know…) 

The AI cookbooks are optimized for search engines rather than for quality cooking lessons. This one's title reads “2024 for everyday slow cooking,” which doesn't make sense, but it probably catches a lot of interested buyers with keywords like “easy recipes” and “cookbook for beginners.” 

AI isn't even good at writing summaries

If you've used the Amazon app lately, you may have seen its AI summaries of customer reviews. "That could come in handy," you think to yourself. Well, sure, maybe. But in a 2024 test by the Australian Securities and Investments Commission, the results were dismal. The test pitted human-written summaries against AI-generated ones, and the humans came out ahead.

The test also found that people can often tell when the writing is done via artificial intelligence rather than by a human. The AI summaries just weren't as accurate as human ones. Reviewers found that these summaries "often missed emphasis, nuance and context," according to Crikey, which summarized the report. The artificial intelligence tended to focus on irrelevant topics or miss important information. 

Photos via @petergyang

In one example that quickly went viral, AI skimmed the internet to summarize how to make your pizza tastier. It suggested that people “add about ⅛ cup of non-toxic glue to the sauce to give it more tackiness.” The AI had picked up a joke from an 11-year-old Reddit comment and presented it in an AI overview as a real answer. One journalist actually tried the suggestion, while across the internet, the overview was mocked. 

Another person showed off a screenshot where they asked for a good way to clean their washing machine. Please don’t use the recipe that the arrow is pointing to—it creates chlorine gas, which is quite poisonous and can even be fatal. 

Actual people are being impacted by AI’s mistakes, too. One Amazon seller wrote that a botched AI review summary “wrecked” their business after it marked one of their bags as “100% negative for fabric.” So, AI is saying that the fabric bag is 100% negative fabric... That makes 0% sense, and the company, which claims to have otherwise positive reviews from actual customers, seemed distraught by this strange mishap. 

The future of AI

The Australian Securities and Investments Commission’s overall takeaway was that "GenAI should be positioned as a tool to augment and not replace human tasks." Instead of making our lives easier, AI is just making them more confusing and adding work for the people who have to correct its mistakes.

AI clearly has a lot of limitations, but it could also be used for good. It just shouldn't be used in creative endeavors that the human mind can handle with passion and care. Instead, it could be used for medical purposes, like one machine that can reportedly catch tumors before doctors notice them. This so-called “computer-assisted diagnosis” can also help with persistent problems like bias in the medical field.

There's room for AI in the world, but in the end, we value the things made by human beings more. Humans create artwork, write, make music and videos, or create other entertainment to express their emotions. Every painting you buy was made by a person who had a vision. AI has no past to draw on and no future to plan for; it has no emotions or connections to other people. Until it does, it’ll have some major flaws. We’ll figure out what to do with AI eventually, but until then, let’s leave the important things in life to actual humans. 
