Issue 1. How AI Destroys More Than It Generates

Preface

Artificial Intelligence, or ‘AI’, is everywhere: from contained developments like OpenAI’s ChatGPT and DALL·E, to Google Search results via their Gemini model, to Bing’s Copilot, and even Adobe Stock images. Artificial Intelligence has long been an exciting technology with potential applications across many varied industries.[1] The AI we see today tends to be generative: models that generate content - typically text, images, or video - based on preexisting media and context.[2] In this article, we are going to cover modern examples of Artificial Intelligence being used in damaging ways, and explore how AI can be used correctly and safely.

I. Search Engines

On the sixth of December, 2023, Google introduced its new AI model, Gemini, into its search engine.[3] Gemini replaced the box that appeared with a summary of your answer and a link to its source with a more human-like answer, still providing the same information, but now able to take follow-up questions about the subject.[4]


Since its release, Gemini has been problematic because of the way it was trained. Google trained Gemini on websites; however, instead of sticking solely to informative sites such as wikiHow, Wikipedia, or WebMD, it also chose to use Reddit,[5] among others. The issue lies in the nature of Reddit itself, where anyone can make their own “subreddit” dedicated to a certain topic.[6] Some of these subreddits are certainly excellent sources of information,[7] but many are dedicated to humour,[8] full of sarcastic comments. This is our area of concern: sarcasm.

There is a common trope online of people responding to media they don’t like with extreme (though often disingenuous) hate, which produces comments that sound like genuine advice but, if actually followed, have consequences ranging from the merely annoying, like pressing Alt + F4, to the downright lethal.[9] Even when a comment isn’t sarcastic and stems from someone who simply doesn’t know better, misinformation, however innocent, can have the same effect.

This is where Gemini comes in. Because Gemini was trained on this data, it treats these answers as reasonable responses to the questions; AI can’t think - it can only go off basic signals such as frequency. As a result, what would have been factual summaries of sourced answers get swapped for harmful replacements such as:

“Yes, it’s sometimes necessary to keep shocking a patient until a shockable rhythm is achieved.”[10]

That’s not true. If somebody’s heart rhythm is not shockable, meaning a shock will not revert it to normal, then shocking it will likely have no positive effect.[11]

Or saying:
“Yes, sharks are significantly older than the moon”[12]

Or, even worse, suggesting that pregnant women should smoke two to three cigarettes a day.[13] I will leave it up to you to imagine the possible impacts of following that advice.

However, it’s important to recognise that this isn’t really Gemini’s fault. Google is trying to combine a generative language model, which simply tries to create text that seems like it could be human-made (no different from the suggested words above your keyboard), with factual information.[14] The problem is that Gemini doesn’t succeed every time at delivering answers that are both human-like and factual, which means it should have remained in development until it did.
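To make concrete the earlier point that these models go off basic signals such as frequency rather than understanding, here is a minimal toy sketch in Python. This is nothing like Gemini's actual architecture - it is just a bigram model over a made-up corpus - but it shows how a generator that only tracks which word most often follows another will happily repeat a popular joke as if it were advice:

```python
from collections import Counter, defaultdict

# Made-up corpus mixing a factual statement with a repeated joke answer,
# standing in for the informative and humorous subreddits discussed above.
corpus = [
    "glue is not safe to eat",
    "glue is great on pizza",  # sarcastic joke post
    "glue is great on pizza",  # jokes get reposted, boosting their frequency
    "cheese is great on pizza",
]

# Count how often each word follows another (bigram frequencies).
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1

def complete(word, length=5):
    """Continue a sentence by always picking the most frequent next word."""
    out = [word]
    for _ in range(length):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(complete("glue"))  # → glue is great on pizza
```

The joke answer wins purely because it was posted more often; nothing in the model can check whether glue belongs on pizza.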

II. Image Infiltration

If you’ve spent any time on Facebook, or even if you haven’t, you’re probably familiar with images like these: typically a young child from a developing country who has supposedly built something out of plastic bottles and other litter, often some big, incredible structure. To most people, it’s obvious that this is fake. There’s also a trope of animals and other objects arranging themselves into shapes resembling the western image of Jesus Christ - or even a mix of both.

You may look at these and think: how is this destructive? And that’s reasonable. They all seem to spread positive messages, whether it’s recycling or praising religion. At the end of the day, it brings happiness to older folk on Facebook who can’t recognise the difference, right? And you’re not entirely wrong. In small, closed cases, artificially made images like these are fairly harmless, but those cases are the exception.

Social media platforms have their own stereotypes, and unfortunately, the next one we’re going to look at isn’t as innocent as Facebook’s case of rampant AI: Twitter. Online, Twitter is known for being home to more unpopular and controversial opinions,[15] especially around American politics, ex-US President Trump and his cult-like MAGA following, and election season.

Trump, his supporters, and his colleagues have repeatedly used Artificial Intelligence models to create false images spreading propaganda about rival presidential candidates: picturing Vice President Kamala Harris as the leader of a communist party, despite her party being centrist, or at most slightly left-leaning, rather than far-left; or faking celebrity endorsements.


This is clearly working on his supporters, just as we’ve seen with gullible older people on Facebook, and these aren’t isolated incidents.[16]

This is where widespread, unregulated AI becomes dangerous.

But this isn’t limited to politics and free platforms. What about paid platforms? What about real art?

As other creators have pointed out, AI-generated art is beginning to appear in places meant for human-made art, such as competitions.[17] While the debate over whether AI art counts as art goes on, one thing is certain: although it has found purpose in places where the art is not the focus, it is seen as a huge insult to human artists, who put a lot of time into their pieces, while the AI creator types out a prompt and waits ten seconds.

I asked an artist friend of mine for their thoughts on AI in the art space, and this is what they said:

“It takes away from actual talent by being pushed so much in certain groups. The way AI art is made is fine; gathering different techniques and styles from different artists online is exactly what real artists do.”

“AI art is also fine to use recreationally, but pushing it so much, talking about how it's the future of art, is just insulting to actual artists who have spent years developing the specific skill sets to do these things.”

“It should be used more as an example of what computers and AI can achieve, rather than as a replacement of actual artists.”[18]

Additionally, platforms like Adobe Stock now host AI-generated photos,[19] which undermines the platform: creators who use it are paying for high-quality, limited pictures, which are being replaced with cheap alternatives, often riddled with visual discrepancies. It’s a failure to live up to their own marketing, and a let-down for their users.

III. Social Media Bots

Returning to the Facebook images, what did you notice about their titles? They share the same basic ingredients: a phrase implying the poster made it, worded differently in each post - or absent altogether; lots of heart and flower emojis, emphasising innocent goodwill to appeal to older users; and a call to action, like “like this post if you also love X” or “share if you agree with Y”. It’s a certain trend - and one not only seen on Facebook.

Let’s move over to Reddit. On Reddit, each time you post, comment, or share, you earn an amount of “karma”, showing how much you interact with the site. From this come so-called “karma farms”, where people set up bots to make posts with titles generic enough to garner many thousands of upvotes.[20] But why? There are multiple potential reasons, such as gathering data on users or training Artificial Intelligence models, but the most likely is simply gaining as much karma as possible. Creators of these bots treat it as a target: “How high can I get this number?”

Most of these act fairly harmlessly, just taking up space on your feed, where there could be genuine questions or posts deserving of being seen, made by people like you.

IV. Dead Internet Theory

This idea of algorithms posing as other users and plaguing websites certainly isn’t new,[21] and whilst measures like CAPTCHAs are in place, the bot owner only has to complete one once before letting the algorithm take the reins.
There exists a dystopian idea called the Dead Internet Theory, around for at least a decade, which states that the Internet consists almost entirely of false users interacting and communicating with other algorithms. When it was coined, it was just an idea, a possible dystopian future, without any thought of it becoming reality. Now, it reads more like a fulfilled prophecy.[21]
As we’ve seen throughout this article, false users can pose varying levels of “threat”, ranging from the harmless, or those used for research purposes, to the dangerous: propagating disinformation, giving harmful advice, and crowding out human creatives.

However, it's unfair to assume all Artificial Intelligence can only be used in harmful ways. For example, OpenAI, which I mentioned earlier, developers of leading text- and media-generating AIs, have been crafting their models for years, but have fine-tuned and advanced them solely on their own websites. If you want to use ChatGPT, you go to its website. If you want to use DALL·E, you go to its website.[22]

These closed developments, where the AI is also carefully restricted under regulations, show that a model of Artificial Intelligence can be developed without being destructive.

V. What now?

We’ve reached the end, but before I finish, you’re probably wondering what steps you can take to combat destructive uses of Artificial Intelligence. You can:
  • Advocate and petition for stricter regulations on AI;
  • Request to see less of this content when you spot posts made by Artificial Intelligence.
But the best thing to do is simply ignore it. Don’t interact with the posts, just scroll past, which tells the algorithms that it’s not content you’re interested in.[23] This earns the website less money, so eventually, they stop showing you the content. The creators of the AI also make less money, which is a nice bonus.
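The scroll-past advice works because interest-based feeds rank content by past engagement.[23] Here is a toy sketch of that idea in Python - a hypothetical scoring scheme, not any platform’s real algorithm, with all topic names and weights made up for illustration:

```python
from collections import defaultdict

# Hypothetical per-viewer interest scores, one per topic.
interest = defaultdict(float)

def interact(topic):
    interest[topic] += 1.0  # likes, comments, and shares signal interest

def scroll_past(topic):
    interest[topic] *= 0.9  # ignored posts slowly lose ranking weight

def rank(topics):
    """Order candidate posts by the viewer's accumulated interest."""
    return sorted(topics, key=lambda topic: interest[topic], reverse=True)

interact("gardening")
interact("gardening")
interact("ai_slop")
for _ in range(10):
    scroll_past("ai_slop")  # never engaging demotes the content over time

print(rank(["ai_slop", "gardening"]))  # → ['gardening', 'ai_slop']
```

Under this toy scheme, consistently scrolling past a topic drives its score towards zero, so the feed eventually stops surfacing it - which is exactly the behaviour the advice above relies on.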

Citations

Author(s), Title, Access Date, Quote*.

[1] T. Davenport and R. Kalakota (NCBI), The potential for artificial intelligence in healthcare, 2024-08-25, (AI) will increasingly be applied within the field. Several types of AI are already being employed by payers and providers of care, and life sciences companies.

[2] J. Peck (Search Engine Land), What is generative AI and how does it work?, 2024-08-25, models are trained to recognize patterns in data and then use these patterns to generate new, similar data.

[3] S. Pichai and D. Hassabis (Google), Introducing Gemini: our largest and most capable AI model, 2024-08-25, Dec 06, 2023.

[4] K. Purdy (Ars Technica), Google is “reimagining” search in “the Gemini era” with AI Overviews, 2024-08-25, “AI Overviews,” [...] provide summary answers to questions, along with links to sources.

[5] WEAK u/waazzaap (Reddit), Google using Reddit to train AI, 2024-08-25

[6] Reddit Inc (Reddit), What are communities or “subreddits”?, 2024-08-25, sub-communities within Reddit are also known as “subreddits” and are created and moderated by redditors like you.

[7] Examples include r/coolguides, r/damnthatsinteresting, r/firstaid

[8] Examples include r/sciencememes, r/funnymemes

[9] E. Konovalova (WBS), How social media platforms fuel extreme opinions and hate speech, 2024-08-25, facilitate the spread of controversy, conflict, and what is commonly termed ‘hate speech’.

[10] WEAK


[11] J. Yang, S. Tyagi, S. Rosen, K. Eng, M.F. Odish, R. Sell, J.R. Beitler (UC SDMC), Defibrillating Non-Shockable Rhythms During In-Hospital Arrest, 2024-08-25, induced rhythm change in one-third of cases

[12] WEAK 


[13] WEAK flori robin🏳️‍⚧ (X), oh google is like BROKEN broken, 2024-08-25

[14] see [4]

[15] A. Counts and E. Nakano (TIME), Harmful Content has Surged on Twitter, Keeping Advertisers Away, 2024-08-25


[17] K. Shepard (Kotaku), Pokémon Card Contest Disqualifies Fans For Allegedly Using AI Art, 2024-08-25

[18] WEAK

[19] WEAK u/Rudicinal (Reddit), Does anyone find it messed up that Adobe Stock is just littered with AI generated images?, 2024-08-25, Adobe Stock is just littered with AI.

[20] Theknightwho and Mynewfiles (Wiktionary), karma farm, 2024-08-25, To attempt to increase an account's Reddit karma as quickly as possible, in order to increase its apparent legitimacy from the perspective of other users.

[21] K. Tiffany (The Atlantic), Maybe You Missed It, but the Internet 'Died' Five Years Ago, 2024-08-25

[22] OpenAI (OpenAI), Developing beneficial AGI safely and responsibly, 2024-08-25, we prioritize the development of safe and beneficial AI.

[23] E. Siu (Single Grain), Understanding Social Media Algorithms: Why Feeds Favor Interests Over Friends, 2024-08-25, Interest-based social media algorithms look at what you've interacted with before to guess what you might like.


