What the Actual Heck Is AI? Does It Matter That We Know?

We can’t get away from the topic of artificial intelligence (AI), can we? It’s in the news. It’s in our social media feeds. It seems like every day one of our favorite apps is announcing a new AI integration. As I spent many nights researching AI (and falling down rabbit holes until 3 a.m.) to write a series of articles on the topic, I wondered: How many people know what AI is? As it turns out, not many. AI will inevitably be part of daily life whether we like it or not, so yes, understanding what it is and how it works matters, because its impact on our society and environment is too great to ignore.

AI is going to do wonders in our society. My colleagues and I are seeing firsthand how it’s improving the way we work (we’ll share how our team uses AI in a future article). But we also acknowledge the harm it’s causing in Black and brown communities and its contribution to technocolonialism. Companies are rushing to integrate AI into their workflows to stay competitive without investing in learning about the tool and its implications for society, and in doing so they miss the opportunity to hold AI developers accountable and advocate for more equitable technology.

We can’t push tech companies to improve AI without first understanding what it is. It’s okay if your familiarity with AI is limited. After all, AI only fully made it into the mainstream lexicon when DALL·E 2 and ChatGPT launched in 2022. Although I’d love to delve into statistical algorithms and deep-learning neural networks as I attempt to explain AI (yeah, I’m a geek), I’m going to keep this simple.

What AI isn’t.

First, let’s talk about what AI isn’t. It’s not C-3PO in Star Wars. It’s not Data or Zora in the Star Trek franchise. It’s not Cortana in Halo or Janet in The Good Place. These are pop-culture examples of AGI (artificial general intelligence): “self-aware,” autonomous systems with intellectual capabilities that match, and even surpass, human capacity. AGI is a vision that some AI computer scientists are working towards but are very far (far) from attaining. It makes me chuckle that they believe they will achieve in a few years what took more than half a billion years of evolution to create: the human brain. We can all have peace of mind knowing we won’t be forced to serve as human batteries for The Machines in a simulated reality anytime soon. For the foreseeable future, AGI will remain science fiction.

AI is also, well, not AI. Artificial intelligence is a misnomer for what the technology is. There’s nothing artificial about it: the algorithms are made by humans, the data comes from humans, and the content it generates combines the work of humans. There’s also nothing intelligent about AI, not in the human sense: it isn’t capable of critical thinking, emotional processing, contextualization or the countless other functions our brains (and bodies) perform. We must separate the technology from our anthropomorphization of it if we want to have an accurate understanding of AI and work towards making the tool better for people and our planet.

What AI is.

Now that we’ve gotten that out of the way, let’s talk about what AI is. Many people think of Generative AI tools like ChatGPT when we talk about AI, but those are relatively new. AI has been around for years. Search engine results, voice assistants like Siri and Alexa, recommendations on Netflix and Spotify, facial recognition, customer service chatbots and the infamous social media algorithms all use AI.

Although much of what the public knows about AI programs is limited due to the proprietary nature of these technologies, the underlying principle is the same: algorithms, through a process known as machine learning, analyze massive amounts of data and make statistical predictions based on the patterns found in that data. Add deep learning, and you have a robust program that can process multiple datasets and patterns simultaneously and predict results with greater accuracy. Simply put, AI is a very fast pattern detector and predictor.
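
To make that idea concrete, here’s a minimal sketch in Python. It uses scikit-learn’s decision tree as a stand-in for “pattern detector and predictor”; the scenario and numbers are invented for illustration, and this is not how any specific product works.

```python
# A toy version of the loop described above: learn patterns from labeled data,
# then predict on new data. The data here is made up for illustration.
from sklearn.tree import DecisionTreeClassifier

# "Data": hours of daylight and temperature for a handful of days (features),
# and whether ice cream sold out that day (labels). Real systems use millions
# of examples, but the principle is the same.
features = [[8, 35], [9, 40], [14, 75], [15, 80], [13, 70], [10, 45]]
labels = ["no", "no", "yes", "yes", "yes", "no"]

model = DecisionTreeClassifier()
model.fit(features, labels)        # find the patterns in the data

print(model.predict([[14, 78]]))   # predict for a new day -> likely ["yes"]
```

The model isn’t “thinking” about ice cream; it’s drawing statistical boundaries in the numbers it was given, and that’s all the prediction amounts to.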

Its usefulness is also quite narrow. AI systems are made to solve specific problems; there isn’t one system that can do it all. Such a technological feat would take a level of programming and resources that isn’t feasible at present. And before anyone jumps to conclusions about its autonomy, no, AI won’t be able to “deep learn” itself into becoming a multitasking program that can solve every problem in every facet of society. Even sci-fi can’t give you something like that; it would make for the shortest novel in the world: “Once upon a time, there was an AGI that could solve all problems, and humans lived happily ever after (or were exterminated to save the planet). Fin.”

This leads me to the next point about AI …

AI needs humans to function.

An algorithmic system alone does not power the AI that we’re using more and more in our day-to-day lives. What most people don’t know is that AI’s machine-learning models require people behind the scenes to label and moderate data to make sure that the predictions are as accurate as possible. This continuous feedback loop is what makes AI good at determining what we’re asking and how it should respond.
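
Here’s a minimal, hypothetical sketch of that feedback loop, continuing the toy example from earlier. The function and variable names are mine, not from any real product: human reviewers supply corrected labels, the corrections are folded back into the training data, and the model is retrained.

```python
# Sketch of a human-in-the-loop retraining step. Data is assumed to be
# lists of (features, label) pairs; the corrected labels come from people.
from sklearn.tree import DecisionTreeClassifier

def retrain_with_feedback(training_data, human_corrections):
    """Fold human-reviewed examples back into the training set and refit."""
    combined = training_data + human_corrections       # humans supply the "ground truth"
    features = [x for x, _ in combined]
    labels = [y for _, y in combined]
    model = DecisionTreeClassifier()
    model.fit(features, labels)                        # relearn the patterns with the fixes included
    return model
```

Every improvement in the model’s answers traces back to someone, somewhere, deciding what the right answer was.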

And if you have an image in your head of people in nice offices labeling pieces of content and checking yes/no boxes on a screen while sipping on a Monster, that’d be far from reality. AI systems need tons of data, and with that comes cheap labor. The tech giants investing in AI pay third-party companies in countries with weaker economies to employ people to label and moderate data and results. Many of these workers make no more than $10 a day. What’s worse, they’re not provided with mental health resources to cope with the trauma of reviewing the disturbing content they’re tasked with labeling so that AI can learn rules about what is essentially good or bad.

The Hidden Labor Behind ChatGPT: A Talk by Award-Winning Journalist Karen Hao.

This is one of the dark sides of AI that doesn’t get talked about often, and it’s something I now think about every time I open Midjourney, ChatGPT or Gemini.

AI requires energy … lots of it.

AI, particularly Generative AI, uses many servers to compute massive amounts of data. These physical servers need significant amounts of electricity to run; by some estimates, more than a small country consumes. With more data coming in, more servers are added to process it, requiring even more electricity for AI to function properly. Compound that with increasing public demand for more AI tools, and it’s not hard to see why AI has the potential to exacerbate our climate crisis if it continues unchecked.

I hope that, having reached the end of this article, you have a better understanding of AI and its implications. There’s much to contemplate about how we approach the use of AI in our organizations and personal lives, and I encourage everyone to use their voice (and wallet) to push tech companies to make better, more responsible AI products that benefit EVERYONE.

For further reading, check out What is AI: Additional Resources.

Carlos Centeno

Associate Creative Director

Carlos is a multidisciplinary designer with 15+ years of experience in strategy, branding and UX. His work includes brand identities, advertising, packaging, corporate collateral and websites for clients across various sectors. He’s passionate about helping brands connect with all audiences through a culture-first approach, ensuring that every piece of communication represents and speaks to diverse perspectives.
