AI is making you dumber! Before you jump to the comment section with your flamethrower or pom-poms, it’s a nuanced situation.
MIT is reporting that 95% of companies that have adopted AI are seeing ZERO financial benefit on their balance sheets, while a different MIT brain-scan study showed significantly hampered learning, with AI users performing worse at the neural, linguistic, and scoring levels.
Microsoft released a study showing that using AI reduces critical thinking and creativity … especially for anyone who trusts AI without having years of experience working without it.
This is not a Doomer anti-AI video. This is a warning to avoid automating the very things that will make you effective and powerful in the marketplace.
As a professor of Visual Communication, I see significant implications for my students. The fact of the matter is that the more they use AI while they are learning, the less self-confidence and expertise they will have. In my opinion, using tools that automate your learning bakes redundancy into you as a student and makes you less skilled, less effective, and less marketable in the industry. Let’s dive into it.
[Researchers] surveyed 319 knowledge workers, analyzing 936 first-hand examples of using GenAI at work. They found that higher confidence in GenAI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking. [PDF download of the study]
As a significant portion of knowledge work shifts towards critical thinking, information verification, response integration, and task stewardship… it is essential that students get their reps in. Young professionals without years of experience doing the mundane tasks that are now being automated lack the critical thinking and intuition that come from doing those tasks.
The study states, “a key irony of automation is that by mechanising routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise.”
When I pointed this out to one of my PR colleagues, he said, “That situation turns those one-off problems into system killers. Everything will grind to a halt because nobody is around who can figure out what to do with the exception.”
Imagine an athlete whose job is to be strong and fast while reacting instantly to a changing environment. They get paid the big bucks because they can be relied on to do incredible things during exceptional moments.
Take a wide receiver in football, for example. If we came to practice and saw that this receiver had automated the mundane repetition of training by having a robot run his routes and catch the ball for him during practice while he sat in a lawn chair with a lemonade, we would laugh at the absurdity of it all.
Because we know instinctively that his job is to catch the difficult passes during the game when it matters and that by outsourcing all of the mundane reps that happen during practice, he just doesn’t have enough experience to perform his job well. His muscles and mind will have withered to the point of poor performance because he automated the repetitive boring parts of his job.
The same thing happens to a young creative who uses generative AI to create an illustration instead of doing hundreds of hours of life drawing and filling dozens of sketchbooks with observational and constructive drawing practice. That young creative won’t have the mundane reps that would’ve trained their instincts and taste to be able to tell whether an image will be effective or not. They won’t have the thousands of hours of time on tools to edit, alter, fix, and manipulate the image to meet the creative brief. They will not have a past filled with bad drawings and the subsequent critiques of how to improve those bad drawings. They will not have done any master studies, mark-making experiments, or style exploration. None of this is a moral judgement about AI; it is simply a fact that skipping the experience of training deprives you of the results of that training.
I can’t think of a single credible brand that would be OK putting their marketing in the hands of someone who cannot differentiate between an image that matches their brand guidelines and enhances their messaging, and an image that will alienate their customers and hurt their reputation… no matter how quickly and cheaply that image can be generated.
I have a colleague in the math department who teaches advanced calculus and still makes his students write out long differential equations by hand … not because they are going to do this in the industry, but because by not doing it in school, they will have no idea what they’re talking about when they get into the industry, where that type of thing is automated. They need the expertise to check the validity, accuracy, and integrity of the information that has been automated and to understand WHY things are the way they are. Without the mundane reps, they are simply not prepared to function in the industry, regardless of whether the industry is using an abacus, a TI-86, or AI.
Let’s get back to the study. Researchers also found that users with access to generative AI tools produce a less diverse set of outcomes for the same task compared to those without, and they state that this can be interpreted as a deterioration of critical thinking. In a thought-based economy, less diversity of thinking is a huge detriment.
The use of generative AI also creates blind spots and self-doubt. Some participants expressed self-doubt in their ability to perform simple tasks, like verifying grammar, without the use of AI, leading them to accept the GenAI outputs without question. If there is a more apt description of somebody who will soon lose their job to automation, I don’t know what it is. The paper explains that GenAI tools create obstacles to knowledge workers even recognizing the need for critical thinking… this puts workers at a disadvantage because they are literally unaware of where they might need to improve to continue to be gainfully employed.
Recently, Anthropic, an AI development company, banned the use of AI by its job applicants, stating that they “want to evaluate your non-AI-assisted communication skills.” They got made fun of enough that they sort of reversed that decision. Kind of.
Tech.co reports that Apple, Samsung, Verizon, a number of Wall Street banks, and a bunch of countries have banned the use of AI.
Meanwhile, Orgvue reports that 55% of companies regret replacing employees with AI.
And Goldman Sachs released a report speculating that AI is a bubble, with an MIT professor estimating that AI will impact fewer than 5% of tasks…
On the other hand, many people with a financial stake in the success of AI are predicting that it’ll replace most jobs, become incredibly profitable, and leave us all needing universal basic income to survive.
While we don’t know whether AI is the next Industrial Revolution, forever changing every industry landscape, or the second coming of the dot-com bubble, what we do currently know is that using AI is not a shortcut to expertise and has been shown to decrease critical thinking.
If you want to level up your skills, knowledge, and abilities, I recommend you avoid sabotaging yourself by using AI in your education. Just like spending hours playing a plastic guitar in Rock Band doesn’t get you any closer to playing an actual guitar, using generative AI to write or draw something for you doesn’t make you better at writing or art. In fact, this study shows that it makes you worse at it.
This is why I have my students do the work themselves. My goal as a teacher is to create an environment where students become powerful agents who act for themselves. I want them to have high confidence in their own thinking and ability. Confident, skilled, critically thinking people can switch tools whenever they deem it necessary, while someone wholly reliant on a tool is controlled by that tool. I encourage my students to trust themselves and distrust AI so that we don’t end up with a bunch of hammers running around swinging carpenters.
While I do believe that some version of AI is here to stay, I don’t buy the hype that AI is our inevitable overlord and all is lost. We’ll still need humans to be human. If everyone abdicates their intelligence, hard work, and agency to the machines, there won’t be anyone left to handle those exceptional situations that need a skilled, critically thinking, creative person. And this study shows a direct link between trusting AI and diminished critical thinking. In other words, putting our confidence in AI is hurting our students.
I’m Cory Kerr, and I wrote and performed this essay and video myself. Go make stuff.