Is this an era where foolishness seems to be thriving?

Walking into the MIT Media Lab in Cambridge, Massachusetts, feels like stepping a bit closer to the future. Glass cases line the space, showcasing prototypes of strange and marvelous inventions—from miniature desktop robots to a surreal sculpture designed by an AI asked to imagine a tea set made of body parts. In the lobby, an AI assistant named Oscar helps visitors sort their trash, telling them where to toss a used coffee cup.

Up on the fifth floor, research scientist Nataliya Kosmyna has been developing wearable brain-computer interfaces. Her goal is to one day help people who’ve lost the ability to speak—due to conditions like ALS—communicate using only their thoughts.

Much of Kosmyna’s work involves analyzing brain activity. She’s also designing a wearable device—one version looks like a pair of glasses—that can detect when someone is confused or losing focus. About two years ago, she started getting unexpected emails from strangers who said that after using large language models like ChatGPT, they felt their minds had changed. Their memory didn’t seem as sharp—was that even possible, they asked?

Kosmyna herself had noticed how quickly people were embracing generative AI. She saw colleagues using ChatGPT at work, and applications from researchers hoping to join her team began to look different. Their emails were longer and more formal. Sometimes, during Zoom interviews, she noticed candidates pausing before answering and glancing away. Were they getting help from AI? The thought shocked her. And if they were, how much did they really understand of the answers they gave?

Curious, Kosmyna and some MIT colleagues set up an experiment. They used EEG scans to monitor brain activity while participants wrote essays—some with no help, some using a search engine, and others using ChatGPT. The results showed that the more outside help people received, the lower their brain connectivity. Those using ChatGPT displayed significantly less activity in areas linked to thinking, attention, and creativity.

In short, while users might have felt engaged, the brain scans told a different story: not much was happening up there.

After turning in their essays, participants—all students from MIT or nearby schools—were asked if they could recall what they had written. “Barely anyone in the ChatGPT group could quote their own work,” Kosmyna says. “That’s concerning—you just wrote it, and you remember nothing.”

Kosmyna, 35, is stylishly dressed in a blue shirt dress and a large, colorful necklace. She speaks faster than most people can think. As she points out, writing an essay requires skills we use every day: synthesizing information, weighing different viewpoints, and building an argument. “How are you going to handle a conversation?” she asks. “Will you have to say, ‘Uh…can I check my phone?’”

The study was small—just 54 participants—and hasn’t yet been peer-reviewed. Still, in June, Kosmyna posted it online, thinking other researchers might find it interesting. She had no idea it would spark an international media storm.

Along with interview requests, she received over 4,000 emails from around the world. Many came from stressed-out teachers worried that students relying on ChatGPT for homework aren’t really learning. They fear AI is creating a generation that can produce acceptable work but lacks true understanding of the material.

The core issue, Kosmyna explains, is that once a technology makes life easier, we’re wired by evolution to use it. Our brains naturally love shortcuts, but they actually need challenges to learn effectively. They require a certain amount of friction to grow.

It’s interesting that while our minds need this resistance, we instinctively avoid it. Technology, on the other hand, promises a “frictionless” experience, ensuring we glide effortlessly from one app or screen to the next without any obstacles. This seamless interaction is why we so readily hand over information and tasks to our devices. It explains why we easily get lost in endless online content and struggle to pull ourselves out. It’s also why generative AI has so quickly become a staple in many people’s daily routines.

From our shared experiences, we know that once you get used to the hyper-efficiency of the digital world, the real world—with all its friction—feels more difficult to navigate. So, you might avoid phone calls, use self-checkout lanes, and order everything through an app. You might grab your phone to solve a math problem you could have done mentally, look up a fact instead of recalling it, or rely on Google Maps to guide you from point A to B without thinking. Perhaps you’ve stopped reading books because maintaining focus feels like too much effort, or you dream of owning a self-driving car. Is this the beginning of what writer and education expert Daisy Christodoulou calls a “stupidogenic society”—similar to an environment that promotes obesity, but one that makes it easy to become intellectually lazy because machines do the thinking for you?

Human intelligence is too diverse to simply label as “stupid,” but there are concerning signs that our digital conveniences are taking a toll. In developed OECD countries, Pisa scores—which assess reading, math, and science skills in 15-year-olds—peaked around 2012. While IQ scores rose globally throughout the 20th century, likely due to better education and nutrition, they now seem to be declining in many developed nations.

The debate over falling test and IQ scores is heated. What’s harder to deny is that with each technological advancement, we become more reliant on digital devices and find it increasingly difficult to work, remember, think, or even function without them. “It’s only software developers and drug dealers who call people users,” Kosmyna points out with frustration, highlighting the rush by AI companies to push their products onto the public before we fully grasp the psychological and cognitive consequences.

In the ever-growing, frictionless online realm, you are primarily a user: passive and dependent. As we enter an era of AI-generated misinformation and deepfakes, how will we hold onto the skepticism and independent thinking we’ll need? By the time we realize our minds are no longer fully our own and we can’t think clearly without tech, how much of our own willpower will remain to push back?

If you express concern about what intelligent machines are doing to our brains, you might be laughed at in the near future as old-fashioned. Socrates once worried that writing would weaken memory and foster a shallow understanding—a “conceit of wisdom” rather than true wisdom. This argument echoes many modern critiques of AI. However, writing and subsequent technologies like the printing press, mass media, and the internet actually gave more people access to more information. This allowed more individuals to develop and share great ideas, making us smarter and more innovative both individually and as a society.

After all, writing didn’t just change how we access and store information; it transformed how we think. With a notebook and pen, a person can tackle more complex tasks than with memory alone.

Using AI, by contrast, often results in bland, unimaginative, and factually questionable work. One issue is the “anchoring effect”: when you ask a generative AI a question, its answer can lock your thinking into a specific path, making you less open to other ideas. As one expert explains, “Take a candle, for example. AI can help you improve it—making it brighter, longer-lasting, cheaper, and more attractive—but it won’t lead to inventing the lightbulb.” To make that leap, you need human critical thinking, which can be messy, unstructured, and unpredictable. When companies introduce tools like the chatbot Copilot without proper training, they risk creating teams of mediocre candle-makers in a world that needs high-efficiency lightbulbs.

Another concern involves students. Adults who use AI as a shortcut have at least benefited from an education system that existed before computers could do their homework; today’s students have no such foundation. A recent British survey found that 92% of university students use AI, and about 20% have used it to write all or part of an assignment. This raises questions about how much they’re actually learning. Are schools and universities still fostering creative, original thinkers who can build smarter societies, or are they producing mindless, gullible drones who rely on AI to write essays?

A few years ago, Matt Miles, a psychology teacher at a Virginia high school, attended a training program on technology in schools. The instructors showed a video of a student caught using her phone in class, who then claimed she was researching with a water expert from Botswana. “It’s laughable. The kids all laugh when they see it,” Miles says. Concerned by the gap between policymakers’ views and classroom reality, he and his colleague Joe Clement wrote “Screen Schooled” in 2017, arguing that too much technology is making children less intelligent. Since then, they’ve banned smartphones in their classrooms, though students still use laptops. As one student insightfully noted, “If you see me on my phone, there’s a 0% chance I’m being productive. On my laptop, it’s 50%.”

Before the pandemic, many teachers were rightly skeptical about adding more technology to classrooms, according to researcher Faith Boninger. But when lockdowns forced schools online, tools like Google Workspace for Education, Kahoot!, and Zearn became commonplace. With the rise of generative AI, there were new promises of revolutionizing education through personalized learning and reduced teacher workloads. However, nearly all research supporting these benefits is funded by the ed-tech industry, while most independent, large-scale studies show that screen time hinders achievement. For instance, an OECD global study found that increased tech use in schools correlates with worse student results.

“There is simply no independent evidence at scale for the effectiveness of these tools,” says Wayne Holmes, a professor at University College London. “In essence, we’re experimenting on children with these technologies.” Imagine walking into a bar and being offered a new drug by a stranger who claims it’s great for your health. Would you just start taking it without question? We usually demand that our medicines undergo rigorous testing and are prescribed by professionals. Yet when it comes to educational technology, which is supposed to be so beneficial for children’s developing minds, we suddenly drop those standards.

Miles and Clement are concerned not just about their students being constantly distracted by devices, but also that they’re missing out on developing critical thinking and deep understanding when answers are always a quick search away. Clement recalls a time when he’d pose a question like, “Where do you think the U.S. ranks in GDP per capita?” and guide the class through figuring it out. Now, someone has already looked it up online before he finishes asking. Students regularly use ChatGPT and get frustrated if assignments aren’t provided digitally, forcing them to type questions instead of copying and pasting into an AI or search engine.

“Finding the right answer through Google isn’t the same as having knowledge,” Clement points out. “And knowledge is crucial because it allows you to question something that sounds off or fake. Without it, you might read a flat Earth blog and think, ‘That makes sense,’ because you lack the background to know better.” He worries that the internet is already flooded with conspiracy theories and misinformation, a problem that will only grow as AI generates convincing but false content, and young people aren’t prepared to handle it.

During the pandemic, Miles found his young son crying over his school tablet. The boy was stuck on a math problem that asked him to make the number six using the fewest tokens of one, three, and five. He kept trying two threes, but the computer rejected it. Miles suggested one and five, which worked. “That’s the kind of issue you run into with non-human AI,” Miles notes, explaining that students often think in creative ways that machines can’t accommodate.
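The arithmetic behind that exercise shows why the software was wrong to reject the boy’s answer: both two threes and a one plus a five reach six with the minimum of two tokens. A minimal sketch (the function name `minimal_token_sums` is hypothetical, not from any real grading product) that enumerates all tying minimal solutions:

```python
from itertools import combinations_with_replacement

def minimal_token_sums(target, tokens):
    """Return every combination of `tokens` that sums to `target`
    using the fewest possible tokens (all ties at the minimum count)."""
    for count in range(1, target + 1):
        combos = [c for c in combinations_with_replacement(tokens, count)
                  if sum(c) == target]
        if combos:
            return combos  # first non-empty count is the minimum
    return []

# Both of the boy's candidate answers are equally minimal:
print(minimal_token_sums(6, [1, 3, 5]))  # → [(1, 5), (3, 3)]
```

A grader that hard-codes a single “correct” combination, rather than checking the token count, rejects valid answers exactly as the tablet did.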

But hearing this story, I was struck by another concern: perhaps the real threat isn’t submitting to super-intelligent machines, but handing over control to ones that aren’t so smart.

Frequently Asked Questions

General Definition Questions

1. What does an “era of thriving foolishness” even mean?
It means it feels like irrational ideas, poor judgment, and blatantly false information are becoming more common, visible, and even rewarded in public life, online, and sometimes in leadership.

2. Isn’t foolishness just a part of human nature? Hasn’t it always existed?
Yes, foolishness has always been part of human history. The feeling that it’s thriving now comes from how quickly and widely it can spread through technology and media, making it more visible and impactful than ever before.

3. What’s the difference between foolishness and ignorance?
Ignorance is a lack of knowledge or information. Foolishness is when you have access to the right information but choose to ignore it, act against your own best interest, or believe in things despite overwhelming evidence to the contrary.

Causes Drivers

4. Why does it feel like foolishness is so widespread today?
A few key reasons:
Social media algorithms: they often promote engaging, outrageous content over calm, rational facts.
Information overload: it’s hard for people to sort fact from fiction in the digital age.
Tribalism: people often align with ideas that fit their group identity, even if those ideas are foolish, to feel a sense of belonging.
Decline of trust: waning trust in traditional institutions creates a vacuum where misinformation can thrive.

5. Is the internet to blame for this?
The internet isn’t the root cause, but it acts as a massive amplifier and accelerator. It allows foolish ideas to find a global audience and like-minded communities instantly, which wasn’t possible before.

Examples Manifestations

6. Can you give me a clear example of thriving foolishness?
Examples include the rapid spread of dangerous health misinformation, conspiracy theories gaining mainstream traction, or people making life-altering financial decisions based on viral social media trends without any research.

7. How does this show up in everyday life?
You might see it when a coworker believes a debunked viral story, a family member