10 Comments
Sonya Louise

I struggle with brain fog due to a chronic illness. I also struggle with how people might judge me for using ChatGPT as a thought partner. At this point in my work, I've decided that my creative and productive output is worth any criticism. I use AI the way you've described it here, and I find it levels the playing field for me. I no longer feel hampered by my illness, as AI has aptly streamlined access to the areas of my brain that have proven difficult to reach these last few years. I am personally grateful for this powerful tool and fully intend to leverage its usefulness in harnessing my personal genius. Thank you for this important framing.

Darren Boey

I do feel we’re heading into a world where educators need to rethink how they teach students to engage effectively and enhance their learning. If anything, a values-based approach to learning, rather than a results-based one, will reward students more. Teaching them curiosity and the value of learning as a practice, rather than emphasizing outputs, will put them in the right mindset to engage with AI tools. The student who asks the right questions will always find the right answers to what they want; AI will just help them get there quicker.

ALS

I want to echo what Sonya said. Using ChatGPT, especially for strategy but really for any use case that supports a person facing certain challenges, whether it’s a disability, lack of resources, not being able to hire a team, or unequal access to education, has been a game changer for many, including myself.

Some think of AI as the great equalizer (I love Sonya’s phrase about how it “levels the playing field”). That threatens those who believe the traditional way to learn, improve, and/or make money is the only way; they are most likely unable to make full use of AI, being stuck in their ways or perhaps not as capable with it, and that creates emotional and office-political turmoil. AI is a tool, not a replacement for your own brain; that should be the MIT study’s takeaway. If someone cannot understand the power of AI, why this is a good direction for people who need this kind of tool to enhance their functioning, and how it creates more equality for marginalized groups, then that person may need to take a deep look at themselves and ask, “Why am I so resistant to something that’s going to change the world whether I like it or not?” We should be preparing for this era, not trying to go backwards.

However, I do believe better safety measures could be implemented for businesses that need regulation, and figuring out how to reduce the energy AI draws from our environment should be a priority.

Luan Doan

I totally agree, this study isn’t really about work, it’s more about the mindset behind how we use AI.

That said, I do think it’s worth thinking about. I’ve caught myself sometimes relying too much on AI outputs when I’m learning or researching, just accepting what it gives me without really questioning or digging deeper.

Feels like we all need to build some habits or ground rules around how we use AI, so we’re using it as a power tool, not leaning on it like a crutch.

Jurgen Appelo

Good points. Also, I'm not really sure a sample of 54 participants is statistically meaningful.

John C Hansen, LEED AP

Hi Kamil,

Thanks for the clear and energetic framing here. I appreciate your pushback on the way the MIT study is being interpreted. Still, I’d like to challenge one of the points you made, or at least invite more support for it:

“Most organizations are rolling out AI tools without teaching people how to use them strategically.”

This is a strong claim, and I think your readers deserve at least a reference or two, whether it’s a survey, case study, or even anecdotal evidence from your own consulting work. Without that, it risks sounding like the very kind of generalization we’re critiquing in the MIT study’s conclusions.

Speaking from personal experience: when I began using AI, I didn’t have a guidebook either. I instinctively knew that writing prompts wasn’t my strength, and I dreaded having to learn the language of AI the way I’ve had to learn so many other systems and tools over the years. But I jumped in anyway.

And here’s the interesting part: no, I don’t remember every word I’ve written with my AI assistant. But I do recognize my own voice and intent behind it. Every idea has passed through my judgment. The writing is mine and I take ownership of it even when it’s better than I could have managed alone.

Maybe the same is true for some organizations: they jump in without much planning. The challenge isn’t to stop the leap, it’s to pay attention to where they land, and help them recalibrate from there.

And just to echo your own analogy: whether AI is used as a crutch or a precision power tool, does everyone with a broken leg remember every step they took using the crutch? And does every craftsman remember every 2×4 they cut with the circular saw, or every hole they drilled with the power drill?

John C Hansen, LEED AP

Quick follow-up to my earlier comment—

On reflection, I realized my final analogy (about the crutch and the circular saw) may have wandered a bit from the main point about memory. What I really meant to say is this: we don’t measure craftsmanship by how many motions we remember—we measure it by what we build. And with AI, it’s much the same.

The key isn’t whether we recall every phrase, but whether the result reflects our intent and judgment.

Darren Boey

John, I’m in the advisory space and have been engaging with many senior leaders in banks and financial firms who say exactly the same thing. I can’t name them, but Kamil is right when he says corporations are rolling AI out in a rush to innovate and look good - but without much effort to educate their people.

David Todd

“Organizations… hand employees ChatGPT access and expect magic. When results disappoint, they blame the technology.”

Excellent point, but one that imo proves the larger point. What’s missing with this and every other subject in our discourse is nuance. This piece suggests AI is great but people are using it wrong. MIT suggests AI is a brain-rot machine. Others say it’s garbage and/or the greatest tech grift ever. Still others talk of mass job loss and societal decay just around the corner. Where’s the nuance? We keep making the same mistake over and over: big, full-throated pronouncements that lack nuance, tint, and subtlety. No wonder everyone’s so fired up all the time.

SipAndRamble

The problem isn't always AI; it's that organizations aren't in a position to provide the education people need to use it effectively. I hear everyone saying teachers should teach with AI effectively, but they themselves haven't been given the tools to do that. They either use it as a crutch or are terrified of it.
