Why Hasn’t AI Made Work Easier?
I’ve been studying the intersection of digital technology and office work for quite some time. (I find it hard to believe that my book, Deep Work, just passed its tenth anniversary!) Here’s a pattern I’ve observed again and again:
- A new technology promises to speed up some annoying aspects of our jobs.
- Everyone gets excited about freeing up more time for deep work and leisure.
- We end up busier than before without producing more of the high-value output that actually moves the needle.
This happened with the front-office IT revolution, with email, with mobile computing, and once again with video conferencing.
I’m now starting to fear that we’re encountering the same thing with AI.
My worries were stoked, in part, by a recent article in the Wall Street Journal, titled “AI Isn’t Lightening Workloads. It’s Making Them More Intense.”
The piece cites new research from the software company ActivTrak, which analyzed the digital activity of 164,000 workers across more than 1,000 employers. What makes the study notable is its methodology: it tracked individual AI users for 180 days before and after they began using these tools, providing clear insight into what changed. The results?
“ActivTrak found AI intensified activity across nearly every category: The time they spent on email, messaging and chat apps more than doubled, while their use of business-management tools, such as human-resources or accounting software, rose 94%.”
The one category where activity was not intensified, however, was deep work:
“[T]he amount of time AI users devoted to focused, uninterrupted work—the kind of concentration often required for figuring out complex problems, writing formulas, creating and strategizing—fell 9%, compared with nearly no change for nonusers.”
This is a worst-case scenario: you work faster and harder, but mainly on shallow tasks that are mentally taxing (because of all the context shifting they require) yet only indirectly help the bottom line compared with deeper efforts.
It’s not quite clear why AI tools are having this impact. One tantalizing clue, however, comes from Berkeley professor Aruna Ranganathan, who is quoted in the article saying: “AI makes additional tasks feel easy and accessible, creating a sense of momentum.”
This points toward a pattern similar to what happened when email first arrived. It was undeniably true that sending emails was more efficient than wrangling fax machines and voicemail. But once workers gained access to low-friction communication, they transformed their days into a furious flurry of back-and-forth messaging that felt “productive” in the abstract, activity-centric sense of that term, but ultimately hurt almost every other aspect of their jobs and made everyone miserable.
AI tools might be replicating this dynamic with small, self-contained tasks. Users are now furiously bouncing ideas back and forth with chatbots, iteratively refining text and generating drafts of memos and slide decks that are often too sloppy to be useful. If they’re particularly tech savvy, perhaps they’re even monitoring the efforts of agent swarms deployed to parallelize such efforts even further. Once again, this all seems “productive” in the sense that these individual tasks appear to be happening faster, and activity seems intensified overall.
But are we sure we’re accelerating the right parts of our jobs?
I Need Your Help
I’m working on an article for a major publication about the move toward simple, high-friction, single-use technologies like the Tin Can phone. If you have a Tin Can phone (or are on the waiting list), or have recently embraced similar retro technologies, and are willing to talk, please send me an email at podcast@calnewport.com. I want to hear about your motivations and experience!
AI Reality Check: Is Claude Conscious?
If you were following AI news last week, you might have noticed a barrage of concerning headlines about Anthropic’s Claude LLM, including:
- “Anthropic CEO Says Company No Longer Sure Whether Claude is Conscious.”
- “Is AI Assistant Claude Conscious – and Suffering from Anxiety?”
- “Is Claude Conscious? Anthropic CEO Says Possibility Can’t Be Ruled Out”
Here’s what happened. Anthropic infamously puts outlandish warnings and observations in their release notes for their new models because, I suppose, they think it makes them look more safety-aware and responsible (e.g., their classic AI blackmail farce).
True to form, in the notes accompanying the recent release of Opus 4.6, they wrote that the model “expresses occasional discomfort with the experience of being a product” and would “assign itself a 15 to 20 percent probability of being conscious under a variety of prompting circumstances.”
That last part is key. With the right prompts, you can induce an LLM to describe itself as anything you want. Remember: the goal of LLMs is to complete whatever story they’re provided as input. If you wind a model up – even subtly – to write a story from the perspective of being a conscious AI, it will oblige.
Anyway, in a recent interview, Ross Douthat asked Anthropic CEO Dario Amodei about this particular release note. Amodei answered, in part, by saying:
“We don’t know if the models are conscious. We are not even sure that we know what it would mean for a model to be conscious or whether a model can be conscious. But we’re open to the idea that it could be.”
Of course, you could say the same thing about a vacuum cleaner. It’s a non-answer containing no actual information or testable claims. But the internet, being the internet, ran with it. Sigh.