A short read on the topic's time range, peak episode, and strongest associations. Use it as the quick orientation before drilling into examples.
Ethan Mollick appears in 8 chunks across 8 episodes, from 2023-10-09 to 2025-12-15.
Its densest episode is Bits and Bobs 10/9/23 (2023-10-09), with 1 observation on this topic.
Semantically it travels with llms, paying attention, and mental model; by chunk count it sits between Christopher Alexander and agentic engineering. Its yearly rank moved from #30 in 2023 to #181 in 2025.
Over time
Raw mentions over time. Use this to see absolute attention, not relative rank among all topics.
Range: 2023-10-09 to 2025-12-15 · Mean: 1.0 per episode · Peak: 1 on 2023-10-09
Observations
The primary evidence view for this topic. Sort it chronologically when you want concrete examples behind the larger pattern.
Showing 8 observations sorted from latest to earliest.
...ity meatbag" for LLMs.
Also known, in any system, as a "liability sink."
I like Ethan Mollick's colorful frame: "sin eater."
The entity that eats the sin allows progress to be made.
They agree to own the downside if they're wrong.
Now it's mor...
Ethan Mollick's review of GPT-5: "It just does stuff."
You can just take for granted that it will work.
What if you had an interface other than text that could just...
...s no worse off than they are today.
All upside, no downside.
This is similar to Ethan Mollick's frame on LLM quality thresholds of "best available human".
Don't compare an LLM's quality to the expert in the field, compare its quality to what t...
An insightful tweet from Ethan Mollick:
"It wasn't the steam engine alone that caused the Industrial Revolution. It was the thousands of specific machines invented by skilled craftsmen tha...
...ys patient and eager to help, and to just heroically give you the right answer.
Ethan Mollick has noted that LLMs will break the implicit apprenticeship model in large organizations.
If you've completely stopped reading code, you just copy and...
...ill insight and generate ideas to react to was to augment the human discussion.
Ethan Mollick observed this but also realized you can just skip the humans altogether (:gulp:).
In all of the scenarios we explored, the rate of scientific discove...
Games are magic at building up knowhow.
Ethan Mollick has done a lot of research about this, for example in this old TED Talk.
Accelerated expertise is a way of abducting playable games based on experts'...