I recommend: Co-Intelligence: Living and Working with AI
Hey everyone, this post is for everyone, but it's a part of a new series I'm doing on More Birchtree where I review the books and games I read and play in 2025. My movie reviews have been a relative hit on social media for the last few years, and while nothing will change with those, I did want to share more suggestions since movies aren't the only medium I enjoy.
I'm going to make the post titles descriptive enough to know what I'm talking about and whether I think it's worth your time (using Skill Up's format), but the full review will be in the member's post. Previous reviews in this series have been Portal: Revolution and Life as No One Knows It.
The review
Co-Intelligence: Living and Working with AI is a book from early 2024 by Ethan Mollick that I found to be a quick, interesting read about how to think about AI in the modern world and how to work with the tools available to us today. What I liked about this book is that it's very down-to-earth in how it talks about LLMs and how to integrate them into our lives. Don't worry, this isn't some techno-optimist manifesto or anything; it's very much coming at all of this with a sense of curiosity and healthy skepticism. The "AI is the devil and we should just not use it" folks will be disappointed, and said techno-optimists will be annoyed that downsides are brought up at all, but as someone in the middle I found the curiosity on display here to be refreshing.
One of the things he brings up early in the book is the question of how fast these tools will continue to improve in the future. OpenAI's GPT-4 was way better than 3.5, which in turn was way better than 3.0, but subsequent updates haven't had as much oomph, and it's led people like me to wonder if the pace of advancement in these models was already slowing considerably. Mollick finds this unlikely, but he does bring up the very important point that even if LLMs never advanced beyond the capabilities of what we have right now, they would still be a major player in the world going forward. Even if the models stay largely as capable as they are today, people will get better at packaging them into software in more useful ways, and advances in hardware will make these models run way faster and on local devices in a way the frontier models (aka the ones you've heard of: GPT, Claude, Gemini) simply can't today. And with more time, people are simply going to find more interesting ways to use them, both for good and for bad.
I think this is important context, because the genie isn't going back in the bottle. You may hate AI with every ounce of your being, but hoping that it just goes away and you won't have to deal with it is exceptionally wishful thinking.
Along a similar thread, Mollick warns that basing your criticisms on how well AI models do things today is dangerous, because the models will improve and you'll need to change your argument as they fix the things you found lacking. He cites some numbers around hallucinations that show a remarkable improvement from GPT-3.5 to 4, and those have improved since then as well.
This line of thinking also got me pondering the "LLMs are just fancy auto-correct guessing at the next word" argument I hear a lot. It's meant to dismiss them as useless, but it feels like dismissing computers because they're ultimately just a bunch of electricity running through sand, flipping a bunch of switches between 0 and 1. Yes, that's technically true, but it obscures what's actually going on.
Anyway, this was meant to be a book review, so I should mention that whatever critiques you're thinking of right now about why none of the benefits matter due to the bad things this tech can bring, you'll be happy to hear those are brought up as well. The author's general feeling is that, like all technology before it, there will be good and bad uses for it, and we'll need people using the tech to guide it toward uses that ultimately benefit us all.
He closes the book with a handful of predictions for what direction this tech could go in the long term, and asserts he doesn't know. What he does know is that just because AI can do something doesn't mean we should have it do that. Just because AI can make something that looks like art doesn't mean we will be entertained by that thing. Just because we can have AI write our blogs for us or even our books doesn't mean that we should. Just because we can listen to podcasts with AI voices talking like humans doesn't mean those podcasts are of interest to anyone. We record one episode of Comfort Zone per week; do you wish you could open your podcast app any day of the week and generate a new episode on the spot with our voices delivering you what sounds like our show? I don't think so, and that's because part of the podcast art form is the relationship you form with the hosts. We'll continue to see people do too much with this stuff in the coming years, but as a people we need to realize that we don't need AI to do everything it possibly can do.