📕 I recommend: Co-Intelligence: Living and Working with AI

Posted by Matt Birchler
— 4 min read

Hey everyone, this post is for everyone, but it’s part of a new series I’m doing on More Birchtree where I review the books and games I read and play in 2025. My movie reviews have been a relative hit on social media for the last few years, and while nothing will change with those, I did want to share more suggestions since movies aren’t the only medium I enjoy.

I’m going to make the post titles descriptive enough that you’ll know what I’m reviewing and whether I think it’s worth your time (using Skill Up’s format), but the full review will be in the members’ post. Previous reviews in this series have been Portal: Revolution and Life as No One Knows It.

The review

Co-Intelligence: Living and Working with AI is a book from early 2024 by Ethan Mollick that I found to be a quick, interesting read about how to think about AI in the modern world and how to work with the tools available to us today. What I liked about this book is that it’s very down-to-earth in how it talks about LLMs and how to integrate them into our lives. Don’t worry, this isn’t some techno-optimist manifesto or anything; it comes at all of this with a sense of curiosity and healthy skepticism. The “AI is the devil and we should just not use it” folks will be disappointed, and said techno-optimists will be annoyed that downsides are brought up at all, but as someone in the middle I found the curiosity on display here refreshing.

One of the things he brings up early in the book is the question of how fast these tools will continue to improve in the future. OpenAI’s GPT-4 was way better than 3.5, which in turn was way better than 3.0, but subsequent updates haven’t had as much oomph, and it’s led people like me to wonder if the pace of advancement in these models was already slowing considerably. Mollick finds this unlikely, but he does make the very important point that even if LLMs never advance beyond what we have right now, they will still be a major player in the world going forward. Even if the models stay only as capable as they are today, people will get better at packaging them into software in more useful ways, and advances in hardware will let these models run far faster, and on local hardware, in a way the frontier models (aka the ones you’ve heard of: GPT, Claude, Gemini) simply can’t today. And with more time, people are simply going to find more interesting ways to use them, both for good and for bad.

I think this is important context, because the genie isn’t going back in the bottle. You may hate AI with every ounce of your being, but hoping that it just goes away and you won’t have to deal with it is exceptionally wishful thinking.

Along a similar thread, Mollick warns that basing your criticisms on how well AI models do things today is dangerous, because they will improve and you’ll need to change your argument as they fix the things you found lacking. He cites some numbers around hallucinations that show a remarkable improvement from GPT-3.5 to 4, and those have improved since then as well.

This line of thinking also got me pondering the “LLMs are just fancy auto-correct guessing at the next word” argument I hear a lot. This is meant to dismiss them as useless, but it feels like dismissing computers because they’re ultimately just a bunch of electricity running through sand, flipping a bunch of switches between 0 and 1. Yes, that’s technically true, but it undersells what’s actually going on.

Anyway, this was meant to be a book review, so I should mention that whatever critiques you’re thinking of right now about why none of the benefits matter given the bad things this tech can bring, you’ll be happy to hear those are brought up as well. The author’s general feeling is that, like all technology before it, there will be good and bad uses for it, and we’ll need people using the tech to guide it toward uses that ultimately benefit us all.

He closes the book with a handful of predictions for what direction this tech could go in the long term, and admits he doesn’t know. What he does know is that just because AI can do something doesn’t mean we should have it do that thing. Just because AI can make something that looks like art doesn’t mean we will be entertained by it. Just because we can have AI write our blogs, or even our books, doesn’t mean that we should. Just because we can listen to podcasts with AI voices talking like humans doesn’t mean those podcasts are of interest to anyone. We record one episode of Comfort Zone per week; do you wish you could open your podcast app any day of the week and generate a new episode on the spot, with our voices delivering what sounds like our show? I don’t think so, and that’s because part of the podcast art form is the relationship you form with the hosts. We’ll continue to see people do too much with this stuff in the coming years, but as a people we need to realize that we don’t need AI to do everything it possibly can do.