
The future of LLMs is local

Posted by Matt Birchler
— 1 min read

I saw this post over on json.blog, "OpenAI Is Just Uber," which sounds like it makes sense, but I think it draws the wrong conclusions on a couple of fronts.

A decade later, ride-sharing hasn’t evolved significantly since its launch. Costs have risen as consumers now pay the actual marginal cost of their rides. It hasn’t undergone a transformative shift, but we do have a slightly better taxi system.

I’d actually disagree quite a bit here. Ride hailing is exponentially better than it was 10 years ago; if we were teleported back, we’d be shocked at how bad it was compared to today.

In ten years, I expect that LLMs won’t be that much more useful than they are today. Using these AI services will be much more expensive, because we will no longer have queries massively subsidized.

I’m also dubious that LLMs will get exponentially better in the next 10 years, but I think the concern that using LLMs will get more expensive misses the mark by a mile. If we agree that LLMs won’t get meaningfully smarter, then the local models Google has already announced for Android, and that Apple is clearly working on for iOS/macOS, become even more viable, and they’ll be completely free to run as much as you want.

Basically, I think if LLMs do get exponentially more useful, then server-side models that cost significant sums of money to run will continue to be prominent. But if they plateau in usefulness, then most people will run models locally on their devices most of the time because they’ll be quicker, more private, and just about as good. And remember that our phones and computers are getting faster every year, so these local LLMs will only keep running better.