Baby. Bathwater. Toss?
New York Times: Good Morning. A.I. Does Not Have to Be Perfect or Have Human-Level Intelligence to Be Useful.
But this bot also might not care about some qualities that humans value. Perhaps it will spin out falsehoods that some writers won’t catch before publishing online. Or bad actors could use A.I. to create and distribute well-written disinformation more efficiently.
Here’s a real-world example: Ted Rall, a political cartoonist, recently asked ChatGPT to describe his relationship with Scott Stantis, another cartoonist. It falsely suggested Stantis had accused Rall of plagiarism in 2002 and that both of them had a public feud. None of this happened.
As similar A.I. replaces human tasks or current technologies (such as search engines), the falsehoods could mislead many more people.
The internet in general made it much easier to spread and stumble upon false information as well, and I suspect we all have examples of ourselves or loved ones (especially older loved ones) falling for something fake online. Sometimes it's mild (my dad sent me a fan-made trailer for Dune a few weeks back thinking it was the real one), and other times it's more vicious (gestures generally at the political conspiracies out there). We've all gotten better at not trusting everything we see online. Yes, it would be great if we could trust everything, but that's simply not a reality, and despite the issues we definitely still have, society has generally gotten better at this over time.
I've heard people say that they think ChatGPT is going to enable people to flood the internet with low-quality, factually inaccurate information, and sure, but like…have you seen the internet? Most of us browse the top 0.0001% of it, and the rest, which is mostly pure trash, never appears on our radar because tools built up over the past few decades ensure (in general) that the better stuff rises to the top. Obviously that's not perfect, but it's also not the end of the world, and the web overall delivers massive benefits.
The fact that these text generation tools can produce text that's not completely accurate is indeed a problem, but it's not terminal, and it's not a reason to throw out the entire category. These tools have real benefits, and they can empower people even as they mislead others. As a society we need to adapt to and mitigate the bad parts while embracing the good parts, just as we have for every other technological advance in the past (there were considerably fewer car accidents before automobiles existed…).