The danger of AI imposing reality by consensus
Let's say you hear that Elon Musk launched a nuke at the moon. You take out your smartphone with its 100x zoom and snap a photo. The moon looks the same as ever. Fake news, right?
Except your photo is not a true image of the moon, and this isn't science fiction, but 2020: modern smartphones use machine learning to "enhance" images by combining them with details from images of similar objects. Samsung's S20 "Space Zoom" will basically replace your blurry moonshot with stock photography. This is necessary because the short focal length of a smartphone lens places fundamental physical limits on image resolution, no matter how good the sensor gets. A 100x zoom works by extrapolating the missing detail, in effect plugging in other people's photos.
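To make the idea concrete, here is a minimal sketch of reference-based "enhancement". It is not Samsung's actual pipeline (which uses trained neural networks); it is a toy patch-matching version on synthetic data, and every name in it is illustrative. The point it demonstrates is the same: the fine detail in the output is borrowed from a library of other images, not observed by the sensor.

```python
# Toy reference-based super-resolution: NumPy only, synthetic data.
# The "enhanced" output's fine detail comes from a patch library,
# i.e. other people's photos, not from the camera.
import numpy as np

rng = np.random.default_rng(0)

def downsample(img, factor=4):
    """Crude box-filter downsample: what a small sensor actually captures."""
    h, w = img.shape
    return img[:h - h % factor, :w - w % factor] \
        .reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def enhance(blurry, library, patch=4):
    """Replace each low-res patch with the high-res library patch whose
    downsampled version matches best. The detail is borrowed, not observed."""
    out = np.zeros((blurry.shape[0] * 4, blurry.shape[1] * 4))
    for i in range(0, blurry.shape[0], patch):
        for j in range(0, blurry.shape[1], patch):
            target = blurry[i:i + patch, j:j + patch]
            # Pick the library patch whose low-res version is closest.
            best = min(library, key=lambda p: np.sum((downsample(p) - target) ** 2))
            out[i * 4:(i + patch) * 4, j * 4:(j + patch) * 4] = best
    return out

# "Other people's photos": high-res patches the pipeline can draw on.
library = [rng.random((16, 16)) for _ in range(50)]

scene = rng.random((64, 64))           # the real scene
observed = downsample(scene)           # what the lens/sensor can resolve
enhanced = enhance(observed, library)  # sharp-looking, but partly fiction

print("observed:", observed.shape, "enhanced:", enhanced.shape)
```

The enhanced image is four times sharper than anything that passed through the lens, which is exactly the problem: its detail is a plausible guess drawn from prior data.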
That's all well and good when you can pull out a "dumb" telescope and see "unaugmented" reality with the naked eye, but what happens when ChatGPT replaces Google Search as the source of truth? GPT-4 is multimodal: it accepts images alongside text, and audio and video interfaces are following quickly. When all our communication and observation are mediated through this technology, our window onto reality may become an AI-driven consensus. ChatGPT is already known to give evasive or false answers on topics it deems taboo. What happens when your only window into the world is mediated by a political correctness filter?
It's too early to say how serious this will become, but the direction to push now is to use and support many competing language models, including self-hosted projects.
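One simple version of that habit: ask the same question of several independently run models and treat disagreement as signal rather than noise. The sketch below assumes an Ollama-style local HTTP endpoint (`http://localhost:11434/api/generate`); the model names are illustrative, and you would substitute whatever self-hosted models you actually run.

```python
# Cross-check an answer across several self-hosted models instead of
# trusting one AI's consensus. Assumes a local Ollama-style endpoint;
# the model names below are illustrative placeholders.
import json
from urllib import request

MODELS = ["llama3", "mistral"]  # whichever local models you have pulled

def ask(model: str, prompt: str) -> str:
    """Query one locally hosted model over the local HTTP API."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False})
    req = request.Request(
        "http://localhost:11434/api/generate",
        data=body.encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["response"]

def cross_check(prompt: str) -> None:
    """Print each model's answer side by side; divergence is worth noticing."""
    for model in MODELS:
        print(f"--- {model} ---\n{ask(model, prompt)}\n")

if __name__ == "__main__":
    cross_check("Did anything unusual happen to the moon this week?")
```

No single model in this setup gets to be the arbiter of what you saw; when they diverge, you know you're looking at interpretation, not observation.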