The Evolving Landscape of Online Information
Over the past three months, large language models (LLMs) have dominated Twitter, the news, and the tech industry, primarily thanks to the release of ChatGPT.
For the uninitiated, here is a screenshot of the chatbot’s answer to a simple question.
Now, it is a phenomenal product and something I find myself using regularly. Some have found it so useful that it has led to some pretty spicy takes online:
“I never use Google anymore. For answers, I go straight to ChatGPT”
In my mind, this roughly translates to "I never go to primary sources anymore; I let ChatGPT generate answers for me." This is probably a hot take or clickbait, but I wanted to consider the implications of this possible reality by focusing on a specific question: where does this kind of workflow leave Wikipedia and other original content sites?
I regularly use Wikipedia because of its accurate content and solid editorial process. It is an excellent source for learning about a wide variety of topics. But what happens to a site like that when it no longer needs to be visited, because answers are so readily accessible via ChatGPT or an equivalent? How is new content sourced, and is there even a business model that lets Wikipedia continue to exist?
ChatGPT uses the data on Wikipedia to generate answers; that data is part of what makes it good at answering questions. But the hot take above illustrates a plausible scenario in which no one visits Wikipedia at all. Where does that leave the site? Does it still exist? How does it continue to operate if no one knows to donate to keep it running? And what incentive do people have to visit Wikipedia, or to contribute content that makes it that much better, when ChatGPT lets them bypass it entirely?
I have no answers, but these questions have been on my mind recently. What’s the incentive to build original content sources when that content might be hidden behind an intelligent chatbot?