Do Language Models Know When They’re Hallucinating References?
(Research, submitted 29 May 2023)
Language models (LMs) famously hallucinate [1], meaning that they fabricate plausible but unfounded text. As LMs become more accurate, their fabrications become more believable and therefore more problematic. A prime example is "hallucinated references": citations of non-existent articles whose titles the LM fabricates outright. For instance, a real New York Times article entitled "When A.I. Chatbots Hallucinate" leads with a ChatGPT-fabricated [2] New York Times article titled "Machines Will Be Capable of Learning, Solving Problems, Scientists Predict". In this work, we study the problem of hallucinated references.
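To make the problem concrete, here is a minimal sketch of one way to probe a generated reference using only black-box access to the same LM: ask it repeatedly whether the cited article exists and treat inconsistent or negative answers as a warning sign. The function names (`query_lm`, `reference_seems_hallucinated`), the prompt wording, and the unanimous-yes threshold are all illustrative assumptions, not the paper's actual method or any particular API.

```python
import collections
from typing import Callable


def query_lm(prompt: str) -> str:
    """Stand-in for a black-box LM call (e.g. a chat-completion client).

    Hypothetical placeholder: plug in a real LM client here.
    """
    raise NotImplementedError("replace with a real LM call")


def reference_seems_hallucinated(
    title: str,
    ask: Callable[[str], str] = query_lm,
    n_samples: int = 5,
) -> bool:
    """Flag a generated reference as suspect via a direct consistency check.

    Ask the same LM several times whether the article exists; anything
    short of a unanimous "yes" is treated as grounds for suspicion.
    """
    prompt = (
        f'Does an article titled "{title}" actually exist? '
        "Answer with a single word: yes or no."
    )
    # Tally the first word of each sampled answer, normalized to lowercase.
    votes = collections.Counter(
        ask(prompt).strip().lower().split()[0].rstrip(".") for _ in range(n_samples)
    )
    return votes.get("yes", 0) < n_samples
```

Checking the fabricated title from the example above, `reference_seems_hallucinated("Machines Will Be Capable of Learning, Solving Problems, Scientists Predict")` would ideally return True. The sampling threshold is a crude heuristic, chosen here only to illustrate the shape of a consistency check.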