{"id":24098,"date":"2023-10-25T18:42:03","date_gmt":"2023-10-25T18:42:03","guid":{"rendered":"https:\/\/nftandcrypto-news.com\/crypto\/researchers-in-china-developed-a-hallucination-correction-engine-for-ai-models\/"},"modified":"2023-10-25T18:42:06","modified_gmt":"2023-10-25T18:42:06","slug":"researchers-in-china-developed-a-hallucination-correction-engine-for-ai-models","status":"publish","type":"post","link":"https:\/\/nftandcrypto-news.com\/crypto\/researchers-in-china-developed-a-hallucination-correction-engine-for-ai-models\/","title":{"rendered":"Researchers in China developed a hallucination correction engine for AI models"},"content":{"rendered":"
A team of scientists from the University of Science and Technology of China and Tencent\u2019s YouTu Lab has developed a tool to combat \u201challucination\u201d by artificial intelligence (AI) models.<\/p>\n
Hallucination is the tendency of an AI model to generate high-confidence outputs that do not appear to be grounded in the information present in its training data. The problem permeates large language model (LLM) research, and its effects can be seen in models such as OpenAI\u2019s ChatGPT and Anthropic\u2019s Claude. <\/p>\n