{"id":20151,"date":"2023-07-21T21:33:00","date_gmt":"2023-07-21T21:33:00","guid":{"rendered":"https:\/\/nftandcrypto-news.com\/crypto\/ai21-labs-debuts-anti-hallucination-feature-for-gpt-chatbots\/"},"modified":"2023-07-21T21:33:02","modified_gmt":"2023-07-21T21:33:02","slug":"ai21-labs-debuts-anti-hallucination-feature-for-gpt-chatbots","status":"publish","type":"post","link":"https:\/\/nftandcrypto-news.com\/crypto\/ai21-labs-debuts-anti-hallucination-feature-for-gpt-chatbots\/","title":{"rendered":"AI21 Labs debuts anti-hallucination feature for GPT chatbots"},"content":{"rendered":"
AI21 Labs recently launched “Contextual Answers,” a question-answering engine for large language models (LLMs).
When connected to an LLM, the new engine allows users to upload their own data libraries in order to restrict the model’s outputs to specific information.
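As a rough illustration of that workflow, the sketch below posts a question together with an organization-supplied document to a contextual-answers-style endpoint. The endpoint path, field names, and response shape here are assumptions for illustration, not AI21’s documented API.

```python
import requests

API_KEY = "YOUR_AI21_STUDIO_KEY"  # hypothetical placeholder

# Organization-supplied context the model is restricted to.
context = (
    "Acme Corp's travel policy: employees may book economy-class "
    "flights for trips under six hours and must file expense "
    "reports within 30 days of travel."
)

question = "How long do I have to file travel expenses?"

# Assumed request shape: the engine may only draw on the supplied
# context, not on the model's general training data.
resp = requests.post(
    "https://api.ai21.com/studio/v1/answer",  # assumed endpoint path
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"context": context, "question": question},
    timeout=30,
)
resp.raise_for_status()
print(resp.json().get("answer"))
```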
The launch of ChatGPT and similar artificial intelligence (AI) products has been paradigm-shifting for the AI industry, but a lack of trustworthiness makes adoption a difficult prospect for many businesses.
According to research, employees spend nearly half of their workdays searching for information. This presents a huge opportunity for chatbots capable of performing search functions; however, most chatbots aren’t geared toward enterprise.
AI21 developed Contextual Answers to address the gap between chatbots designed for general use and enterprise-level question-answering services by letting users feed in their own data and document libraries.
According to a blog post from AI21, Contextual Answers allows users to steer AI answers without retraining models, thus mitigating some of the biggest impediments to adoption:
“Most businesses struggle to adopt [AI], citing cost, complexity and lack of the models’ specialization in their organizational data, leading to responses that are incorrect, ‘hallucinated’ or inappropriate for the context.”
One of the outstanding challenges related to the development of useful LLMs, such as OpenAI’s ChatGPT or Google’s Bard, is teaching them to express a lack of confidence.
Typically, when a user queries a chatbot, it will output a response even when its data set lacks the information needed for a factual answer. In these cases, rather than give a low-confidence response such as “I don’t know,” LLMs often make up information without any factual basis.
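A grounded engine can instead signal explicitly that the answer is not in the supplied documents, and callers can check for that signal before trusting the output. The sketch below shows one way such a check might look; the `answerInContext` response field is an assumption for illustration, not a confirmed part of AI21’s API.

```python
def answer_or_abstain(data: dict) -> str:
    """Return the grounded answer, or an explicit abstention when the
    engine reports the answer is not in the supplied documents.

    The "answerInContext" flag is an assumed response field, used here
    only to illustrate the refusal behavior described above.
    """
    if data.get("answer") is None or data.get("answerInContext") is False:
        # Surface an explicit "I don't know" instead of a fabrication.
        return "I don't know: the answer is not in the provided documents."
    return data["answer"]

# Example: a response where the engine abstains rather than hallucinates.
print(answer_or_abstain({"answer": None, "answerInContext": False}))
```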
Researchers dub these outputs “hallucinations” because the machines generate information that seemingly doesn’t exist in their data sets, like humans who see things that aren’t really there.
\nWe’re excited to introduce Contextual Answers, an API solution where answers are based on organizational knowledge, leaving no room for AI hallucinations. <\/p>\n
\u27a1\ufe0f https:\/\/t.co\/LqlyBz6TYZ pic.twitter.com\/uBrXrngXhW<\/a><\/p>\n