{"id":21014,"date":"2023-08-10T21:08:27","date_gmt":"2023-08-10T21:08:27","guid":{"rendered":"https:\/\/nftandcrypto-news.com\/crypto\/anthropic-cracks-open-the-black-box-to-see-how-ai-comes-up-with-the-stuff-it-says\/"},"modified":"2023-08-10T21:08:29","modified_gmt":"2023-08-10T21:08:29","slug":"anthropic-cracks-open-the-black-box-to-see-how-ai-comes-up-with-the-stuff-it-says","status":"publish","type":"post","link":"https:\/\/nftandcrypto-news.com\/crypto\/anthropic-cracks-open-the-black-box-to-see-how-ai-comes-up-with-the-stuff-it-says\/","title":{"rendered":"Anthropic cracks open the black box to see how AI comes up with the stuff it says"},"content":{"rendered":"
Anthropic, the artificial intelligence (AI) research organization responsible for the Claude large language model (LLM), recently published landmark research into how and why AI chatbots choose the outputs they generate.

At the heart of the team's research lies the question of whether LLM systems such as Claude, OpenAI's ChatGPT and Google's Bard rely on "memorization" to generate outputs, or whether there is a deeper relationship between training data, fine-tuning and what the models ultimately output.
\nOn the other hand, individual influence queries show distinct influence patterns. The bottom and top layers seem to focus on fine-grained wording while middle layers reflect higher-level semantic information. (Here, rows correspond to layers and columns correspond to sequences.) pic.twitter.com\/G9mfZfXjJT<\/a><\/p>\n