{"id":20227,"date":"2023-07-25T03:34:52","date_gmt":"2023-07-25T03:34:52","guid":{"rendered":"https:\/\/nftandcrypto-news.com\/nft\/is-chatgpt-getting-worse-with-time-a-new-study-says-yes\/"},"modified":"2023-07-25T03:34:54","modified_gmt":"2023-07-25T03:34:54","slug":"is-chatgpt-getting-worse-with-time-a-new-study-says-yes","status":"publish","type":"post","link":"https:\/\/nftandcrypto-news.com\/nft\/is-chatgpt-getting-worse-with-time-a-new-study-says-yes\/","title":{"rendered":"Is ChatGPT Getting Worse with Time? A New Study Says Yes"},"content":{"rendered":"
\n

Recent observations from users, and now researchers, suggest that ChatGPT, the renowned artificial intelligence (AI) model developed by OpenAI, may be exhibiting signs of performance degradation. However, the reasons behind these perceived changes remain a topic of debate and speculation.

Last week, a study from a collaboration between Stanford University and UC Berkeley, published in the arXiv preprint archive, highlighted noticeable differences in the responses of GPT-4 and its predecessor, GPT-3.5, over the few months since the former’s March 13 debut.

A decline in accurate responses

One of the most striking findings was GPT-4’s reduced accuracy in answering complex mathematical questions. For instance, while the model demonstrated a high success rate (97.6 percent) in answering queries about large-scale prime numbers in March, its accuracy on the same prompt plummeted to a mere 2.4 percent in June.
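
The study’s prime-number probe is straightforward to approximate. Below is a minimal sketch, assuming the 2023-era openai Python client and sympy for ground truth; the prompt wording, model name, number range, and sample size are illustrative assumptions rather than the paper’s exact protocol.

```python
# Minimal sketch of a drift probe in the spirit of the study: ask the model
# whether a large number is prime, then score its answers against ground truth.
# Assumes the 2023-era openai client; prompt, model, and sample size are
# illustrative, not the study's exact protocol.
import openai
from sympy import isprime, randprime

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

def ask_is_prime(n: int, model: str = "gpt-4") -> bool:
    """Ask a yes/no primality question and parse the reply."""
    resp = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user",
                   "content": f"Is {n} a prime number? Answer only Yes or No."}],
        temperature=0,
    )
    answer = resp["choices"][0]["message"]["content"]
    return answer.strip().lower().startswith("yes")

def drift_probe(trials: int = 50) -> float:
    """Score the model against sympy's ground truth on a balanced sample."""
    correct = 0
    for i in range(trials):
        p = randprime(10**4, 10**5)
        n = p if i % 2 == 0 else p + 1  # p + 1 is even, hence composite
        correct += ask_is_prime(n) == isprime(n)
    return correct / trials

print(f"accuracy: {drift_probe():.1%}")
```

Running the same probe against the same model endpoint at different dates is what turns a drift claim from anecdote into a measurable comparison.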

The study also pointed out that, while older versions of the bot offered detailed explanations for their answers, the latest iterations seemed more reticent, often forgoing step-by-step solutions even when explicitly prompted. Interestingly, during the same period, GPT-3.5 showed improved capabilities in addressing basic math problems, though it still struggled with more intricate code generation tasks.

Glad that someone did a scientific study showing what we’ve all observed:

ChatGPT (GPT4) has become worse over time.

I still use it regularly and pay the $20/month but hope it gets better soon. pic.twitter.com/IwQl4zP8R1

— Peter Yang (@petergyang) July 19, 2023

These findings have fueled online discussions on the topic, particularly among regular ChatGPT users who have long wondered about the possibility of the program being “neutered.” Many have taken to platforms like Reddit to share their experiences, with some speculating whether GPT-4’s performance is genuinely deteriorating or whether users are simply becoming more discerning of the system’s inherent limitations. Some users recounted instances where the AI failed to restructure text as requested, opting instead for fictional narratives. Others highlighted the model’s struggles with basic problem-solving tasks, spanning both mathematics and coding.

Coding ability changes, speculation, and more

The research team also delved into GPT-4’s coding capabilities, which appeared to have regressed. When the model was tested using problems from the online learning platform LeetCode, only 10 percent of the generated code adhered to the platform’s guidelines. This marked a significant drop from the 50 percent success rate observed in March.
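
The article doesn’t describe the study’s grading harness, but a strict “directly executable” criterion is easy to sketch: count an answer as passing only when the model’s raw reply parses as Python verbatim. The parse-based criterion and the chatty reply below are assumptions for illustration.

```python
# Minimal sketch of a "directly executable" check: the raw reply must parse
# as Python with no post-processing. The pass criterion is an assumption;
# the study's exact harness is not detailed in this article.
import ast

def is_directly_executable(answer: str) -> bool:
    """Pass only if the model's reply is valid Python as-is."""
    try:
        ast.parse(answer)
        return True
    except SyntaxError:
        return False

def pass_rate(answers: list[str]) -> float:
    return sum(map(is_directly_executable, answers)) / len(answers)

# Cosmetic wrapping alone can sink the score: prose around the code
# makes the reply a SyntaxError even if the code itself is correct.
chatty = "Sure! Here is the code:\nprint('hello')"
bare = "print('hello')"
print(pass_rate([chatty, bare]))  # 0.5
```

Under a criterion like this, a model that starts wrapping otherwise-correct code in explanatory prose or formatting would register a steep drop without its underlying coding ability changing at all, which is one reason such numbers merit careful reading.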

OpenAI’s approach to updating and fine-tuning its models has always been somewhat enigmatic, leaving users and researchers to speculate about the changes made behind the scenes. With AI regulation and its ethical use under debate worldwide and legislation in the works, transparency is increasingly on the minds of government regulators and of the everyday users of the ever-more-frequent AI-based tech products.

While the model’s responses seemed to lack the depth and rationale observed in earlier versions, the recent study did note some positive developments: GPT-4 demonstrated enhanced resistance to certain types of attacks and showed a reduced propensity to respond to harmful prompts.

Peter Welinder, OpenAI’s VP of Product, addressed the public’s concerns more than a week before the study was released, stating that GPT-4 has not been “dumbed down.” He suggested that as more users engage with ChatGPT, they might become more attuned to its limitations.

No, we haven’t made GPT-4 dumber. Quite the opposite: we make each new version smarter than the previous one.

Current hypothesis: When you use it more heavily, you start noticing issues you didn’t see before.

— Peter Welinder (@npew) July 13, 2023

While the study offers valuable insights, it also raises more questions than it answers. The dynamic nature of AI models, combined with the proprietary nature of their development, means that users and researchers must often navigate a landscape of uncertainty. As AI continues to shape the future of technology and communication, the call for transparency and accountability is likely to only grow louder.