Monday, July 24, 2023

STANFORD SCIENTISTS' RESEARCH SHOWS CHATGPT IS GETTING STUPIDER


Who would have ever thought the Singularity would be so stupid?
Regardless of what its execs claim, researchers are now saying that yes, OpenAI's GPT large language model (LLM) appears to be getting dumber.
In a new, yet-to-be-peer-reviewed study, researchers out of Stanford and Berkeley found that over a period of a few months, both GPT-3.5 and GPT-4 significantly changed their "behavior," with the accuracy of their responses appearing to decline. The findings validate user anecdotes about the apparent degradation of the latest versions of the software in the months since their releases.
"GPT-4 (March 2023) was very good at identifying prime numbers (accuracy 97.6 percent)," the researchers wrote in their paper's abstract, "but GPT-4 (June 2023) was very poor on these same questions (accuracy 2.4 percent)." 
"Both GPT-4 and GPT-3.5," the abstract continued, "had more formatting mistakes in code generation in June than in March." 
This study affirms what users have been saying for more than a month now: that as they've used the GPT-3.5- and GPT-4-powered ChatGPT over time, they've noticed it becoming, well, stupider.
The seeming degradation in accuracy has become troublesome enough that OpenAI vice president of product Peter Welinder attempted to dispel rumors that the change was intentional.

1 comment:

JEC said...

Kind of like what is happening with the US government now...