Why ChatGPT does so badly at math
The surprising artificial intelligence-based bot ChatGPT handles conversations on many topics well, producing fairly natural language, but it shares a weakness with many people: mathematics. When asked questions that combine reading comprehension, calculation, and logic, it can give incorrect answers; and although we found that it can correct itself when prompted appropriately within the conversation, in general it tends to proceed in a random and inconsistent way.
ChatGPT, short for Generative Pretrained Transformer, is a project developed by OpenAI, the company also behind the other phenomenon of 2022, Dall-E, to hold even complex conversations, with credible syntax, with a human (or presumed human) user. As mentioned, it works by drawing on an immense database of text samples catalogued from various sources such as books, newspaper articles, and web pages. It also supports Italian and is accessible free of charge after registering on the project's official site, for anyone who wants to entertain themselves with invented stories and digressions of various kinds.
Mathematics, however, remains a tricky topic for ChatGPT. Several users have reported in recent days that even the most elementary algebra problems throw the system into crisis: it responds with its usual confidence, but the results are spectacularly wrong. We too posed a simple logic problem (see cover photo) to ChatGPT, and it immediately showed its shortcomings. Once the error was explained to it, the artificial intelligence partially corrected course, though the result was wrong again.
Only with a second (double) hint was the system able to give the correct answer.
Opening a new chat and posing the same problem, but with different numbers, ChatGPT immediately finds the correct answer, almost as if it had learned from its mistake. Yet it is enough to ask the same question from another computer, with another user, and the answer earns a red pencil again. Paradoxically, you can even ask the bot itself to propose an algebra problem without revealing the result, and it will then declare your resolution correct while contradicting itself.
But why does ChatGPT have these gaps in mathematics, despite being, to all intents and purposes, a computer? The answer is simple: chatbots based on artificial intelligence are built to be excellent conversationalists, calibrated mainly to offer fluent answers in the interlocutor's language, drawing information from their training database but without having developed real "experience" in any field.
The developers' warning at login says it clearly: the model "may generate incorrect information", and this is especially true for topics that require lateral thinking, creative approaches, or a good level of abstraction, such as math and logic problems.
The system searches its training data for pertinent answers and may therefore fail to find the correct route, proceeding by trial and error and without much real understanding. After all, ChatGPT is still under development; the current version of the model will be updated well into 2023 and should perform better.
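The difference between retrieving a fluent answer and actually computing one can be sketched in a toy example. This is a deliberate simplification for illustration only, not how ChatGPT really works internally: a lookup over memorized question-answer pairs responds confidently to anything it has seen before, but can only guess at unseen arithmetic, while a genuine calculator computes the result every time.

```python
# Toy illustration (NOT ChatGPT's actual mechanism): answering from
# memorized examples vs. actually computing the arithmetic.

# A tiny "training set" of previously seen question-answer pairs.
memorized = {
    "2 + 2": "4",
    "10 * 3": "30",
}

def chatbot_style_answer(question: str) -> str:
    """Answer confidently from memorized examples; guess when unseen."""
    if question in memorized:
        return memorized[question]
    # No match found: produce a fluent-sounding but unreliable guess.
    return "42"

def calculator_answer(question: str) -> str:
    """Actually compute a simple 'a op b' arithmetic expression."""
    left, op, right = question.split()
    a, b = int(left), int(right)
    result = {"+": a + b, "-": a - b, "*": a * b}[op]
    return str(result)

print(chatbot_style_answer("2 + 2"))    # "4"  — seen before, correct
print(chatbot_style_answer("17 + 26"))  # "42" — unseen, confident but wrong
print(calculator_answer("17 + 26"))     # "43" — computed, correct
```

The lookup version never "knows" it is wrong: it returns its guess with the same confidence as a memorized answer, which mirrors the behavior the article describes.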