The danger inherent in ChatGPT: ‘You can no longer look under the hood’

What happened? “Half the time it answered correctly!” She wanted to see whether the so-called language model would “fall for typical trick questions”. And it certainly did: “The other half of the time, it just wasn’t smart enough to spot the pitfalls.”

Taking exams

The new version of ChatGPT is mainly distinguished by the number of words it can handle. “Version 3.5, which anyone can try online, can read and respond to 3,000 words. Version 4 can read and understand 6,000 to 24,000 words. That means you can ask more complex questions,” such as math puzzles, or even feed entire quizzes to the text generator.

Both the previous version and this one can also take exams. “A lecturer at the University of Groningen gave his exam to ChatGPT and it passed,” says Van Stegeren enthusiastically.

“I know students have submitted exam questions to it, but GPT-4 is also the version that was built into the Bing search engine a few weeks ago. You can ask it all kinds of things and, more and more often, get a good answer. You can now actually hold chat conversations with it, like talking to a bot on WhatsApp.”


But, she continues, the program also has drawbacks. “These kinds of language models tend to hallucinate: they make up facts when they have too little information. For example, you could ask the language model to prove that 12 is a prime number, and it would happily go along with it. But of course 12 is not prime. If you don’t have enough knowledge yourself, you can’t tell when the model is talking nonsense and when what it says is true.”
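The claim about 12 is easy to verify mechanically, which is exactly what a hallucinating language model fails to do. A few lines of Python (an illustration added here, not part of the interview) show the check:

```python
def is_prime(n: int) -> bool:
    """Trial division: n is prime if no integer in 2..sqrt(n) divides it."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False  # found a divisor, so n is composite
        d += 1
    return True

print(is_prime(12))  # False: 12 = 2 * 6
print(is_prime(13))  # True
```

Any attempted “proof” that 12 is prime contradicts this trivial computation, so a model that produces one is demonstrably confabulating rather than reasoning.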


In addition, of course, there is a company behind ChatGPT. “This is very cool and innovative technology, but there is an American tech company behind it. And that company has shareholders and a profit motive. In recent years, I’ve seen OpenAI become less open about exactly how their models work. In the past they published everything the language model was based on, but now they proudly announce that they keep it all secret. We now have ChatGPT and we can talk to it, but we can no longer see what’s happening under the hood with the data,” says the researcher.


And the data is where the danger of such language models lies. “Everyone is going to test it wildly, and that’s very good for the field. You see that big companies like Slack and Discord want to integrate it into their services: many tech companies want to ride this huge wave of interest. But if OpenAI gets integrated everywhere, pieces of our data end up all over the place, and all of it flows to one American company. I find that disconcerting.”
