ChatGPT takes the Dutch exam (and narrowly fails)

It’s called the “final exam,” and Marc van Oostendorp tried it: he had ChatGPT sit the Dutch VWO central final exam for Dutch. He wrote about his findings on Neerlandistiek.nl, an online magazine on Dutch language and literature.

The exam, taken by thousands of pre-university students earlier this week, consisted of a number of opinion pieces from newspapers and magazines, and students had to answer questions about those texts. Oostendorp writes that he “entered the texts and then put the questions to ChatGPT”. He then “always checked the answers as carefully as possible against the official answer key”.

Not a complete failure

According to the professor, it is not yet possible to say exactly what grade ChatGPT would have achieved: the grading standard has not yet been set, and it is still too early for that.

But: ChatGPT scored 33 points out of a total of 60. “It probably would have failed,” Oostendorp concludes, but only just. “It would not have been a terrible failing grade.”

What is this?

ChatGPT relies on an algorithm trained on large amounts of text, so it learns how people write and converse. When a message is sent, ChatGPT analyzes it and generates an appropriate response. This happens in real time, so it feels like you’re talking to a real person. It can be used for all kinds of purposes, such as holding conversations, practicing a foreign language, or writing texts. It is also useful for companies looking for a way to communicate with customers quickly and efficiently.

There are also some mitigating circumstances: ChatGPT is unable to read PDFs, so Oostendorp had to convert the articles to another readable format. In doing so, he removed the line numbers, “so this information was missing.”


In addition, the paragraphs in the exam texts were numbered, and ChatGPT did not understand that either, says Oostendorp. As a result, he writes, it may have missed some crucial points: “It does not understand the structure of the exam.”

Most of the errors ChatGPT makes are therefore in the domain of “exam skills”, not in the domain of reading skills, which is what students are actually tested and graded on.

Oostendorp also says he would not be surprised if ChatGPT had been trained on exams from previous years, which would have made it easier to pass. In any case, quite a few of the answers were amazingly good.
