‘AI brings a great responsibility’

Technology | 24 Mar ’23, 12:27 | By Myrtle Koopman

Google waited a long time to publicly launch its own chatbot powered by artificial intelligence (AI). According to Martijn Bertisen, director of Google Netherlands, this has to do with the responsibility involved in developing AI. ‘You need to not only harness the advantages, but also understand the disadvantages and limit them as much as possible, because AI also carries risks.’

Earlier this week, Google made its own AI chatbot ‘Bard’ available to users in the US and the United Kingdom. (ANP/REX via Shutterstock)

Earlier this week, Google made its own AI chatbot ‘Bard’ available to users in the US and the United Kingdom. OpenAI, backed by Microsoft, had already launched the chatbot ‘ChatGPT’, but according to Bertisen you don’t always have to be first. ‘Before Google came up with a search engine, there were already many of them. But we did it differently.’

The missed boat

Google Netherlands says the company has been working with artificial intelligence for a long time. ‘The idea that we at Google have missed the boat on AI isn’t right; we actually built a good part of that boat ourselves. The T in ChatGPT stands for Transformer, an architecture that originated in a research paper published by Google.’ Many of Google’s services, such as the search engine, YouTube and Google Maps, already use AI to create a better user experience.

‘It’s important to harness the benefits of AI, but also to understand the downsides’

Martijn Bertisen, Director of Google Netherlands

According to Bertisen, the fact that Google waited so long to introduce new products driven largely by AI has to do with responsibility. ‘We reach billions of users around the world every day, so it’s important that you not only leverage the benefits of AI, but also understand and limit the downsides.’

Fire

Bertisen recalls that Google CEO Sundar Pichai once compared AI to fire and electricity. ‘It can bring humanity a great deal of good, but first we had to learn how to handle it.’ That is why Google chose to take its time introducing the AI chatbot ‘Bard’.

‘We first tested internally with 80,000 employees and then externally with 10,000 experts. We know things will still go wrong, and we have to learn from that.’ According to Bertisen, it is difficult for an organization to strike a balance between an ambitious and, at the same time, responsible use of technology.

Weapons

In 2018, Google established a set of principles to ensure it handles AI responsibly. ‘For example, it is important that AI does not create stereotypes or discriminate. In addition, users’ privacy must be guaranteed and their data must be respected. We build technology based on those principles.’

‘Google doesn’t want to contribute to AI that could be used for weapons development’

Martijn Bertisen, Director of Google Netherlands

Google is wary of developing AI for facial recognition because it could enable a surveillance society. In addition, according to Bertisen, Google does not want to contribute to AI that could be used in the development of weapons. ‘That too has to do with responsibility: you shouldn’t provide technology that people can use to do bad things.’
