On Wednesday, the European Parliament will vote on new legislation to regulate the use of artificial intelligence. For tech companies, the stakes are high. Last month, OpenAI CEO Sam Altman, famous for ChatGPT, threatened to pull his software out of Europe if the EU imposes certain restrictions. What he fears is what Americans call the ‘Brussels Effect’: first American tech companies comply, under loud protest, with strict rules imposed by Europe, and then other countries introduce similar laws. This happened with the privacy law the European Union adopted in 2016. With the Digital Markets Act and the Digital Services Act of 2022, Europe is also at the forefront of technology regulation. What about the new AI Act?
1 What will be discussed in the European Parliament on Wednesday?
The so-called European ‘AI Act’: a comprehensive bill that regulates artificial intelligence in all sorts of ways. The most dangerous forms of AI would be banned outright in Europe, for example the social scoring systems used by the Chinese government. Other AI systems face stricter rules on matters such as transparency, the use of personal data and energy consumption.
The European Commission already submitted the AI Act in 2021, which means some provisions are out of date. MEPs are therefore adding stricter rules to the proposal for large AI systems such as ChatGPT, which has been widely used for text generation since last year. Before such a system is allowed onto the European market, an AI company must carry out a risk analysis of the application in practice and be transparent about the data it uses.
It remains a sensitive question whether MEPs will vote this Wednesday for a complete ban on real-time (instant) facial recognition. Left-wing political groups in particular support such a ban, while Christian Democrats fear it would restrict law-enforcement agencies.
Wednesday’s vote is not the last word on the AI legislation: negotiations with the European member states follow first. Once the law is finalized later this year, Europe will be a global leader in AI regulation. Even then, implementation will take time: the AI Act will only actually apply two years from now. That is why Brussels is working quickly with big tech companies like Google and Microsoft on an ‘AI Pact’, in which voluntary agreements are signed. In the United Kingdom, American companies have already given the government insight into the performance of their AI models.
2 Has Europe learned anything from all the debate surrounding big tech?
Yes: despite all the promises, voluntary commitments and self-regulation usually do not work. For that reason, Brussels has already introduced far-reaching privacy rules and is currently implementing legislation intended to break the dominance of the tech giants. Tech companies will soon be required to allow competition on their platforms and to remove illegal content more rigorously. That experience with technology regulation now strengthens European politicians in their efforts to rein in AI.
Brussels also knows how hard it is to take on big companies. So far, breaches of the privacy law have led to only a few truly large fines, the most recent being 1.2 billion euros for Meta.
3 Isn’t everyone trying to bend the rules to their liking?
Everyone agrees on one thing: governments must hurry with regulation to keep AI on the right track. The lobbying to influence policymakers is in full swing.
In March, around a thousand scientists and business leaders called for a temporary ‘development pause’ in AI: developments were moving so fast, they argued, that it was time to hit the pause button. A new letter followed in May, this time also signed by OpenAI CEO Sam Altman, calling for the risks of AI to be made a “global priority.” And on Tuesday, a group of Dutch scientists, thinkers and artists launched a petition calling on the Netherlands to speed up AI legislation.
Tech companies themselves say they need rules to keep their technology, and their competitors, in check. In this way they hope to steer the debate; after all, the EU is trying to regulate a technology that is still developing. The makers of the software want as much freedom as possible to perfect their models, but they cannot predict the social consequences if artificial intelligence is given free rein.
4 How do superpowers China and the US view new forms of AI?
China is working on rules for ‘deep synthesis service providers’: companies that generate texts, images or videos with AI. In April, China’s cyber regulator issued a provisional set of guidelines. The data the models are trained on must be “objective, diverse and accurate”, and the content generated by the AI must fit the worldview of the Chinese state. In this way the government retains the option of censoring what the AI produces.
China places all responsibility for synthetic media on the companies that develop the AI models. Everything the AI produces must carry a label. AI providers must also build mechanisms to combat ‘rumours’ and fake news. In addition, citizens must register under their real name in order to use AI services.
OpenAI, Google, Meta: many of the major AI providers are based in the US. The US has yet to adopt binding restrictions and is waiting to see what Europe decides on AI regulation. Last year, the White House did present a blueprint for an ‘AI Bill of Rights’ to protect civil rights against dangerous AI applications, and in May the Biden administration invested 140 million dollars of public money in AI research projects. To avoid losing the technology race with China, the emphasis in Washington is on developing ever more powerful AI in the US.