OpenAI CEO Sam Altman said on Friday that the company has no plans to leave Europe, reversing a threat he made earlier in the week to pull out of the region if complying with upcoming artificial intelligence laws became too difficult. The EU is working on what could be the world’s first set of rules to govern AI, and on Wednesday Altman described the current draft of the EU AI Act as “over-regulatory.”
“We are excited to keep running our business here, and we have no plans to leave,” Altman said in a tweet on Friday. His earlier threat to leave Europe had drawn criticism from EU industry chief Thierry Breton and a number of other lawmakers. Altman spent the past week traveling across Europe, meeting top politicians in France, Spain, Poland, Germany, and the UK to discuss the future of AI and the progress of ChatGPT.
He described his tour as a “very productive week of conversations in Europe about how to best regulate AI!” OpenAI has faced criticism for not disclosing the data used to train its latest AI model, GPT-4, citing the “competitive landscape and safety implications” as its reasons for withholding the information. While debating the draft AI Act, EU lawmakers added provisions that would require companies using generative tools such as ChatGPT to disclose any copyrighted material used to train their systems.
“These provisions are mostly about transparency,” Dragos Tudorache, a Romanian member of the European Parliament who is leading the drafting of the EU proposals, told Reuters on Thursday. “This makes sure that both the AI and the company building it are trustworthy. I can’t think of any reason why a company wouldn’t want to be open.”
EU lawmakers agreed on the draft of the act earlier this month. The final details of the bill will be worked out later this year by the member states, the European Commission, and Parliament. ChatGPT, the Microsoft-backed AI chatbot, has opened up new possibilities for AI, but fears over its capabilities have stirred both excitement and alarm and brought it into conflict with regulators.
In response to Altman’s tweet on Friday, Dutch MEP Kim van Sparrentak, who has worked closely on the draft AI rules, told Reuters that she and her colleagues must stand up to pressure from tech companies. “I hope we stay strong, and we’ll make sure these companies have clear rules to follow about transparency, safety, and environmental standards,” she said. “Europeans don’t do things like voluntary codes of conduct.”
OpenAI had its first run-in with regulators in March, when the Italian data watchdog Garante took the app offline in Italy, accusing OpenAI of breaching European privacy rules. ChatGPT returned online after the company introduced new privacy protections for users.
Sergey Lagodinsky, a German MEP who also worked on the draft AI Act, told Reuters: “I’m glad to hear we don’t have to use threats and ultimatums. We all face problems, but the European Parliament is not an enemy of AI. It is a partner.” Separately, OpenAI said on Thursday that it will award 10 equal grants from a $1 million fund for experiments in determining how AI software should be governed, a process Altman described as figuring out “how to democratically decide on the behavior of AI systems.”