OpenAI CEO Sam Altman has threatened to withdraw the company’s services from the EU if it cannot comply with the bloc’s AI Act. Altman is particularly concerned about the proposed “high-risk” systems designation, which would apply to OpenAI’s ChatGPT and GPT-4 models, and about new rules for generative AI that would require companies to disclose any copyrighted material used in training. Altman criticized the draft legislation as “over-regulating,” but he has since backtracked on his threat. The tendency of tech firms to threaten product withdrawals in response to regulation without following through is increasingly seen as empty rhetoric.
As reported in a recent article by The Next Web, OpenAI CEO Sam Altman has reversed his threat to withdraw his company’s services from Europe in response to the bloc’s impending AI Act. The U-turn came just a day after Altman warned that OpenAI could “cease operating” in the EU if it couldn’t comply with the new regulations.
Altman’s initial threat was prompted by concerns about the proposed rules for “high-risk” AI systems, which would subject OpenAI’s ChatGPT and GPT-4 models to extra obligations before entering the market. He also took issue with the legislation’s requirements for generative AI companies to disclose any copyrighted material used to train their systems.
Citing these concerns, Altman criticized the proposed AI Act as “over-regulating.” His threat, however, drew a sharp response: some lawmakers likened it to blackmail, pointing out that other tech firms have made similar threats in the past without following through.
Indeed, Google has threatened to pull its search engine from Australia, WhatsApp has threatened to block its service in the UK, and Meta has threatened to shut down Facebook and Instagram in Europe on multiple occasions. Microsoft has even threatened to remove Windows from unruly US states. So far, none of these threats have been fulfilled.
Altman’s reversal is unlikely to be the last time a tech boss backtracks on a warning to regulators. But this empty rhetoric is starting to sound like the boy who cried wolf. If tech firms want to be taken seriously, they need to either follow through on their threats or engage in a more constructive dialogue with regulators.
Despite Altman’s initial resistance to the proposed AI Act, he ultimately tweeted that OpenAI is “excited to continue to operate” in Europe and has “no plans to leave.” This suggests that the company is willing to work with regulators to ensure that its AI systems are safe and compliant with the new rules.
The debate over AI regulation is likely to intensify as the technology becomes more integrated into our daily lives. While it’s important to strike a balance between innovation and safety, it’s equally important for tech firms to engage in good faith with regulators rather than resorting to empty threats. Only then can we create a regulatory framework that benefits everyone.