Yet another group has raised ethics and safety concerns over emerging artificial intelligence technologies, saying that companies and governments alike should work hard to ensure the technology is safe to use.
In a paper published on Tuesday, some of the world's top AI researchers suggested that these entities should dedicate at least one-third of their AI research and development funding to ensuring that the systems they create are used ethically and safely.
The paper was issued just one week before London hosts an international AI Safety Summit. It lists measures that the researchers believe AI companies and governments must take to address the risks AI presents.
It was written by more than a dozen of the top academics in AI, as well as one Nobel laureate and three winners of the Turing Award. It states:
“Governments should also mandate that companies are legally liable for harms from their frontier AI systems that can be reasonably foreseen and prevented.”
As of now, there are no broad-based regulations that focus specifically on the safety of AI. The European Union has put forth proposals for legislation that would govern AI, but it hasn't officially become law yet, as lawmakers still need to agree on many of the issues surrounding it.
Yoshua Bengio, one of the three researchers known as the godfathers of AI, commented:
“Recent state-of-the-art AI models are too powerful, and too significant, to let them develop without democratic oversight. It (investments in AI safety) needs to happen fast, because AI is progressing much faster than the precautions taken.”
Other authors of the paper include Yuval Noah Harari, Dawn Song, Daniel Kahneman, Andrew Yao and Geoffrey Hinton.
AI technologies have been around for a while now, but they only really burst onto the public scene recently, when OpenAI launched its generative AI models. The most famous of the resulting programs is ChatGPT.
Since ChatGPT's launch, OpenAI has drawn concern from many people, including some prominent figures in the tech industry.
For instance, Elon Musk – the CEO of Tesla and SpaceX, and the owner of the X social media platform – has warned that AI carries significant risks. He, along with other prominent tech leaders, at one point this spring called for a six-month pause on developing any further AI systems.
Many companies in the industry have said that regulating AI technologies in this way would be prohibitive for them, arguing that they would face very high compliance costs as well as disproportionate liability risks.
Stuart Russell, a British computer scientist, anticipated the industry's objection:
“Companies will complain that it’s too hard to satisfy regulations – that ‘regulation stifles innovation.’ That’s ridiculous. There are more regulations on sandwich shops than there are on AI companies.”