OpenAI Proposes International Regulatory Body For AI


OpenAI, the nice people who brought you ChatGPT, has proposed the creation of an international regulatory body for AI, similar to the International Atomic Energy Agency. This comes in response to the rapid pace of AI innovation and the potential risks associated with it.

The proposed body would oversee AI development efforts that exceed a certain capability threshold. It would inspect systems, require audits, test for compliance with safety standards, and place restrictions on degrees of deployment and levels of security. While it may not have the power to shut down a rogue AI, it could establish and track international standards and agreements.

OpenAI also recommends tracking compute power and energy usage in AI research, which would provide objective measures for monitoring and auditing AI development. (Because only frontier-scale efforts would cross the threshold, this approach would not hinder innovation at smaller companies.)
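To make the compute-tracking idea concrete, here is a minimal sketch of how a reporting threshold based on training compute might be checked. It uses the common rule of thumb that dense transformer training costs roughly 6 × parameters × tokens FLOPs; the threshold value, function names, and example figures below are purely illustrative assumptions, not anything OpenAI has specified.

```python
# Illustrative sketch of a compute-based reporting threshold.
# The ~6 * parameters * tokens FLOP estimate is a widely used rule of thumb
# for dense transformer training; the threshold below is hypothetical.

def estimate_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate total training compute for a dense transformer (~6*N*D FLOPs)."""
    return 6.0 * n_parameters * n_training_tokens


def exceeds_reporting_threshold(flops: float, threshold_flops: float = 1e25) -> bool:
    """Return True if a training run crosses the (hypothetical) audit threshold."""
    return flops >= threshold_flops


if __name__ == "__main__":
    # Example: a 70-billion-parameter model trained on 2 trillion tokens.
    flops = estimate_training_flops(n_parameters=70e9, n_training_tokens=2e12)
    print(f"Estimated training compute: {flops:.2e} FLOPs")
    print("Requires audit:", exceeds_reporting_threshold(flops))
```

The appeal of a measure like this is that compute (and the energy it consumes) is observable and auditable in a way that model "capability" is not, which is presumably why OpenAI points to it.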

The call for regulation is a step in the right direction, but the specifics of how such a mechanism would be designed remain unclear; the problem set needs to be defined more precisely first. I applaud OpenAI's initiative, but it doesn't seem to address the unintended consequences of superintelligence; it simply asks for non-binding oversight of the kinds of research OpenAI itself is engaged in. The company raises some important points, but if what we need is regulation (as opposed to laws permitting or outlawing specific use cases), the devil is in the details.



Disclosure: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it.
