Last updated: May 26, 2023, 1:24 PM IST
OpenAI wants to build AI with the right ethics
OpenAI, the startup behind the wildly popular ChatGPT artificial intelligence chatbot, said Thursday it will award 10 equal grants from a $1 million fund for experiments in the democratic governance of AI.
(Reuters) – OpenAI, the startup behind the wildly popular ChatGPT artificial intelligence chatbot, said Thursday it will award 10 equal grants from a $1 million fund for experiments in democratic processes for deciding how AI software should be governed, including how to address bias and other concerns.
The $100,000 grants will go to recipients who provide compelling frameworks for answering questions such as whether AI should criticize public figures and what it should consider the “median individual” in the world, according to a blog post announcing the fund.
Critics of AI systems such as ChatGPT say they carry inherent bias because of the data used to train them. Users have found examples of racist or sexist output from AI software, depending on the prompts they give it. There is also growing concern that AI built into search engines such as Alphabet Inc’s Google and Microsoft Corp’s Bing could convincingly produce false information.
OpenAI, backed by $10 billion from Microsoft, is leading the call for regulation of AI. Yet it recently threatened to withdraw from the European Union over proposed rules that it says could be too onerous.
“The current draft of the EU AI Act would be over-regulating, but we have heard it’s going to get pulled back,” OpenAI chief executive Sam Altman told Reuters. “They are still talking about it.”
The startup’s grants would not go far toward funding AI research: salaries for AI engineers and others working in the red-hot sector easily top $100,000 and can reach $300,000 or more.
AI systems “should benefit all of humanity and be shaped to be as inclusive as possible,” OpenAI said in the blog post. “We are launching this grant program to take a first step in this direction.”
The San Francisco startup said the results of the funding could shape its own views on AI governance, though it said no recommendation would be “binding.”
(This story has not been edited by News18 staff and was published from a syndicated news agency feed – Reuters)