“Regulation of artificial intelligence is critical,” Sam Altman, CEO of technology firm OpenAI, told US senators this May during hearings on artificial intelligence (AI). Many tech experts and laypeople alike agree, and the clamor for legal guardrails around AI is rising. This year, the European Union is expected to pass its first sweeping AI laws after more than two years of debate. China already has AI regulations in place.
In practice, however, people still disagree about how risky AI is and what actually needs to be limited. While California-based OpenAI and other firms have publicly called for more oversight, those companies have resisted some of the controls proposed by the EU, pushing for international advisory bodies and voluntary commitments rather than new laws. Meanwhile, the technology itself is a constantly moving target.
So far, the three key players (the United States, the EU and China) have each taken a different approach, says Matthias Spielkamp, executive director of AlgorithmWatch, a Berlin-based nonprofit that studies the impact of automation on society.
• The EU is highly cautious: its upcoming AI law aims to ban some uses and allow others, while imposing due-diligence requirements on AI companies.
• The United States, home to many leading AI firms, has so far taken the most hands-off approach.
• In China, the government tries to balance fostering innovation with maintaining tight control over corporations and speech.

And everyone is trying to work out how much regulation AI specifically needs, given that existing laws could already address some of its risks.
EU: risk regulation
This June, the European Parliament passed the AI Act, a giant piece of legislation that would categorize AI tools according to their potential risk. Although the law may still change, because it must be approved by all three EU voting bodies (the Parliament, the European Commission and the Council of the EU), the current proposal would ban software that poses an “unacceptable risk”, which the act defines as covering most uses in predictive policing, emotion recognition and real-time facial recognition.
Many other uses of AI software would be allowed, but with requirements that vary according to risk. These include tools that guide welfare and criminal-justice decision-making, as well as tools that help businesses to select potential employees. For these, the law would require developers to demonstrate that their systems are secure, efficient, privacy-compliant, transparent, user-friendly and non-discriminatory.
For “high-risk” uses, which include software for law enforcement and education, the law requires detailed documentation, automatic logging of every use of the AI system, and testing of the systems for accuracy, security and fairness.
US: “appearance of activity”
Unlike the EU, the United States has no extensive federal laws on artificial intelligence – or significant data protection rules.
In October 2022, the White House Office of Science and Technology Policy (OSTP) released the Blueprint for an AI Bill of Rights, a white paper outlining five principles to guide the use of AI, as well as potential regulations. The paper says that automated systems should be safe and effective, non-discriminatory and transparent, and should protect people’s privacy: people should be notified when a system makes decisions for or about them, they should be told how the system works, and they should have the option to opt out or to have a human step in.
“Philosophically, [the EU AI Act and the US blueprint] are very similar in identifying the goals of AI regulation: to ensure that systems are safe and efficient, non-discriminatory and transparent,” says Suresh Venkatasubramanian, a computer scientist at Brown University in Providence, Rhode Island, who co-authored the blueprint when he was assistant director for science and justice at the OSTP. Although US ideas about implementation differ slightly from the EU’s, “I would say they agree on a lot more than they disagree on,” he adds.
It can be helpful for countries to outline their vision, says Sarah Kreps, director of the Technology Policy Institute at Cornell University in Ithaca, New York, “but there’s a huge gap between the plan and the implementable legislation.”
China: maintaining social control
China has issued the most AI legislation to date, although it applies to AI systems used by companies, not by the government. A 2021 law requires businesses to be transparent and unbiased in their use of personal data for automated decisions, and to allow people to opt out of those decisions. And 2022 rules on recommendation algorithms from the Cyberspace Administration of China (CAC) say that these must not spread fake news, make users addicted to content or promote social unrest.
In January, the CAC began enforcing rules issued in 2022 to deal with deepfakes and other AI-generated content. Service providers that synthesize images, video, audio or text must authenticate users, obtain consent from deepfake targets, watermark and log outputs, and counter any misinformation produced.
And the CAC will begin enforcing additional regulations this month targeting generative tools like ChatGPT and DALL-E. These say companies must prevent the spread of false, private, discriminatory or violent content, or anything that undermines China’s socialist values.
“On the one hand, [the Chinese government] is very motivated to implement social control. China is one of the most censored countries on the planet. On the other hand, there are genuine desires to protect individual privacy” from corporate invasion, says Kendra Schaefer, head of technology policy research at Trivium China, a Beijing-based consultancy that briefs clients on Chinese politics. The CAC did not respond to Nature’s request for comment for this article.
Reference: https://www.nature.com/articles/d41586-023-02491-y