US President Joe Biden has issued an executive order establishing new standards for the safety and security of artificial intelligence (AI) in the country.
It forms part of the Biden-Harris Administration’s comprehensive strategy for responsible innovation.
The latest order builds on the previous actions taken by the president, including voluntary commitments from 15 leading companies for responsible AI development.
The new standards revolve around six key points. The first one requires developers of powerful AI systems to share safety test results and critical information with the US government. This is in accordance with the Defense Production Act and will ensure that AI systems are safe, secure, and trustworthy before companies make them public.
The second empowers the National Institute of Standards and Technology to set standards for AI safety testing. The Departments of Energy and Homeland Security will also jointly address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks.
The third is to protect Americans against dangerous AI-engineered biological materials, through new standards for biological synthesis screening that will be established as a condition of federal funding.
The fourth standard ensures protection against AI-enabled fraud and deception: the Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content.
The fifth seeks to establish an advanced cybersecurity program to develop AI tools for software and network security. Finally, the order calls for a National Security Memorandum directing further actions to ensure that the US military and intelligence community use AI safely, ethically, and effectively in their missions.
The executive order also addresses the privacy risks of AI, noting: “Without safeguards, AI can put Americans’ privacy further at risk. AI not only makes it easier to extract, identify, and exploit personal data, but it also heightens incentives to do so because companies use data to train AI systems.”
The president has also called on Congress to pass bipartisan data privacy legislation to protect all Americans, especially children. The order prioritises the development of privacy-preserving techniques, strengthens privacy-preserving research and technologies, evaluates and strengthens privacy guidance for federal agencies, and develops guidelines to assess the effectiveness of privacy-preserving techniques.
Biden also directed additional actions aimed at advancing equity and civil rights. These include providing guidance to prevent AI-driven discrimination in housing and federal benefits, addressing algorithmic discrimination, and ensuring fairness in the use of AI in the criminal justice system.
Other plans include standing up for consumers, patients, and students, such as by promoting responsible AI use in healthcare. The order also seeks to develop principles to mitigate AI’s impact on workers and to produce a report on AI’s potential labour-market impacts.
The order also highlights America’s lead in AI innovation, with more AI startups raising first-time capital in the country last year than in the next seven countries combined. To keep the US in that top spot, it proposes actions such as catalysing AI research across the United States through a pilot of the National AI Research Resource, a tool that will give AI researchers and students access to key AI resources and data.
Other proposed actions include expanding grants for AI research in vital areas like healthcare and climate change, and supporting small developers and entrepreneurs utilising the technology.
The Biden-Harris Administration also seeks to advance American leadership in the field abroad by collaborating with other nations on AI and accelerating the development of AI standards with international partners. In doing so, it aims to promote the safe, responsible, and rights-affirming development and deployment of AI abroad to solve global challenges, such as advancing sustainable development and mitigating dangers to critical infrastructure.
The executive order also directs responsible and effective government use of the novel technology. This is to be done by issuing guidance to federal agencies on AI use, improving AI procurement and deployment across federal agencies, and accelerating the hiring of AI professionals in government.
The order concludes with a commitment to work with allies on an international framework for AI development and use. Over the past several months, the administration has already consulted on AI governance frameworks with multiple countries and international entities, including Canada, the European Union, Germany, India, Israel, Nigeria, the Philippines, Singapore, South Korea, and the UAE.