With the rest of the world placing limits on Artificial Intelligence, President Joe Biden looks to do the same with his latest executive order.
To guide the rapid growth of artificial intelligence, President Biden’s executive order has placed “guardrails” on A.I. to protect the American people. While the order was signed on Oct. 30, its provisions take effect on a rolling schedule of 90 to 365 days after signing.
The creation of this order dates back to June, when President Biden met with a group of technology leaders in San Francisco, Calif., including Tristan Harris, executive director of the Center for Humane Technology, and Jim Steyer, CEO of Common Sense Media.
Since A.I. can create images, voices and text that appear to be human-made, the discussion focused on how best to make A.I. a tool for growth and national security. Although further details of the meeting have not been shared, the conversation seems to have helped build the framework of Biden’s executive order.
The Defense Production Act, the National Institute of Standards and Technology (NIST) and the Department of Commerce are the key tools in the order.
Under the Defense Production Act, tech companies and A.I. developers must share test results and any other information the government needs; NIST will create new safety procedures to be applied before A.I. tools are released to the public; and the Commerce Department will develop standards for watermarking A.I.-generated content so the public can differentiate and authenticate it.
The actions of the executive order will roll out over 90 to 365 days. These are a few of the upcoming changes:
Within 90 days, companies developing A.I. will have to provide the federal government with reports, records and information about the training and development of any physical or cyber model. NIST will test models for possible exploits and vulnerabilities and assess the quantity of computing power used. The Secretary of Commerce will have access to any transactions, domestic or international.
Within 120 days, the Secretary of Defense, the Assistant to the President for National Security Affairs and the Director of the Office of Science and Technology Policy (OSTP) will work with the National Academies of Sciences, Engineering and Medicine to study the biosecurity risks of A.I.
Within 180 days, the Secretary of Commerce will oversee any transactions, domestic or international. U.S. Infrastructure as a Service (IaaS) providers will verify the identities of foreign buyers and resellers by obtaining addresses, email addresses and other personal information.
Within 240 days, the Assistant to the President for National Security Affairs, the Director of the Office of Management and Budget (OMB) and the Secretary of Homeland Security will collaborate on possible regulations for A.I., as well as on shaping it as a tool for homeland security.
More actions in the coming days are detailed in the White House’s briefing room statement.
One possible reason Biden took executive action on A.I. is the speed with which other countries have moved on the fast-growing technology. The European Parliament is classifying A.I. systems into categories of risk. EU member states will be the ones to enforce rules and regulations, as well as remove apps from the public market.
In Brazil and China, developers of A.I. technology will be held accountable for any public harm. In Brazil, an A.I. development company that infringes on the newly written users’ rights will be liable for damages. In China, A.I. developers will also be punished if their technology affects another’s intellectual property or causes any sort of damage.
Another reason for the president’s executive action is the impact A.I. has had on social media. From “deep fakes” and false images to increased racial and social tension and damage to the minds of America’s youth, immediate action seemed necessary to the president.