It’s hard to go anywhere and not hear people talking about their fears about what widespread deployment of artificial intelligence may mean, not just to the economy, but to our lives. Are we about to be taken over by machines that are smarter than we are? Probably not, but in an effort to rein in AI gone wild, the Biden Administration issued an Executive Order on October 30, 2023 that attempts to manage — if not directly regulate — the technology to a whole new level . The primary goal of the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” is to continue to improve AI safety and security - -not to solve every problem or address every issue associated with AI in the wild. The October 2023 order represents a step forward from the White House’s last action in August 2023, when 15 major AI providers, including tech heavyweights Google, Microsoft, and Open AI, agreed to voluntarily (a word that you will see again in this piece) work, over the course of a two-year competition, make AI more secure.
Specifically, one of the goals of the competition is to improve “watermarking,” which is one way of validating content that has been generated by AI by embedding a digital signal into an AI’s output so the generated end product is recognizable as AI-generated. The executive order, which was clearly drafted with the guidance of AI sophisticates, also positively reflects a friendly relationship with the technorati. The timing of the executive order is also interesting, as it came out just two days prior to the UK’s AI Safety Summit, thus reflecting the U.S. staking its claim as a leader in the development of AI policy and management.
The October 2023 offers encouragement for compliance with the voluntary (there’s that word again) requirements originally outlined in the August 2023 document. More specifically, it reflects a serious interest and directs attention (note that I didn’t use the word “requires”) towards increased transparency from AI companies about how AI processes work—that is, how the “secret sauce” is made. According to the White House Fact Sheet, the executive order further directs the U.S. Department of Commerce to develop guidance for labeling AI-generated content. It is hoped/expected that AI companies will use this guidance (again, note the word “guidance”) to develop labeling and watermarking tools that the White House hopes federal agencies will adopt. “Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world.”
However, the executive order does not address every issue associated with AI deployment. First, it represents guidance, not federal law. This means that this order sets out guidelines the federal government would like to see companies adopt, but there are no legal consequences if companies choose not to do so. Secondly, the executive order also is silent on the issue of compliance by industry players or government agencies’ use of these technologies and tools. Third, the executive order establishes a new raft of standards for labeling AI-generated content going forward. Nothing has been said about compliance for tools and processes that are already out there.
What is interesting is that the executive order creates thresholds for notification of the federal government when certain thresholds of data volumes are met, and/or when vulnerabilities pose risks to “to national security, national economic security, or national public health and safety.” The authority for these represents a novel use of the Defense Production Act, a law dating back to 1950 designed to give special powers to the Executive Branch to direct private companies to meet specifications and prioritize orders from the federal government to speed production.”
The executive order encourages the sharing of safety test results for new AI models with the US gov if the tests show that the technology could pose a risk to national security. It also establishes best practices and standards rather than enforceable actions, and lastly, it establishes “red team” testing, where tests are developed to intentionally expose vulnerabilities including biases.
What isn’t included, for now, are compliance dates, enforcement mechanisms, and established penalties for non-compliance. Also, while an Executive Order has teeth, it can always be undone by a future President. Legislation would likely be the most permanent “fix,” or step in the right direction, but securing passage of any legislation is extremely unlikely in a highly polarized Congress.
Nonetheless, the executive order identifies and tackles a number of major issues in AI deployment. It’s not unusual that the technology is far more advanced and sophisticated than the regulation designed to manage it. President Biden’s executive order is a step in the right direction for sure.
In August, the New York Times ran a piece called “A.I Revolution is Coming. Just When Is Hard to Say.” The point of the article is that regardless of the technology, there has always been a gap between the development of a new world-changing technology and its widespread deployment. AI is increasingly present in what we see, do, manufacture, and sell. But it is not (yet) ready to take over the world. The executive order issued this week attempts to put some guardrails in place as enterprise, government, and individual reliance on artificial intelligence advances.