From: Discourse Magazine <[email protected]>
Subject: A New Executive Order Establishes Some Guideposts for AI Strategy
Date: November 3, 2023 10:01 AM

On Monday, the Biden administration debuted a landmark executive order on artificial intelligence. Dozens of pages long, the order is sweeping in both scope and detail. Most if not all federal agencies will be asked either to write new AI safety rules or to actively consider integrating AI into their systems to (hopefully) improve how the government does business. While it’s too early to tell how this order will be implemented, and there hasn’t been much time to digest the expansive new regulation, a few initial takeaways are already evident.
Biden Is Serious
Previous Biden administration executive actions have often been “messaging orders,” designed to plant a flag in the ground and articulate beliefs and principles but lacking substance beneath the veneer. This order is the opposite. Many of the asks are specific, impactful and wide-reaching: notably the new AI reporting requirements; “know your customer” rules for foreign actors operating on U.S. servers; requested rules to crack down on algorithmic bias in healthcare, housing and other areas; biosecurity regulation; immigration reforms; and possible financial stability regulation. Other provisions suggest that this order is not only an attempt to “go big” on AI but also a sign that AI policy is becoming a core administration priority.
Significantly, the order redirects several of the administration’s earlier monetary commitments toward AI. Previously, the administration had dedicated the Technology Modernization Fund, a government IT slush fund of sorts, to needs such as implementing the 21st Century IDEA Act, a not-yet-fully-implemented law that requires the complete digitization of government forms. Now that act is clearly taking a backseat, as the fund going forward will prioritize AI “and particularly generative AI.” Government digitization is out; generative AI is in.
The Technology Modernization Fund redirection is only one example. Many other preexisting priorities are being trumped by this new push into AI, and it’s clear the administration is trying to make AI policy one of its legacy issues. No doubt we will see more commitments redirected toward this effort as implementation of the executive order begins and details emerge.
National Security Takes Center Stage
Perhaps the centerpiece of the order is the activation of the Defense Production Act (DPA), a statute intended to provide “national mobilization capacity to bring the industrial might of the U.S. to bear on broader national security challenges.” In the past week, many have exaggerated the “threat” the activation of this statute poses, often citing its Korean War origins and the DPA’s “commandeering” authorities. These authorities require companies to accept government manufacturing contracts in times of crisis, but they were not invoked in Monday’s order.
Many of these DPA concerns are either misleading or needlessly provocative. First, the current DPA is not a Korean War-era law; it was reauthorized in amended form in 2018. Second, the DPA is in reality a multipurpose, commonly used statute with a range of powers beyond the commandeering authority. For instance, DPA Title III is the source of the government’s power to provide direct loans for defense projects, make purchase commitments and provide loan guarantees.
Per the Department of Defense, there are currently 19 DPA Title III projects in progress, authorized under both the Biden and Trump administrations. Invoking the DPA is a regular practice, and the statute is currently being used to develop a range of technologies, from hypersonic missiles to large-capacity batteries and next-generation unmanned aerial vehicles.
In the AI executive order, President Biden specifically invokes the DPA’s Title VII industry assessment powers to require regular reporting from AI companies on a range of information, including the AI technologies they own and the results of safety assessments. Such action is not beyond the pale. In 2012, the Obama administration used these assessment powers in a similar manner, requiring telecommunications companies to report use of foreign hardware in an effort to crack down on cyberespionage. Since then, Congress has twice reauthorized this law and these provisions, affirming congressional intent for this type of application.
The president’s activation of the DPA is thus clearly backed by precedent and by Congress. Still, that’s not to say the use of this power doesn’t raise concerns. I personally believe a degree of industry assessment is indeed needed for AI, though the invocation I would favor is more tailored, focused less on R&D and more on AI diffusion. I also have reservations about the significant privacy risks this power would create if overapplied, and about its potential chilling effects on industry. In the executive order’s current form, we don’t have a full picture of the scope of application; to assess these risks properly, we will have to wait until implementation details are known.
The key takeaway here is that the order’s activation of the DPA is not absurd, unprecedented or an abuse of power. Rather, the administration is conceiving of AI technology as a new defense-industrial resource to be both protected and cultivated. To the Biden administration, AI is not just an industry; it’s an asset. By invoking the DPA, it is assessing the size, shape and scope of that asset. How good is our AI ecosystem? How does it compare with China’s? Are we competitive?
A Possible Regulatory Future
Those who are worried about misuse of the data collected via the DPA should consider that the executive order’s main concern is national defense. Under the Department of Commerce, the order’s new program of DPA implementation will be enforced by the Bureau of Industry and Security. As the name suggests, this organization is not a consumer watchdog; it’s a national security agency. More specifically, half of the bureau’s dual mandate is “promoting continued U.S. strategic technology leadership.” Given this mission, use of the data collected by DPA powers will no doubt be tilted toward the interests, knowledge base and priorities of the bureau: informing national security, assessing technological competitiveness and cultivating AI assets for defense purposes.
Make no mistake, this data collection and reporting could inform new AI regulation—but it will be a very specific type of regulation with very specific goals. According to Undersecretary of Commerce for Industry and Security Alan Estevez, who will be leading the effort, “The Bureau of Industry and Security stands ready to develop the regulations and procedures mandated by today’s executive order that will enhance safety and protect our national security and foreign policy interests” (emphasis added). The executive order’s further emphasis on chemical, biological, radiological and nuclear threats confirms that this data reporting is not an example of run-of-the-mill consumer welfare regulation. Rather, the reporting requirement is concerned with WMDs, cybersecurity and other major threats to national security.
While this information may indeed find additional uses in investigation and consumer-facing regulation, data tends to have a hard time escaping the national security sector. Once it’s collected, it will be classified. Even if declassified, the data won’t be disseminated among agencies until they decide upon information-sharing agreements. Diffusing information like this tends to be difficult, and as a result, consumer welfare regulations based on this reporting are unlikely in the near future.
Diffusion on the Mind
What has me most excited about Biden’s executive order, though, is its clear emphasis on AI diffusion throughout the federal government. Originally, I had been expecting it to focus on taming this technology and putting the brakes on its overapplication. While these elements certainly exist in the order, the administration clearly sees AI as lightning that it wants to capture in a bottle. The order contains countless provisions devoted to “AI capacity”; the administration is embarking on a whole-government push to educate, bring in talent and assess IT with the aim of creating a modern, AI-driven government that (hopefully) is more efficient, approachable and fair.
On AI.gov we can already see the tip of this iceberg: the administration has begun asking citizens to join what it calls the “National AI Talent Surge,” an effort backed by new job postings, eased clearance requirements for both citizens and (uniquely) noncitizens, and prioritized hiring. All these efforts are encouraging and necessary. The best way to create AI abundance and compete with China is to go big on AI diffusion and reap AI’s potential. This order is a great first step.
Unfortunately, as is often the case in the public sector, competing priorities may hinder AI’s great potential. While many provisions in the order are devoted to capacity building, many more are devoted to standards, regulations and ethical limits. Limits and standards are certainly needed, but so too is caution in imposing new regulations. If agencies overregulate procurement and use of AI, the red tape could slow down and impede this rollout, just as it did with the federal government’s digitization efforts.
On Wednesday, the White House Office of Management and Budget released draft implementation guidance, giving us an early taste of the rules that will guide this effort. While it’s hard to say how extensive these rules will be, at first glance one can easily imagine potential excesses. To adopt AI systems that will “impact the rights and safety of the public,” a category that has yet to be fully defined, agencies must do all of the following and more:
Conduct AI impact assessments and independent evaluations.
Test the AI in a real-world context.
Identify factors contributing to algorithmic discrimination and disparate impacts.
Monitor deployed AI.
Train AI operators.
Ensure that AI advances equity, dignity and fairness.
Consult with affected groups and incorporate their feedback.
Notify and consult with the public about the use of AI and agencies’ plans to achieve consistency with the proposed policy.
Notify individuals potentially harmed by a use of AI and offer avenues for remedy.
Clearly, system approval will require a lot of paperwork.
Hidden beneath the simple text of these requirements will no doubt be a slog of additional rules, forms and processes. If such rules are too onerous, the implementation costs, opportunity costs and morale costs to employees who simply want to complete their work can disincentivize action and strangle the potential benefits of AI technologies.
If we want AI capacity, developers and staff will need the freedom to explore, innovate, create and implement. Rules must exist, but they must also be light and permissive enough to harness AI’s full potential. Thankfully, some early signs suggest an understanding of this balance. The administration explicitly states that “agencies are discouraged from imposing broad general bans or blocks on agency use of generative AI” and that agencies should provide generative AI services to employees (with limits, of course) for “purposes of experimentation.” There is little Luddism to be found here.
While the Biden administration has reasonable reservations, it clearly wants to make use of AI technology. Work will be needed, however, to tame bureaucratic instincts that often value inflexible “process over progress.” Time will tell whether Monday’s executive order will be implemented in ways that address the administration’s national security concerns while still allowing AI innovation to thrive.
