White House AI Memo Promises to Balance National Security Interests with Privacy and Human Rights
Gabby Miller, Ben Lennett / Oct 24, 2024

On October 24, 2024, the Biden Administration issued a “Memorandum on Advancing the United States’ Leadership in Artificial Intelligence; Harnessing Artificial Intelligence to Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of Artificial Intelligence.” The purpose of the document, according to the administration, is to “galvanize federal government adoption of AI to advance the national security mission, including by ensuring that such adoption reflects democratic values and protects human rights, civil rights, civil liberties and privacy.”
The memo fulfills a directive set forth in the Biden Administration’s Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which was released in October 2023. The directive called for a memorandum to “address the governance of AI used as a component of a national security system or for military and intelligence purposes,” including taking into “account current efforts to govern the development and use of AI for national security systems.” “National security system” is a term that covers a range of applications in defense and intelligence.
The memo was developed and released in the context of growing concerns among policymakers about the development of artificial intelligence (AI) and its impact on national security, including competition with China. The document reflects these concerns and includes numerous references to democratic principles and the threat of digital authoritarianism. Moreover, a senior administration official offered in a press briefing about the memo that the government must foster US leadership in AI, and “a failure to do this, a failure to take advantage of this leadership and adopt this technology, we worry could put us at risk of a strategic surprise by our rivals, such as China.”
At the same time, the memo seeks to balance national security objectives with human rights, civil rights and civil liberties that could be harmed through the use of AI technologies and systems by the military and intelligence services. Such concerns are not just limited to the national security context, however, and were identified by the administration in the 2023 AI Executive Order as a risk for all federal agencies, requiring the Office of Management and Budget (OMB) to develop guidance for federal agencies to strengthen AI governance, advance responsible AI innovation, and manage risks from the use of AI.
Below is a summary of the memo's policy and objectives.
Section 1. Policy
The national security memo’s aim is to not only develop the systems, safeguards, and processes required to harness AI technologies that advance the US national security mission but also to “lead the world” in the responsible application of AI as it relates to national security. The memo states that AI in the national security context can “offer great benefits,” but misuse – even without malintent – threatens to “bolster authoritarianism worldwide, undermine democratic institutions and processes, facilitate human rights abuses, and weaken the rules-based international order.”
According to the memo, in order for the US to facilitate “a stable and responsible” AI governance landscape internationally, it’s important that it make use of these systems in a way that protects human rights and civil liberties as well as privacy and safety. But it must do so with “responsible speed” or risk “losing ground to strategic competitors.”
Ceding the United States’ technological edge would not only greatly harm American national security, but it would also undermine United States foreign policy objectives and erode safety, human rights, and democratic norms worldwide.
This requires the US government and its national security institutions to make meaningful changes to their strategies, capabilities, informational infrastructure, governance, and organization so that AI – as it becomes increasingly more general-purpose and affects nearly all domains with national security significance – is not relegated to a single institutional silo, the memo reads.
Section 2. Objectives
The memo sets out three objectives to guide its activities relating to AI and national security.
- Lead the world’s development of safe, secure, and trustworthy AI.
- Harness powerful AI with appropriate safeguards.
- Cultivate a stable and responsible framework to advance international AI governance.
The stated objectives largely align with President Joe Biden’s 2023 AI Executive Order. The memo reasserts that the government must provide safety and security guidance to AI developers and users and rigorously assess and help mitigate risks that AI systems might pose. The memo defines success as measured not only by US technological innovation, but also by its leadership in developing global norms rooted in international law, human rights, civil rights, and democratic values.
It also warns not to take the “unmatched vibrancy and innovativeness of the United States AI ecosystem” for granted so that it can be proactively strengthened and “remain the most attractive destination for global talent and home to the world’s most sophisticated computational facilities.”
Section 3. Promoting and Securing the United States’ Foundational AI Capabilities
3.1. Promoting Progress, Innovation, and Competition in United States AI Development.
In order to protect its competitive edge in AI development, the US must bolster key drivers of AI progress, like technical talent and computational power. “Although the United States has benefited from a head start in AI, competitors are working hard to catch up,” the memo states. Thus, advancing the lawful ability of “noncitizens highly skilled in AI and related fields to enter and work” in the US is a national security priority and will require “developing talent at home” and attracting and retaining “top international minds.”
The Department of State (DOS), the Department of Defense (DOD), and the Department of Homeland Security (DHS) are charged with attracting and bringing to the US individuals who would “improve United States competitiveness in AI and related fields, such as semiconductor design and production.”
The US must continue to develop the “world’s most sophisticated AI semiconductors” and construct “its most advanced AI-dedicated computational infrastructure” to retain its lead in AI. The DOD, the Department of Energy (DOE), and the Intelligence Community (IC) are responsible for designing and building facilities capable of harnessing frontier AI for relevant scientific research domains and intelligence analysis, where appropriate. The Office of the White House Chief of Staff and the DOE will coordinate streamlining permits, approvals, and incentives for constructing AI-enabling infrastructure. This includes surrounding infrastructure like clean energy generation, power transmission lines, and high-capacity fiber data links.
3.2. Protecting United States AI from Foreign Intelligence Threats.
Foreign states will likely aim to “obtain and repurpose the fruits of AI innovation in the United States to serve their national security goals,” the memo asserts. This may come in the form of “research collaborations, investment schemes, insider threats, and advanced cyber espionage.”
The National Security Council (NSC) staff and the Office of the Director of National Intelligence (ODNI) are charged with making recommendations to “improve identification and assessment of foreign intelligence threats” to the US AI ecosystem and related sectors, such as semiconductor design and production. ODNI, in coordination with DOD, the Department of Justice (DOJ), Commerce, DOE, and DHS, must also develop a list of the most plausible avenues in the AI supply chain that could be disrupted or compromised by foreign actors and take steps to reduce such risks.
To ensure that foreign actors don’t obtain US intellectual property through “gray-zone methods,” the Committee on Foreign Investment in the United States must consider whether a “covered transaction involves foreign actor access to proprietary information” that sheds light on how to create and effectively use powerful AI systems.
3.3. Managing Risks to AI Safety, Security, and Trustworthiness.
The US must proactively construct testing infrastructure to assess and mitigate AI risks in order to realize AI’s “positive potential” and preserve US AI leadership. To that end, the government will pursue tools for “reliably testing AI models’ applicability to harmful tasks and deeper partnerships with institutions in industry, academia, and civil society.”
Commerce, acting through the AI Safety Institute (AISI) within the National Institute of Standards and Technology (NIST), will be the primary point of contact with private-sector AI developers. AISI will “facilitate voluntary pre- and post-public deployment testing for safety, security, and trustworthiness of frontier AI models” and lead “voluntary unclassified pre-deployment safety testing,” including risk assessments related to cybersecurity, biosecurity, chemical weapons, and system autonomy.
The memo makes clear that AISI’s direct responsibilities “do not extend to the evaluation of AI systems for the potential use by the United States Government for national security purposes,” as these responsibilities lie within the agencies considering such uses.
Section 4. Responsibly Harnessing AI to Achieve National Security Objectives.
4.1. Enabling Effective and Responsible Use of AI.
To accelerate responsible AI adoption and account for technical talent needs required to adopt and integrate AI into their missions, the Department of State, DOD, DOJ, DOE, DHS, and IC elements must revise their hiring and retention policies. They should also consider programs to attract experts from industry, academia, and civil society — including scholarship for service programs — that build familiarity with the AI industry, according to the memo.
“To accelerate the use of AI in service of its national security mission, the United States Government needs coordinated and effective acquisition and procurement systems,” the memo states. DOD and ODNI, in coordination with OMB, must create a work group to address issues that may arise in the AI procurement process. It will consult with the Director of the NSA to develop recommendations.
In line with the 2022 National Security Strategy, the US must invest in and enable the “co-development and co-deployment of AI capabilities with select allies and partners” to maintain an advantage over its competitors. The DOD, in coordination with the State Department and ODNI, will evaluate the feasibility of such cooperation by creating a list of foreign states with such capabilities, developing concepts for co-development and co-deployment, and more.
The US, through the DOD and ODNI, must also improve internal coordination on its approach to AI to ensure interoperability and resource-sharing.
4.2. Strengthening AI Governance and Risk Management.
The US must develop and implement AI governance and risk management practices to “ensure that military use of AI capabilities is accountable,” even during military operations “within a responsible human chain of command,” the memo states. These frameworks must be adaptable to future opportunities and developments, consistent across agencies, transparent, and more.
The policy states that agencies must develop and use AI consistent with US law and policies, democratic values, and international law and treaty obligations. Agency heads will monitor, assess, and mitigate risks tied to their agency’s development and use of AI, including when AI decision-making outputs are used in a defense, intelligence, or law enforcement context. These include risks to physical safety, privacy harms, discrimination and bias, inappropriate use in conflict, data spillage from AI systems trained on classified or controlled information, and more.
The policy is accompanied by a “Framework to Advance AI Governance and Risk Management in National Security.” This AI framework serves as a national security-focused counterpart to the March 2024 OMB memo on “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence.” The AI Framework specifies guidance on AI activities that pose unacceptable levels of risk and should be prohibited, that are high impact and require minimum risk management, and more.
The State Department, the Department of the Treasury, DOD, DOJ, Commerce, DOE, DHS, and ODNI will issue or update guidance on AI governance and risk management for their respective national security strategies in line with the aforementioned AI Framework, the memo states.
Section 5. Fostering a Stable, Responsible, and Globally Beneficial International AI Governance Landscape.
The US must maintain an active role in “shaping the international order” – as it did with establishing nonproliferation regimes for biological, chemical, and nuclear weapons – by producing a strategy for the advancement of international AI governance norms, the memo reads. The strategy will be produced by the Department of State, in coordination with DOD, Commerce, DHS, the United States Mission to the United Nations (USUN), and the United States Agency for International Development (USAID), and will cover bilateral and multilateral engagement with allies and partners as well as outline an approach to working with international institutions like the Group of 7 (G7).
Section 6. Ensuring Effective Coordination, Execution, and Reporting of AI Policy.
The heads of the State Department, DOD, Commerce, DOE, ODNI (acting on behalf of the IC), USUN, and USAID must submit a report to the President detailing the activities they were tasked with in the memo. The first reports are due within 270 days of the memo, and annually thereafter for at least the next five years.
The Chief AI Officers of the Department of State, DOD, DOJ, DOE, DHS, OMB, ODNI, CIA, DIA, NSA, and NGA will form an AI National Security Coordination Group within 45 days of the memo. The Group will harmonize policies, develop processes and best practices, and create training programs and initiatives relating to AI and US national security strategy.
The Coordination Group will also establish a National Security AI Executive Talent Committee within 90 days of the memorandum. The committee will work to “develop an updated set of Government-wide procedures for attracting, hiring, developing, and retaining AI and AI-enabling talent for national security purposes.”