Resetting AI Regulation: Key Takeaways from EO 14110’s Repeal
Which Biden-era AI actions should Trump focus on?
In a significant shift, the Trump administration has repealed the Biden administration's AI executive order ("EO 14110") and replaced it with a new executive order, "Removing Barriers to American Leadership in Artificial Intelligence." This decisive move underscores President Trump's commitment to cutting red tape and fostering innovation, signaling a broader effort to solidify America's leadership in emerging technologies.
Although the Biden administration’s AI policy has been officially revoked, its influence persists in ongoing initiatives and agency activities. To ensure future agency efforts align with the Trump administration’s new vision, Section 5 of Trump’s executive order calls for a comprehensive review of all policies and actions stemming from the now-canceled directive.
To aid this review, the following overview highlights key AI-related actions taken under the Biden administration. Throughout 2024, our team monitored federal agency proceedings on AI, identifying those with the greatest potential impact on innovation. Organized in reverse chronological order, from December 2024 back to January 2024, the overview serves as a punch list of key initiatives to review.
Please note that the proceeding descriptions are generally drawn from the agencies' own descriptions and do not necessarily reflect the Abundance Institute's analysis of the likely purpose or impact of the proposals. For proceedings where the Abundance Institute commented, we have included links.
The Top Biden AI Regulatory Proceedings
Executive Branch Agency Handling of Commercially Available Information Containing Personally Identifiable Information
Agency: Executive Office of the President
Office: Office of Management and Budget
Comment Due Date: 12/16/2024
Target outcome: General consideration of public comments to inform updated guidance
Status: Guidance not yet updated
OMB requested public input on how federal agencies collect, use, share, and dispose of commercially available information (CAI) that contains personally identifiable information (PII), particularly where AI may heighten privacy risks. This RFI is part of OMB's implementation of Executive Order 14110, "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," and aims to inform potential updates to existing OMB guidance on agency handling of such data. Specifically, the RFI solicits feedback on topics like transparency, accountability, data quality, and contract requirements with third-party data providers, all intended to ensure responsible use of CAI containing PII.
NTIA on Bolstering Data Center Growth, Resilience, and Security
Agency: Department of Commerce
Office: National Telecommunications and Information Administration (NTIA)
Comment Due Date: 11/4/2024
Target outcome: General consideration of public comments to inform a report
Status: Report not yet issued
NTIA requested public comments on the challenges and opportunities related to data center growth, resilience, and security in the United States, driven by increasing demands from critical technologies like artificial intelligence. This notice invited stakeholders to provide input on supply chain resilience, market growth, and the data security considerations that may impact future policies and regulatory approaches. NTIA will use the gathered insights to inform a report that outlines policy recommendations for fostering safe and sustainable data center expansion across the country.
Commerce Proposes Reporting Requirements for Frontier AI Developers and Compute Providers
Agency: Department of Commerce
Office: Bureau of Industry and Security
Comment Due Date: 10/11/2024
Target outcome: Proposed rule
Status: Rule not yet finalized
The Bureau of Industry and Security (BIS) under the Department of Commerce proposed an amendment to its Industrial Base Surveys—Data Collections regulations. The new rule introduces specific reporting requirements for the development of advanced artificial intelligence (AI) models, particularly dual-use foundation models, as well as computing clusters. These measures respond to Executive Order (E.O.) 14110, "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," signed on October 30, 2023. The proposed rule would require U.S. companies that develop AI models meeting certain computational thresholds, or that possess significant computing clusters, to report their activities to the federal government on an ongoing basis. The rule aims to enhance federal oversight of AI technology, particularly in areas relevant to national security.
Implications of Artificial Intelligence Technologies on Protecting Consumers from Unwanted Robocalls and Robotexts
Agency: Federal Communications Commission
Comment Due Date: 10/10/2024
Target outcome: Proposed rule
Status: Rule not yet finalized
The proposed rule by the Federal Communications Commission (FCC) aims to address the impact of artificial intelligence (AI) technologies on robocalls and robotexts. It proposes requirements for disclosing AI-generated calls, protections for consumers' consent to such calls, and exemptions for AI technologies benefiting individuals with disabilities in using telecommunications. Additionally, the FCC seeks input on emerging technologies for detecting, alerting, and blocking fraudulent or AI-generated calls while addressing related privacy implications.
Disclosure of Use of AI in Political Advertisements
Agency: Federal Communications Commission (FCC)
Comment Due Date: 9/19/2024
Target outcome: Proposed rule
Status: Rule not yet finalized
The Federal Communications Commission (FCC) proposed requiring broadcasters, cable operators, and satellite providers to disclose when political advertisements contain artificial intelligence-generated content. This rule aims to enhance transparency and accountability by mandating both an on-air announcement and a notice in online political files to inform viewers and listeners. It does not restrict the use of AI in political ads, but rather ensures the public is aware of its presence.
Managing Misuse Risk for Dual-Use Foundation Models
Agency: Department of Commerce
Office: National Institute of Standards and Technology & U.S. AI Safety Institute
Comment Due Date: 9/9/2024
Target outcome: Guidance document
Status: Updated guidance issued
The document, titled "Managing Misuse Risk for Dual-Use Foundation Models," is an initial public draft published by the U.S. AI Safety Institute in July 2024. It provides guidelines for improving the safety, security, and trustworthiness of dual-use foundation models, which are AI models that can be misused to cause harm. The document focuses on identifying, measuring, and managing the risks associated with these models, addressing both technical and social aspects. It outlines seven key objectives and associated practices for organizations to mitigate misuse risks across the AI lifecycle.
Regulations on U.S. Investments in Sensitive Technologies in Countries of Concern
Agency: Department of Treasury
Office: Office of Investment Security
Comment Due Date: 8/4/2024
Target outcome: Final rule
Status: Final rule issued
The Department of the Treasury's Office of Investment Security proposed regulations to implement Executive Order 14105, addressing U.S. investments in sensitive national security technologies and products in countries of concern. The rule mandates U.S. persons to notify the Treasury Department about certain transactions and prohibits other high-risk transactions. It aims to prevent investments that could exacerbate threats to U.S. national security by enhancing the military, intelligence, and cyber capabilities of adversarial nations.
Opportunities and Challenges of Artificial Intelligence (AI) in Transportation
Agency: Department of Transportation
Office: Advanced Research Projects Agency–Infrastructure (ARPA-I)
Comment Due Date: 8/1/2024
Target outcome: General consideration of public comments to inform ARPA-I investments
Status: Investments have not been made public
The Department of Transportation’s Advanced Research Projects Agency—Infrastructure (ARPA-I) requested public input on current and future uses of artificial intelligence (AI) to improve transportation, as well as the risks and barriers associated with AI adoption. This RFI focuses on safe and equitable implementation, including potential opportunities for autonomous mobility ecosystems, and solicits insights that will inform future AI research, development, and policy efforts across all modes of transportation.
Impact of AI on Prior Art and Patentability Determinations
Agency: Department of Commerce
Office: Patent and Trademark Office
Comment Due Date: 7/29/2024
Target outcome: General consideration of public comments to inform guidance
Status: Notes on a July 25, 2024 listening session have been issued
The United States Patent and Trademark Office (USPTO) invited public comments on how the proliferation of artificial intelligence (AI) affects the evaluation of prior art, the knowledge standard for a person having ordinary skill in the art (PHOSITA), and patentability determinations. This initiative seeks input on potential challenges and opportunities AI presents to intellectual property policy, contributing to the development of guidance for patent examinations and informing USPTO's advisory roles.
Defense Industrial Base Adoption of Artificial Intelligence for Defense Applications
Agency: Department of Defense
Office: Office of the Secretary of Defense
Comment Due Date: 7/22/2024
Target outcome: General consideration of public comments to inform PA&T's Trusted AI Defense Industrial Base Roadmap
Status: National defense industrial strategy implementation plan has been published
The Department of Defense Office of Industrial Base Resilience requested public comments on measures to enhance the adoption of artificial intelligence (AI) within the Defense Industrial Base (DIB). The feedback will inform the development of policies and initiatives to support AI integration in defense systems, contributing to the Trusted AI Defense Industrial Base Roadmap.
DOJ and Stanford Workshop on Promoting Competition in Artificial Intelligence
Agency: Department of Justice
Comment Due Date: 7/15/2024
Target outcome: Inform agency policy and case selection
Status: Workshop summary report has been published
The Antitrust Division, the Stanford Graduate School of Business, and the Stanford Institute for Economic Policy Research co-hosted a free, full-day workshop on competition and AI industry structure, including competition in AI models, semiconductors, the cloud, and AI applications. In a series of panels, presentations, and remarks, government and industry representatives, academics from both law and business, content creators, inventors, and other tech industry stakeholders explored how competition at one level of the AI stack affects other AI technologies, how standards and accountability systems can be designed to promote competition, and the challenges AI poses to content creators. The workshop also examined how competition affects investors' funding decisions and the practical considerations investors face when evaluating whether to invest in startups.
AI Risk Management Framework: Generative AI Profile
Agency: Department of Commerce
Office: National Institute of Standards and Technology (NIST)
Comment Due Date: 6/2/2024
Target outcome: Consideration of comments on guidance documents
Status: Updated guidance issued
The NIST Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (NIST AI 600-1) provides a structured, cross-sectoral framework to guide organizations in managing risks associated with Generative AI (GAI). It highlights unique and exacerbated risks posed by GAI across the AI lifecycle, offering strategies to govern, map, measure, and manage these risks while aligning with the trustworthy AI principles. The document is informed by stakeholder input and focuses on governance, content provenance, pre-deployment testing, and incident disclosure, aiming to ensure safe, transparent, and responsible deployment of GAI technologies.
Reducing Risks Posed by Synthetic Content
Agency: Department of Commerce
Office: National Institute of Standards and Technology (NIST)
Comment Due Date: 6/2/2024
Target outcome: Consideration of comments on guidance documents
Status: Updated guidance issued
The NIST AI 100-4 guidelines, Reducing Risks Posed by Synthetic Content, provide a comprehensive overview of technical methods to enhance digital content transparency and mitigate harms caused by synthetic content, such as misinformation, child sexual abuse material (CSAM), and non-consensual intimate imagery (NCII). The report discusses approaches like watermarking, metadata recording, and synthetic content detection, alongside testing and evaluation practices to ensure the provenance and authenticity of digital content. It emphasizes a risk-based, human-centered approach and calls for further research and standards development to address emerging challenges in digital content integrity and safety.
Secure Software Development Practices for Generative AI and Dual-Use Foundation Models: An SSDF Community Profile
Agency: Department of Commerce
Office: National Institute of Standards and Technology (NIST)
Comment Due Date: 6/2/2024
Target outcome: Consideration of comments on guidance documents
Status: Updated guidance issued
The NIST SP 800-218A: Secure Software Development Practices for Generative AI and Dual-Use Foundation Models provides an enhanced framework for secure software development tailored to generative AI and dual-use foundation models. It builds on the Secure Software Development Framework (SSDF) Version 1.1, offering specific recommendations for addressing AI-related risks, including data integrity, model security, and vulnerability management across the AI lifecycle. This community profile is intended for AI model developers, system producers, and acquirers, emphasizing risk-based, secure-by-design practices to ensure trustworthy and resilient AI systems.
National Institute of Justice AI in Criminal Justice Report
Agency: Department of Justice
Office: National Institute of Justice
Comment Due Date: 5/28/2024
Target outcome: General consideration of public comments to inform a report
Status: Final report issued
The National Institute of Justice requested public input to inform a report on the safe, secure, and trustworthy development and use of artificial intelligence (AI) in the criminal justice system, as outlined in Section 7.1(b) of Executive Order 14110. This input will help guide the NIJ's evaluation of AI applications within the justice system to ensure responsible and effective implementation.
Trade Regulation Rule on Impersonation of Government and Businesses
Agency: Federal Trade Commission
Office: N/A
Comment Due Date: 4/30/2024
Target outcome: Amend rule on impersonation of government and businesses
Status: Rule has been finalized
The Federal Trade Commission (FTC) proposed a new Trade Regulation Rule to address the impersonation of government and businesses. The rule aims to prohibit deceptive practices in which individuals or entities impersonate government officials, businesses, or organizations to mislead or defraud consumers. This initiative seeks to enhance consumer protection by targeting scams and fraudulent activities, ensuring trust and accountability in government and business interactions. Public comments were invited to shape the final provisions of the rule.
Dual Use Foundation Artificial Intelligence Models with Widely Available Model Weights
Agency: Department of Commerce
Office: National Telecommunications and Information Administration (NTIA)
Comment Due Date: 3/27/2024
Target outcome: General consideration of public comments to inform a report
Status: Report has been issued
The National Telecommunications and Information Administration (NTIA) released a report on open models in artificial intelligence (AI), focusing on their potential benefits, challenges, and policy considerations. The report explores how open AI models can foster innovation, enhance transparency, and expand access to AI technology while addressing associated risks like misuse, bias, and security vulnerabilities. It provides recommendations for balancing openness with responsible AI governance to maximize societal benefits and mitigate harms.
The Use of Artificial Intelligence in Counterterrorism
Agency: Privacy and Civil Liberties Oversight Board
Office: N/A
Comment Due Date: 1/7/2024
Target outcome: General consideration of public comments to inform a public forum
Status: Forum transcript has been made public
The Privacy and Civil Liberties Oversight Board (PCLOB) announced a public forum to examine the role of artificial intelligence (AI) in counterterrorism and national security. The forum explored how AI technologies are utilized, their implications for privacy and civil liberties, and their challenges. The event invited input from experts, stakeholders, and the public to guide the board's oversight and policy recommendations on AI's application in these critical areas.