The director of the White House Office of Science and Technology Policy testified before Congress to outline the administration's approach to artificial intelligence governance, research investment, national security considerations and coordination with industry and international partners.
Overview
The director of the White House Office of Science and Technology Policy (OSTP) testified this week at a congressional hearing on the administration's strategy for artificial intelligence, laying out the federal government's approach to balancing accelerated research and deployment with safety, security and equity concerns. The testimony, carried live by C-SPAN, served as a public accounting of the Biden administration's AI priorities, progress to date and the remaining policy decisions Congress and agencies face.
Key themes from the testimony
- Coordination across the federal government: The OSTP director emphasized interagency efforts to produce consistent guidance and to avoid fragmentation in standards and approaches across departments and agencies.
- Investment in research and infrastructure: The testimony highlighted federal investments in computing infrastructure, foundational research and workforce development intended to sustain U.S. competitiveness in AI.
- Risk management and standards: Officials underscored the role of voluntary frameworks and technical standards to mitigate harms such as bias, misuse, and system failures.
- National security and export controls: Lawmakers pressed witnesses on the balance between enabling beneficial innovation and preventing adversaries from acquiring capabilities that could threaten U.S. security.
- International cooperation: The director framed U.S. policy as part of broader international engagement to create interoperable norms and to align approaches to safety and governance.
Background and context
Artificial intelligence is now a central topic in U.S. domestic policy, foreign policy and national security planning. Over the past three years, the Biden administration has accelerated the federal government's engagement with AI governance through a mix of executive actions, interagency initiatives, research funding and collaboration with standard-setting bodies.
OSTP, which advises the president on science and technology matters, has been a coordinating hub. Its priorities for AI include fostering safe and trustworthy systems, enabling responsible innovation, protecting national security, and ensuring economic benefits are broadly shared. OSTP has also championed public-access initiatives and proposed shared resources intended to broaden research participation.
Selected policy instruments and initiatives
- National AI Research Resource (NAIRR): OSTP and partner agencies have advocated for a resource to democratize access to compute, data and tools for AI research across academia, nonprofits and industry. OSTP materials describe the NAIRR as a way to give wider access to resources currently concentrated in a handful of organizations. See OSTP's background on NAIRR: https://www.whitehouse.gov/ostp/.
- NIST AI Risk Management Framework (AI RMF): The National Institute of Standards and Technology released the AI RMF as a voluntary set of practices to improve the ability to incorporate trustworthiness considerations into AI products and systems. NIST describes the framework as a tool to manage and govern AI-related risks: https://www.nist.gov/ai-risk-management.
- Export controls and trade measures: The U.S. Commerce Department and other agencies have implemented export controls targeting high-end chips, AI-specialized hardware and certain software exports to restrict access by adversary states. See Commerce Department/BIS guidance on export controls: https://www.bis.doc.gov/.
- Research and domestic manufacturing support: Legislative actions such as the CHIPS and Science Act included provisions intended to strengthen domestic semiconductor production and research capacity, with implications for AI hardware supply chains. Details of the CHIPS and Science Act and related programs are available at the Commerce Department: https://www.commerce.gov/.
Details from the hearing
During the hearing, lawmakers posed detailed questions about how the federal government intends to keep pace with rapid private-sector advances while minimizing potential harms. The OSTP director described ongoing efforts to coordinate guidance and standards and to channel funding toward resilient and trustworthy AI research.
Lawmakers from both parties raised concerns about the pace of development and the adequacy of existing statutory authorities to regulate powerful AI systems. Several members pressed the OSTP director for specifics on how the administration would use existing authorities versus seeking new legislation to address systemic risks, liability, transparency and consumer protection.
Committee members also focused on national security implications, asking about the sufficiency of export controls and whether the administration was monitoring the diffusion of capabilities that could be repurposed for military or malign uses. The hearing included discussion of both near-term tactical misuse (fraud, deepfakes, disinformation) and longer-term strategic concerns (autonomous weapons, capabilities for large-scale disruption).
OSTP priorities highlighted
- Standards and technical guardrails: The OSTP representative reiterated federal endorsement of voluntary standards and the need for industry participation in technical norms-setting.
- Transparency and model provenance: The testimony stressed work on provenance, documentation and transparency measures so developers and deployers can better understand system capabilities and limitations.
- Equity and worker transition: OSTP emphasized programs to study and support communities and workers affected by technological disruption.
- Research ecosystem resilience: The director cited federal efforts to diversify access to compute, datasets and talent, both to prevent the concentration of capabilities in a few organizations and to enable broader scientific participation.
Expert reaction and commentary
Outside experts and advocacy organizations responded to the testimony with cautious praise for increased coordination, tempered by concern that clearer statutory authorities are still needed.
Laurie E. Locascio, Director of the National Institute of Standards and Technology, has described the AI Risk Management Framework as a tool intended "to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services and systems." The framework is available on the NIST website: https://www.nist.gov/ai-risk-management.
Emily Weinstein, a senior fellow at a technology policy think tank, described the hearing's focus as appropriate but said Congress would need to match oversight with legislative clarity. "Coordination and voluntary standards are necessary, but without clearer legislative guardrails, there will be gaps in accountability when harms occur," she told this reporter. Her organization's analysis of AI governance is available at https://www.brookings.edu/.
Security experts noted that export controls and investment screening are only one part of a broader strategy to manage risks. A recent Council on Foreign Relations report lays out how export controls, alliance coordination and targeted R&D funding can be combined to protect critical capabilities while supporting innovation: https://www.cfr.org/.
Relevant data and trends
Quantifying the pace of AI development and its economic footprint is challenging because AI is embedded across multiple industries and business models. Several observable trends are shaping policymaker deliberations:
- Concentration of compute and models: State-of-the-art large language models and other advanced AI systems often require substantial compute resources and specialized hardware. This concentration raises questions about access and systemic risk.
- Private-sector investment: Private investment in AI — from startups, incumbent firms, and corporate R&D — remains a primary engine of progress. Policymakers are weighing incentives to ensure research continues while addressing externalities.
- Workforce and labor impacts: Studies vary on the scale and timing of job displacement and transformation. Policymakers are planning education and transition programs to help workers adapt to changed labor demands.
- Proliferation of dual-use capabilities: AI tools and open-source models lower the barrier to access for benign and malicious users alike, complicating traditional regulatory approaches centered on hardware or export restrictions.
Independent analyses, such as those compiled in the annual AI Index, provide data on investment, publication rates, talent flows and compute growth; these data sources inform congressional and executive-branch deliberations. The AI Index provides an overview of many of these trends: https://aiindex.stanford.edu/.
Legislative and regulatory considerations
Congressional attention to AI has been bipartisan but fragmented. Committees with jurisdiction over homeland security, commerce, judiciary, armed services and science have each expressed interest. Key policy areas lawmakers are considering include:
- Liability and consumer protections: Determining who bears responsibility when AI systems cause harm or produce defective outcomes.
- Transparency and auditability: Requiring documentation, model cards or other forms of disclosure so regulators and affected parties can assess system behavior.
- Safety standards for high-risk applications: Creating sector-specific guardrails (healthcare, transportation, critical infrastructure) where system failures can cause severe harm.
- National security and export controls: Updating authorities to address rapid changes in AI capabilities, supply chains and software distribution.
- Workforce and education: Funding retraining and curricula to help workers adapt and to cultivate a diverse talent pipeline.
Stakeholders disagree on the pace and scope of regulatory mandates versus voluntary or standards-based approaches. Industry groups generally favor flexible, risk-based regulation that relies on standards, testing and certification, while some civil-society organizations argue for stronger, enforceable protections in areas such as discrimination and surveillance.
International dimension
AI governance is also a global issue. Governments, standard-setting bodies and multilateral institutions are engaged in efforts to harmonize approaches. The United States has sought to work with allies to align export controls and technical standards, while also building coalitions to address misuse and establish norms for responsible development.
International forums such as the OECD, G7 and the United Nations have discussed AI policy topics ranging from human rights to safety. Multilateral coordination is viewed by many analysts as necessary to prevent an international regulatory patchwork that undermines both safety and trade.
What remains unresolved
Several contested questions remain at the center of U.S. debate on AI strategy:
- When and how to impose mandatory safety testing for foundation models and other high-impact systems.
- Which federal agency or combination of agencies should hold primary regulatory authority for different classes of AI applications.
- How to balance transparency with intellectual property and national security considerations.
- How to measure and demonstrate compliance in a sector driven by rapid iteration and continuous deployment.
During the hearing, lawmakers repeatedly asked the OSTP director to identify which of these open questions would be escalated to legislation versus addressed through executive action or agency rulemaking. Responses emphasized the need for both congressional engagement and agile regulatory responses.
What to watch next
Following the testimony, several near-term developments will be important to monitor:
- Committee follow-ups and potential requests for additional documents or briefing materials from OSTP and other agencies.
- Legislative proposals that seek to codify standards, assign authority, or fund new research and workforce initiatives.
- Agency rulemakings and guidance from bodies such as the Commerce Department, NIST, the FTC and HHS that could create sector-specific requirements for AI systems.
- International coordination outcomes, including joint statements from allies or new multilateral mechanisms addressing AI safety and trade.
Conclusion
The OSTP director's congressional testimony framed the administration's AI strategy as a multi-pronged effort combining research investment, interagency coordination, standards development and targeted regulatory actions. While the administration outlined priorities and progress, the hearing highlighted enduring uncertainties about the appropriate balance between voluntary standards and statutory regulation, the distribution of regulatory authority, and the measures necessary to secure national security while sustaining innovation. As AI capabilities continue to evolve, Congress, federal agencies and international partners are likely to remain engaged in shaping governance measures that address both the promise and the risks of advanced AI.