
Recent actions and rhetoric during the first year of US President Donald Trump’s second term have fueled European perceptions of a full-scale American retreat from artificial intelligence (AI) safety. Vice President JD Vance told world leaders at the Paris AI Action Summit in February 2025 that “the AI future is not going to be won by hand-wringing about safety. It will be won by building.” A month later, the National Institute of Standards and Technology (NIST) instructed partner scientists to remove mention of “AI safety”, “responsible AI”, and “AI fairness” from the skills expected of members and to prioritize “reducing ideological bias, to enable human flourishing and economic competitiveness”. And in June 2025, the Department of Commerce renamed its AI Safety Institute the Center for AI Standards and Innovation (CAISI), and announced a shift in its mission from “help[ing] define and advance the science of AI safety” to policies that “evaluate and enhance U.S. innovation” in AI.
Beyond the headlines, however, AI safety is a rare policy issue on which there has been regulatory momentum and even bipartisan agreement in Washington. The Trump administration has maintained some Biden-era policies on high-risk AI use cases, and US states have become increasingly involved in regulating specific use cases and transparency measures. This opens opportunities for shared transatlantic learning and approaches to AI safety, particularly when these are framed in terms of individual policy questions such as children’s safety or catastrophic risk.
AI safety refers to the technical solutions, policies, and guidelines that minimize the risks AI systems pose to humanity and that secure AI’s benefits. AI safety measures seek to mitigate a wide range of harms. Current harms include privacy and cybersecurity violations, such as the unauthorized collection of personal data, and training-data bias, which has produced hiring systems that penalize résumés containing the word “women”. Long-term harms could include existential risks to humanity, such as AI-engineered bioweapons and pathogens. In addition to this broad scope, AI safety spans a range of tools, including robustness testing and validation against adversarial attacks, bias detection and mitigation, and governance frameworks for risk management.
The concept of AI safety has existed for decades, but it gained mainstream political attention in 2023 amid rising concerns about generative AI and artificial general intelligence. These concerns culminated in the first AI Safety Summit at Bletchley Park in November of that year. Since 2025, the term “AI safety” has become increasingly politically contentious, as have AI safety topics related to diversity, equity, and inclusion (DEI), such as discriminatory model behavior related to gender or race. Other related topics, such as transparency around adverse AI incidents and the prohibition of deepfake intimate images, still receive broad bipartisan US and multilateral support.
Several US federal policies from the past year incorporate AI safety measures, especially risk-management practices and guardrails to prevent detrimental AI use. These include: