Senator Cynthia Lummis Introduces RISE Act to Boost AI Transparency and Professional Accountability
Senator Cynthia Lummis (R-WY) has unveiled the Responsible Innovation and Safe Expertise (RISE) Act of 2025, a legislative proposal aimed at clarifying legal responsibilities for professionals who use artificial intelligence (AI) tools in their work.
The bill seeks to enhance transparency from AI developers—though it stops short of mandating open-source access to models. Instead, it focuses on requiring developers to publicly release model cards to qualify for limited liability protections.
In a press release, Lummis emphasized that professionals such as doctors, lawyers, engineers, and financial advisors would remain legally responsible for the advice they provide, even when informed by AI systems. The legislation ensures that accountability ultimately rests with the human professional.
To receive limited immunity from civil liability, AI developers must publish model cards—technical documentation detailing an AI system’s training data sources, intended use cases, performance benchmarks, known limitations, and potential failure scenarios. This information would help professionals determine whether the system is suitable for their specific use.
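The bill does not prescribe a file format, but the fields it describes map naturally onto the model-card documentation already common in the AI industry. The sketch below is purely illustrative: the field names and values are assumptions, not language from the bill.

```python
# Hypothetical model card covering the categories the RISE Act describes:
# training data sources, intended uses, benchmarks, limitations, failure modes.
# All names and values here are illustrative assumptions, not statutory text.
model_card = {
    "name": "ExampleMed-1",
    "training_data_sources": ["de-identified clinical notes", "medical literature"],
    "intended_use_cases": ["drafting differential diagnoses for clinician review"],
    "performance_benchmarks": {"MedQA accuracy": 0.81},
    "known_limitations": ["not validated for pediatric cases"],
    "failure_scenarios": ["may produce incorrect drug dosages"],
    "last_updated": "2025-06-01",
}

def suitable_for(card: dict, use_case: str) -> bool:
    """The kind of first check a professional might run: is the stated
    use within the developer's declared intended scope?"""
    return any(use_case in intended for intended in card["intended_use_cases"])

print(suitable_for(model_card, "differential diagnoses"))  # True
print(suitable_for(model_card, "tax planning"))            # False
```

A simple scope check like this captures the bill's logic: liability protection attaches only when the system is used as documented, so the documentation itself becomes the professional's due-diligence artifact.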
“Wyoming values both innovation and accountability. The RISE Act sets clear, predictable standards that promote safer AI development while respecting the judgment of professionals,” said Lummis.
The bill does not offer blanket legal immunity for AI developers. Immunity would not apply in cases of recklessness, fraud, willful misconduct, knowing misrepresentation, or when AI is used outside its intended professional context.
The RISE Act also imposes ongoing transparency obligations. Developers must update model documentation within 30 days of releasing new AI versions or discovering critical failure risks, ensuring professionals have access to up-to-date, reliable information.
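The 30-day window described above runs from either trigger event, a new version release or the discovery of a critical failure risk. A minimal sketch of that deadline arithmetic, with the function name as an assumption for illustration:

```python
from datetime import date, timedelta

def update_deadline(trigger_date: date) -> date:
    """Latest date a developer could update the model card after a trigger
    event (a new release or discovery of a critical failure risk), per the
    30-day obligation described above. Illustrative, not statutory text."""
    return trigger_date + timedelta(days=30)

print(update_deadline(date(2025, 6, 1)))  # 2025-07-01
```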
Overall, the RISE Act aims to strike a balance between innovation and responsibility, encouraging safer AI integration without undermining professional standards.
Falls short of requiring open-source access
The RISE Act, in its current form, does not mandate full open-source disclosure of AI models.
While developers may withhold proprietary information, any redacted details must be unrelated to safety, and each redaction must be accompanied by a written explanation justifying the omission under trade secret protections.
In a previous interview with CoinDesk, Simon Kim, CEO of Hashed—one of South Korea’s leading venture capital firms—warned about the risks of opaque, centralized AI systems.
“OpenAI is not open, and it’s controlled by a very small group of people, which is quite dangerous. Building these kinds of [closed-source] foundational models is like creating a ‘god’ we don’t understand,” Kim said.

