AI companies, showcase your safety tests to the US government
The Biden administration has announced that developers of major artificial intelligence (AI) systems in the United States will be required to report their safety test results to the government. The requirement stems from President Joe Biden’s executive order on managing the rapidly evolving technology. The White House AI Council will meet to review progress on the order, focusing on the mandate that AI companies share vital information, including safety test results, with the Commerce Department. The National Institute of Standards and Technology will develop a uniform framework for assessing safety.

AI has become a crucial consideration for the US federal government because of its economic and national security implications. The government is also exploring congressional legislation and collaborating with other countries and the European Union to establish rules for managing AI. Several federal agencies have conducted risk assessments of AI’s use in critical national infrastructure, and the government plans to hire more AI experts and data scientists across federal agencies, with the aim of ensuring that regulators are prepared to manage AI technology effectively.

These developments reflect the growing recognition of the importance of AI safety and the need for regulations to keep up with technological advancements. By requiring AI companies to disclose their safety test results, the government aims to ensure that AI systems are safe before they are released to the public. Establishing a common standard for assessing safety will also help in evaluating the reliability and ethical implications of AI technology.