AI rollout should prioritize safety, not speed

The rapid development and implementation of Artificial Intelligence (AI) in everyday services and products has become an increasingly important legal issue as the technology develops faster than the legal frameworks that regulate it. In response to these concerns, on Oct. 30, 2023, President Biden issued an Executive Order aimed at preventing and addressing harm caused by AI. The order lays out eight guiding principles, including the following: 

  1. AI must be safe and secure.
  2. Responsible development and use of AI must include a commitment to support American workers.
  3. AI policies must promote and support equity and civil rights.
  4. Consumers, who increasingly use, interact with, or purchase AI or AI-enabled products, must be protected under the law.
  5. AI must respect Americans’ privacy and civil liberties.

The order instructs a variety of executive agencies to research the effects of AI, publish guidelines to ensure AI developers adhere to the guiding principles, and promulgate rules to adapt to the AI revolution. For example, the order directs the National Institute of Standards and Technology (NIST) to “establish guidelines and best practices” that “promote consensus industry standards” for “developing and deploying safe, secure, and trustworthy AI systems.” Similarly, the order instructs the Secretary of Commerce to assess “the existing standards, tools, methods, and practices” for authenticating content and labeling “synthetic content,” i.e., content produced or altered using AI or algorithms. 

While signing the order, President Biden remarked, “[t]o realize the promise of AI and avoid the risks, we need to govern this technology.” He also stated that “[the Federal Government] can’t move at a normal government pace” and emphasized that it must “move as fast, if not faster, than the technology itself” given the rapidly evolving AI landscape. These sentiments were echoed by Vice President Harris, who stated that “[America has] a moral, ethical and societal duty to make sure that AI is adopted and advanced in a way that protects the public from potential harm.” However, President Biden also recognized the limitations of his order, noting that while it “represents bold action,” it must be supported by legislation from Congress. 

President Biden’s order is a welcome and necessary step toward protecting Americans from potentially wide-reaching harms. As co-lead of the In Re Social Media Adolescent Addiction MDL, my colleague Previn Warren recently argued against the dismissal of Plaintiffs’ claims against the world’s largest social media companies for harms caused by their products’ features, including user-conditioning algorithms. My colleagues and I are also concerned about the potential impact of “Deepfake AI,” which is used to create fake images or videos of unsuspecting victims without their consent. Often, these images and videos are used to harass and extort victims, spread disinformation, and commit fraud. In a rapidly evolving technological landscape, pausing to consider the implications is without question worth the effort. We look forward to continued exploration in the field of AI, but believe it should not proceed before the long-term risks are understood.