Artificial Intelligence and Autonomous Systems Legal Update (2Q19)

The second quarter of 2019 saw a surge in debate about the role of governance in the AI ecosystem and the gap between technological change and regulatory response. This trend manifested itself most visibly in calls for regulation of certain "controversial" AI technologies or use cases, which in turn have emboldened lawmakers to take fledgling steps to control the scope of AI and automated systems in the public and private sectors. While it remains too soon to herald the arrival of a comprehensive federal regulatory strategy in the U.S., a number of recent high-profile draft bills address the role of AI and how it should be governed at the federal level, while state and local governments are already pressing forward with concrete legislative proposals regulating the use of AI.

As we have previously observed, over the past year lawmakers and government agencies have sought to develop AI strategies and policies that balance the tension between protecting the public from the potentially harmful effects of AI technologies and encouraging innovation and competitiveness.1 Now, for the first time, we are seeing federal, state and local government agencies show a willingness to take concrete positions on that spectrum, resulting in a variety of policy approaches to AI regulation, many of which eschew informal guidance and voluntary standards in favor of outright technology bans. We should expect that high-profile or contentious AI use cases or failures will continue to generate similar public support for, and ultimately trigger, accelerated federal and state action.2 For the most part, the trend among U.S. regulators toward more individualized and nuanced assessments of how best to regulate AI systems according to their end uses has been welcome. Even so, there is an inherent risk that reactionary legislative responses will result in a disharmonious, fragmented national regulatory framework. In any event, from a regulatory perspective, these developments will undoubtedly yield important insights over the coming months into what it means to govern and regulate AI, and whether "some regulation" is better than "no regulation."

Table of Contents

  1. Key U.S. Legislative and Regulatory Developments

  2. Bias and Technology Bans

  3. Healthcare

  4. Autonomous Vehicles

  1. Key U.S. Legislative and Regulatory Developments

    As we reported in our Artificial Intelligence and Autonomous Systems Legal Update (1Q19), the House introduced Resolution 153 in February 2019, with the intent of "[s]upporting the development of guidelines for ethical development of artificial intelligence" and emphasizing the "far-reaching societal impacts of AI" as well as the need for AI's "safe, responsible, and democratic development."3 Similar to California's adoption last year of the Asilomar Principles4 and the OECD's recent adoption of five "democratic" AI principles,5 the House Resolution provides that the guidelines must be consonant with certain specified goals, including "transparency and explainability," "information privacy and the protection of one's personal data," "accountability and oversight for all automated decisionmaking," and "access and fairness."

    Moreover, on April 10, 2019, U.S. Senators Cory Booker (D-NJ) and Ron Wyden (D-OR) introduced the "Algorithmic Accountability Act," which "requires companies to study and fix flawed computer algorithms that result in inaccurate, unfair, biased or discriminatory decisions impacting Americans."6 Rep. Yvette D. Clarke (D-NY) introduced a companion bill in the House.7 The bill stands to be Congress's first serious foray into the regulation of AI and the first legislative attempt in the United States to regulate AI systems in general, as opposed to a specific activity, such as the use of autonomous vehicles. While observers have noted congressional reluctance to regulate AI in past years, the bill hints at a dramatic shift in Washington's stance amid growing public awareness of AI's potential to create bias or harm certain groups. Although the bill still faces an uncertain future, if enacted it would present businesses with a number of challenges, not least significant uncertainty in defining, and ultimately complying with, the proposed requirements for implementing "high risk" AI systems and utilizing consumer data, as well as the challenge of sufficiently explaining to the FTC how their AI systems operate. Moreover, the bill expressly states that it does not preempt state law, and states that have already been developing their own consumer privacy protection laws would likely object to any attempts at...
