By Bruce Sussman
Tue | Apr 16, 2019 | 7:39 AM PDT

Should the Federal Trade Commission have the power to say your organization is using "high-risk AI" as part of how you do business?

And should the FTC then have the power to regulate that business use of Artificial Intelligence?

A U.S. Senate bill introduced by Senators Ron Wyden and Cory Booker is attempting to do both of those things. And it contains a very broad definition of what constitutes high-risk automated decision making.

What is high-risk Artificial Intelligence?

The proposed bill, called the Algorithmic Accountability Act of 2019, defines high-risk automated decision systems as having one or more of the following traits:

  • poses a significant risk to the privacy or security of personal information of consumers;
  • involves the personal information of a significant number of consumers regarding race, color, national origin, political opinions, religion, trade union membership, genetic data, biometric data, health, gender, gender identity, sexuality, sexual orientation, criminal convictions, or arrests;
  • systematically monitors a large, publicly accessible physical place;
  • results in or contributes to inaccurate, unfair, biased, or discriminatory decisions impacting consumers;
  • makes decisions, or facilitates human decision making, based on systematic and extensive evaluations of consumers, including attempts to analyze or predict sensitive aspects of their lives, such as their work performance, economic situation, health, personal preferences, interests, behavior, location, or movements that— (i) alter legal rights of consumers; or (ii) otherwise significantly impact consumers.
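
To make the structure of that definition concrete: the test is disjunctive, so any single trait is enough. Here is a minimal sketch in Python of how a compliance team might model it; the class and field names are hypothetical, invented for illustration, and are not taken from the bill's text.

  from dataclasses import dataclass

  @dataclass
  class SystemTraits:
      # Hypothetical checklist mirroring the bill's five high-risk traits.
      privacy_or_security_risk: bool = False       # significant risk to consumers' personal info
      sensitive_data_at_scale: bool = False        # race, health, biometrics, etc.
      monitors_public_place: bool = False          # systematic monitoring of a public space
      biased_or_unfair_decisions: bool = False     # inaccurate or discriminatory outcomes
      extensive_consumer_evaluation: bool = False  # profiling that significantly impacts consumers

  def is_high_risk(traits: SystemTraits) -> bool:
      # Under the bill's definition, any one trait qualifies the system.
      return any(vars(traits).values())

  # A system that systematically monitors a public space qualifies on its own.
  print(is_high_risk(SystemTraits(monitors_public_place=True)))  # True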

That would certainly put a lot of AI work into the high-risk zone, wouldn't it? Can you imagine the list of companies being labeled high risk?

The FTC would then regulate the companies using "high-risk AI," and violators of those regulations could face more than FTC sanctions: the bill says the U.S. Attorney General could sue violators on behalf of the American people.

How would 'high-risk AI' be identified?

The Artificial Intelligence legislation would direct the FTC to require impact assessments of how businesses are using AI. The high-risk determination would be based on that analysis.

Here are the factors proposed for that process:

The term "automated decision system impact assessment" means a study evaluating an automated decision system and the automated decision system’s development process, including the design and training data of the automated decision system, for impacts on accuracy, fairness, bias, discrimination, privacy, and security....

Specifics the impact assessment would look at:

  • a detailed description of the automated decision system, its design, its training data, and its purpose;
  • an assessment of the relative benefits and costs of the automated decision system in light of its purpose, taking into account relevant factors, including data minimization practices;
  • the duration for which personal information and the results of the automated decision system are stored;
  • what information about the automated decision system is available to consumers;
  • the extent to which consumers have access to the results of the automated decision system and may correct or object to its results;
  • the recipients of the results of the automated decision system.
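
Read as data, the assessment is essentially a structured record. As a rough sketch, again in Python and again with hypothetical field names rather than anything taken from the bill:

  from dataclasses import dataclass, field

  @dataclass
  class ImpactAssessment:
      # Hypothetical record of what the bill says an assessment must cover.
      system_description: str                # design, training data, and purpose
      benefit_cost_analysis: str             # incl. data minimization practices
      retention_period: str                  # how long personal info and results are stored
      info_available_to_consumers: str       # what consumers can learn about the system
      consumers_can_correct_or_object: bool  # access to results, correction, objection
      result_recipients: list[str] = field(default_factory=list)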

It's interesting to see how much of the impact assessment focuses on consumer control and privacy; it echoes themes from the EU's GDPR and California's CCPA.

This time, however, the federal legislation would regulate Artificial Intelligence like it's never been regulated before.

Will the Algorithmic Accountability Act of 2019 make the United States a role model for how Artificial Intelligence is used?

Or will this hurt business and its ability to innovate with AI in a global marketplace?

At the very least, it will create a lot of discussion.

Perhaps even more than Senator Ron Wyden's 2018 proposal to send CEOs, CISOs, and CPOs to jail.
