White House Embraces Artificial Intelligence, But Urges Caution in Its Use

Published On October 26, 2016 | By Anna Myers | General

Artificial Intelligence (“AI”) systems use big data to inform decision-making and machine learning processes. These processes raise fairness, safety, and accountability concerns because big data-powered AI can be used to make decisions about an individual’s housing, employment, and creditworthiness, among other things. The Executive Office of the President’s report on AI addresses these concerns, with the White House calling for more transparency in how big data informs AI decisions. The White House previously reported on big data in 2014 and 2016, highlighting opportunities for big data to promote fairness and recommending ways to mitigate discrimination.

The complexity of these systems, coupled with the sheer amount of data used to make a decision, makes it challenging to determine which data (or which underlying algorithms) influence, or should influence, a particular outcome. Unfortunately, even well-intentioned developers of an AI system can inadvertently create one that produces biased results. For example, AI systems that streamline job or housing screening may rely on biased data, leading to inadvertent violations of discrimination laws, as the sketch below illustrates. This risk can be mitigated by practicing good data hygiene: understanding the sources of the data you collect, how that data is processed, and ultimately how it is used. The White House report urges those in the AI industry to provide clear and transparent notice to consumers to help prevent discrimination.
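
As a purely hypothetical illustration (not drawn from the White House report), the short Python sketch below shows how a screening rule trained to mimic historically biased decisions can reproduce that bias even though the protected group is never an input; all data, feature names, and weights here are invented.

```python
import random

random.seed(0)

# Hypothetical illustration: historical hiring labels encode past bias,
# and a screening rule that imitates them reproduces the bias even
# though group membership is never used as an input.

def make_applicant():
    group = random.choice(["A", "B"])
    skill = random.gauss(50, 10)  # identical distribution for both groups
    # Proxy feature correlated with group (e.g., a zip code signal).
    proxy = random.gauss(60 if group == "A" else 40, 5)
    return group, skill, proxy

def screen(skill, proxy):
    # "Learned" rule that mimics biased historical decisions; it sees
    # only skill and the proxy, never the group itself.
    return 0.5 * skill + 0.5 * proxy > 55

applicants = [make_applicant() for _ in range(10_000)]

for g in ("A", "B"):
    members = [a for a in applicants if a[0] == g]
    rate = sum(screen(skill, proxy) for _, skill, proxy in members) / len(members)
    print(f"group {g}: selection rate {rate:.1%}")
```

Even though skill is distributed identically across groups, the proxy drives a large gap in selection rates, which is exactly the kind of disparate outcome that data hygiene is meant to catch.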

Use of AI can also create liability under the Fair Credit Reporting Act (“FCRA”), for example. Specialty credit bureaus historically evaluated a customer’s creditworthiness by manually analyzing credit history, but AI systems powered by big data now automate this function. Where businesses rely on information in a consumer report to deny credit or to offer less favorable credit terms, the FCRA requires that they notify consumers of the information that led to the decision. As noted in the FTC’s big data report, however, the FCRA applies to predictive analytics based on both traditional credit characteristics and non-traditional ones, such as social media information. This makes FCRA-compliant notice difficult, because businesses simply may not be able to identify the data that drove the decision, which in turn may prevent consumers from exercising their FCRA rights to dispute the accuracy or completeness of the reports. Blind reliance on big data-powered AI systems, without consideration of these legal obligations, may thus lead to regulatory fines and civil lawsuits.
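
To make the notice problem concrete, here is a minimal, hypothetical sketch (all feature names and weights are invented, not taken from the FTC report) of why an opaque scoring model may not yield a single “reason” for a denial: when features interact, a naive attribution splits credit ambiguously.

```python
# Hypothetical sketch: naive leave-one-out attribution over an opaque
# scoring model, illustrating why identifying "the" data that drove a
# credit decision can be hard. All names and weights are invented.

def opaque_score(f):
    # Stand-in for a big data-powered model; the interaction term means
    # no single feature cleanly owns the outcome.
    return 0.4 * f["payment_history"] + 0.6 * f["social_signal"] * f["utilization"]

def leave_one_out(features, baseline=0.0):
    """Attribute the score drop from zeroing out each feature in turn."""
    base = opaque_score(features)
    return {
        name: base - opaque_score({**features, name: baseline})
        for name in features
    }

applicant = {"payment_history": 0.2, "social_signal": 0.9, "utilization": 0.8}
print(f"score: {opaque_score(applicant):.3f}")
for name, contribution in leave_one_out(applicant).items():
    print(f"{name}: {contribution:+.3f}")
# The per-feature contributions do not sum to the score: the two
# interacting features each get full credit for their joint term, so
# which input "led to" the decision depends on the attribution method.
```

With hundreds of non-traditional inputs feeding a non-linear model, this ambiguity compounds, which is what makes producing the specific reasons a notice requires so difficult.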

IoT device companies using big data should also take care not to run afoul of regulations. Smart cars and aircraft are currently regulated by transportation agencies, but the White House report suggests that the use of AI in IoT devices may also be subject to FTC regulation. The FTC’s approach to regulating such new technology continues to evolve, but we can expect it to apply its long-standing requirements that companies provide clear notice of information collection and use practices, notice of data sharing practices, and appropriate data security.


About The Author

Anna Myers was a ZwillGen Fellow 2016-2017.
