This article was co-authored by Joseph Bambara, CIPP/US.
Gottfried Wilhelm Leibniz (1646-1716), the famous lawyer, mathematician, and polymath, once said: “It is unworthy of excellent men to lose hours like slaves in the labor of calculation which could safely be relegated to anyone else if machines were used.” He asked why it should not be possible for machines to complete every step of the chain of reasoning that occurs in a lawyer’s mind while deciding a matter. Why can machines not calculate who is right in a dispute between people, or how to find a fair and equitable solution? If these questions could be asked in the 17th century, then here in 2020 the answers cannot be far away.
Artificial intelligence (“AI”) is a field of computer science. AI is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. More concretely, AI is a system’s ability to correctly interpret external data, to learn from such data, and to use that learning to perform actions, for example by delegating requests to a set of services (e.g., reading an IoT sensor, executing a smart contract, or storing an asset on a blockchain) that achieve specific objectives. A decision to deploy AI can also raise fundamental ethical and moral issues for society. These complex issues are of critical importance to our future. Their resolution will be captured and reflected in our laws and legal framework, through statutes and regulations to the extent that a political consensus develops, or through case law. Because legal responsibility is a subset of ethical responsibility, for AI to gain acceptance and trust in the new economy, businesses and professionals (e.g., we attorneys) will need to take ethical as well as legal considerations into account.
Artificial intelligence and legal use cases
AI and document analysis
AI-powered software supports a number of legal use cases. It already improves banking by identifying potentially fraudulent transactions. It improves the efficiency of eDiscovery and document analysis: it can review documents and flag them as relevant to a particular case. Once a certain type of document is denoted as relevant, machine learning algorithms can get to work finding other documents that are similarly relevant. Machines are much faster at sorting through documents than humans and can produce output and results that can be statistically validated. They can help reduce the load on the human workforce by forwarding only the documents that are questionable, rather than requiring humans to review every document.
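The following is a minimal sketch of how this supervised review workflow might look, using the open-source scikit-learn library; the documents and labels are entirely hypothetical. Commercial eDiscovery platforms use far more sophisticated models, but the basic loop (train on human-coded documents, then rank the unreviewed set) is the same.

```python
# A minimal sketch of ML-assisted document review using scikit-learn.
# Documents and labels below are hypothetical, for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Documents a human reviewer has already coded: 1 = relevant, 0 = not relevant.
reviewed_docs = [
    "Email discussing the merger timeline and due diligence requests",
    "Cafeteria menu for the week of March 3rd",
    "Draft indemnification clause circulated by opposing counsel",
    "Office holiday party invitation",
]
labels = [1, 0, 1, 0]

# Documents not yet reviewed by a human.
unreviewed_docs = [
    "Follow-up on due diligence document requests from the merger team",
    "Reminder: parking garage closed on Friday",
]

# Learn word-weight features from the coded set and fit a classifier.
vectorizer = TfidfVectorizer(stop_words="english")
X_train = vectorizer.fit_transform(reviewed_docs)
model = LogisticRegression().fit(X_train, labels)

# Score the unreviewed set; likely-relevant documents are routed to
# human reviewers first, the rest can be deprioritized.
scores = model.predict_proba(vectorizer.transform(unreviewed_docs))[:, 1]
for doc, score in sorted(zip(unreviewed_docs, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {doc}")
```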
AI contract review and management
A large portion of the work law firms do on behalf of clients is reviewing contracts to identify risks and drafting issues that could harm the client. Attorneys redline items, edit contracts, counsel clients on whether or not to sign, and help them negotiate better terms. AI can help analyze individual contracts as well as contracts in bulk; the naive sketch below illustrates the idea.
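As a toy illustration only (commercial review tools rely on trained models, not simple keyword matching), the following sketch flags standard clauses that appear to be missing from a contract. The clause names and trigger phrases are hypothetical.

```python
# A naive illustration of automated contract review: flag standard clauses
# that appear to be missing. The clause list and phrases are hypothetical.
EXPECTED_CLAUSES = {
    "governing law": ["governing law", "governed by the laws of"],
    "indemnification": ["indemnify", "indemnification", "hold harmless"],
    "limitation of liability": ["limitation of liability", "liable for"],
    "termination": ["termination", "terminate this agreement"],
}

def flag_missing_clauses(contract_text: str) -> list[str]:
    """Return expected clauses with no matching phrase in the contract."""
    text = contract_text.lower()
    return [
        clause
        for clause, phrases in EXPECTED_CLAUSES.items()
        if not any(phrase in text for phrase in phrases)
    ]

sample = (
    "This Agreement shall be governed by the laws of New York. "
    "Either party may terminate this Agreement on 30 days' notice."
)
print(flag_missing_clauses(sample))
# ['indemnification', 'limitation of liability']
```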
Automated Legal Advice Tools (ALATs)
Automation and artificial intelligence are making their mark in the legal industry. AI legal tools can currently be categorized as follows:
- Legal chatbots like Lexi (see aigeneration.net) provide customized legal information and documents through online interactive chat.
- Legal applications like Picture It Settled (see http://www.pictureitsettled.com) predict parties’ negotiating patterns and allow parties to refine their settlement strategies.
- Virtual assistants like Voicera listen, record, transcribe, and take notes.
- Legal document automation tools like clerky.com provide startups with automated company incorporation documents.
- Legal document review tools like contractprobe.com provide a quality check on contracts and identify missing or unusual clauses.
- Legal artificial intelligence like COMPAS Core is used by judges to predict the risk that an accused person will commit a new violent crime, re-offend, or flee. That said, it presents potential discrimination issues that we address later.
- Legal data analytics and prediction tools like premonition.ai can predict which lawyers will win cases in which courts, even before they appear in court.
- Legal technology companies like clause.io help develop smart applications by combining rules, reasoning, decision management, and document automation.
Artificial intelligence and discrimination
In 2001: A Space Odyssey, the artificial intelligence known as HAL attempted to take over the spacecraft. The film dramatized the risk that AI could one day take over the world and turn on humans. There is, perhaps, a far more immediate risk: discrimination. AI and machine learning programs do not have a “sense” of objective fairness. Rather, they act on the basis of algorithms and data. Algorithms are complex mathematical formulas and procedures implemented in computers to process information and solve tasks. Advancements in artificial intelligence are the result of integrating computer algorithms into systems that not only follow instructions but also learn. As more decisions become automated and processed by algorithms, those processes become opaque and less accountable. Yet algorithms are created by humans, and, inevitably, human bias becomes reflected, to a greater or lesser extent, in the algorithms they create, e.g., bias in automated loan application software. This is called “algorithmic bias.” Further, the data from which AI learns, and on which it acts, inevitably includes the biases of the humans whose data is used.

This kind of bias was found in risk assessment software known as COMPAS, which courts use to forecast which persons convicted of crimes are most likely to commit offenses after release into the general population. When the news organization ProPublica compared COMPAS risk assessments for 10,000 people arrested in one county in Florida with data showing which ones went on to commit further offenses, it discovered that when the algorithm was “right,” its decision making was fair, but when the algorithm was “wrong,” people of color were almost twice as likely to be labeled higher risk even though, in fact, they did not re-offend.
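To make that finding concrete, the short sketch below (using entirely made-up records, for illustration only) computes a false positive rate per group: among people who did not re-offend, the share who were nonetheless labeled high risk. A disparity in this rate between groups is exactly the kind of unfairness the ProPublica investigation surfaced.

```python
# A minimal sketch of the kind of error-rate audit ProPublica performed
# on COMPAS. The records below are fabricated for illustration.
from collections import defaultdict

# (group, labeled_high_risk, actually_re_offended)
records = [
    ("A", True, False), ("A", True, True), ("A", False, False), ("A", True, False),
    ("B", True, True), ("B", False, False), ("B", False, False), ("B", True, False),
]

fp = defaultdict(int)   # high-risk labels among people who did NOT re-offend
neg = defaultdict(int)  # people who did NOT re-offend

for group, high_risk, re_offended in records:
    if not re_offended:
        neg[group] += 1
        if high_risk:
            fp[group] += 1

# False positive rate per group: of those who did not re-offend,
# what share was nonetheless labeled high risk?
for group in sorted(neg):
    print(f"group {group}: false positive rate = {fp[group] / neg[group]:.2f}")
# group A: false positive rate = 0.67
# group B: false positive rate = 0.33
```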
The role humans play in the creation of AI is frequently misunderstood. Most people are not familiar with how developers create algorithms and organize data to bring AI to life. Humans play a critical role in that process, so businesses must build training and safeguards into their processes to identify and reduce bias in the algorithms they create. Since many businesses outsource at least some of their software development to third parties, this obligation extends to their vendors and service providers. Trust and transparency in systems are prerequisites to widespread adoption and sustainability, and accountability is essential to achieving trust. Traceability (as is achievable through blockchains), communication, and the ability to evolve processes in real time are important for businesses to ensure their AI systems consume training data that reflects accurate ground truth. They also must be able to erect roadblocks and make improvements as necessary to eliminate potential bias in the data. Further, the public must have enough access to understand how AI systems are created and improved, and to understand the results of the data processes that impact their lives, so they can correct errors and contest decisions made by algorithms.

Personal data collected from our social connections and online activities is used by governments and companies to make determinations about such matters as our ability to fly, obtain a job, and get a security clearance, as well as the severity of criminal sentencing. These opaque, automated decision-making processes carry risks of profiling and discrimination and undermine our privacy and freedom of association. Without knowledge of the factors that provide the basis for decisions, it is impossible to know whether governments and companies engage in practices that are deceptive, discriminatory, or unethical. Algorithmic transparency, for example, plays a key role in resolving the question of Facebook’s role in Russian interference in the 2016 US election cycle. The business and legislative responses thereto are yet to be resolved.
Companies will need to develop an AI policy and an AI incident plan to mitigate the risks posed by adopting AI in their businesses. The AI policy should integrate with existing, working processes and procedures, such as the software development life cycle, cybersecurity incident response plans, HR policies, and crisis management plans. Businesses using AI, such as banks, may already be required to have cyber incident response plans; see our article on the New York State Cybersecurity Regulation (23 NYCRR 500), which requires covered entities to have written security policies, and those cyber incident plans may be extended to encompass AI incidents. To the extent that the EU’s General Data Protection Regulation (“GDPR”) applies to a company, the company’s policy should account for the requirements of Article 22 of the GDPR, which governs automated processing, and Article 15 of the GDPR, which entitles data subjects to receive meaningful information about the logic involved in automated decision-making related to them.
We have highlighted the danger of AI bias and the need for explainable AI; existing law already prohibits bias in certain financial decisions and requires an explanation of adverse decisions, including those that may be made by AI systems. Under the Fair Credit Reporting Act (“FCRA”), 15 U.S.C. § 1681 et seq., among other requirements, any financial institution that uses a credit report to deny a consumer’s application for credit, insurance, or employment must tell the consumer and must give the consumer the name, address, and phone number of the agency that provided the information. Upon a consumer’s request for a credit score, a consumer reporting agency must supply a statement and notice that includes “all of the key factors that adversely affected the credit score of the consumer in the model used,” and the agency must provide trained personnel to explain to the consumer any information required to be furnished under the Act. See 15 U.S.C. § 1681g(f) and (g) for the requirements for adverse action notices. Accordingly, it is prudent for a business to enact AI policies that minimize impermissible bias and promote explainability.
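As an illustration only (a hypothetical linear scoring model, not any actual credit scoring methodology, and not legal advice on FCRA compliance), the sketch below shows how per-feature contributions to a score can be ranked to surface the key factors that adversely affected it, in the spirit of the § 1681g(f) key-factor disclosure.

```python
# Illustrative only: a hypothetical linear scoring model used to show how
# "key adverse factors" can be derived from per-feature contributions.
# Weights, baseline, and features are invented for this example.
WEIGHTS = {
    "years_of_credit_history": 4.0,
    "on_time_payment_rate": 50.0,
    "credit_utilization": -60.0,   # high utilization lowers the score
    "recent_hard_inquiries": -8.0,
}
BASELINE = 600.0

applicant = {
    "years_of_credit_history": 2,
    "on_time_payment_rate": 0.8,
    "credit_utilization": 0.9,
    "recent_hard_inquiries": 4,
}

# Score the applicant and record each feature's contribution.
contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
score = BASELINE + sum(contributions.values())

# The most negative contributions are the key factors that adversely
# affected the score, listed worst first.
key_factors = sorted(
    (f for f, c in contributions.items() if c < 0),
    key=lambda f: contributions[f],
)
print(f"score: {score:.0f}")                 # score: 562
print("key adverse factors:", key_factors)   # utilization, then inquiries
```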
Summary
Emerging technologies must be a primary focus of the next generation of attorneys and the businesses they serve. They must address their new professional responsibilities and support and promote ethical principles in order to positively impact the relevant legal and regulatory frameworks. The immense interest in use cases involving AI, and the resulting transformation, underscores that law today is as much about the new models, tools, and skillsets that drive it as it is about legal practice expertise. The legal profession is being reshaped by rapidly evolving technology; legal practice and legal service delivery are both changing. New practice areas around AI are emerging as law struggles to keep pace with the speed of business change in the digital age. The emergence of new industries demands that businesses engage a team, like ours at Withers, that can not only provide legal expertise in support of new areas but also possesses the intellectual agility to master them quickly. Our Withers team can help you prepare your business, from both a technical and a legal standpoint, to incorporate these emerging technology trends in a customized, secure, and efficient manner. We have the experience and know-how to help you stay ahead of the competition.
For more information, please contact your regular Withers attorney or the author of this piece.