Hype vs. Reality: Does AI Really Deliver On Its Promises?

Over the last decade, there has been a great deal of hype surrounding Artificial Intelligence (AI) in the ediscovery industry. But how much of it is reality, and how much is science fiction?

AI, in layman’s terms, is the ability of a computer (or a robot controlled by a computer) to perform tasks that require human intelligence and discernment. Machine learning (ML), often confused with AI itself, is a subset of AI that gives computers the ability to learn without being explicitly programmed. Instead, the computer uses human input to make educated guesses that extend beyond that input. Machine learning is the foundation of the predictive coding technology that has transformed ediscovery, reducing the volume of reviewable data by as much as 80% in some matters.
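
To make the concept concrete, here is a minimal sketch of the kind of supervised text classification that underlies predictive coding. The library choice (scikit-learn), the documents, and the labels are all illustrative assumptions, not any vendor's actual implementation.

```python
# A minimal sketch of the supervised learning behind predictive coding,
# using scikit-learn. Documents and labels are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Reviewer-coded seed set: 1 = responsive, 0 = not responsive.
train_docs = [
    "quarterly revenue forecast attached for the merger",
    "lunch on friday? the usual place",
    "draft purchase agreement for review before the deal closes",
    "fantasy football picks for this week",
]
train_labels = [1, 0, 1, 0]

# Convert text to numeric features, then fit a classifier on the human coding.
vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(train_docs)
model = LogisticRegression().fit(X_train, train_labels)

# The model extends the reviewers' judgments to unreviewed documents,
# ranking them by predicted probability of responsiveness.
unreviewed = ["please sign the merger agreement", "happy birthday!"]
scores = model.predict_proba(vectorizer.transform(unreviewed))[:, 1]
for doc, score in zip(unreviewed, scores):
    print(f"{score:.2f}  {doc}")
```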

Many solutions claim that machine learning models can make the ediscovery process significantly more efficient, but is it a reliable method? Marketing promises many impressive results from AI capabilities in the discovery process, such as:

Automated review of thousands of documents in a matter of seconds.
Instant intelligent recommendations for legal teams.
The ability to discover patterns in documents invisible to the human eye.
New and exciting discoveries made with constantly learning ML models.

While these remarkable results are possible, some unrealistic expectations continue to flourish throughout the ediscovery industry.

Unrealistic Expectations of AI in Ediscovery
Plenty of solutions overpromise what AI can actually do to improve ediscovery, including claims that it delivers exact predictions in a fraction of the time. Although AI can dramatically streamline the ediscovery process, it is not entirely automated, and legal professionals need time to train models properly. Other issues can arise as well, including:

Data used to make predictions may not be accurately reviewed or fully representative of the dataset, particularly when it is pulled from a biased search.
Predictions can be prone to errors, making thought-out and agreed-upon metrics, such as recall and precision, an essential component of defensibility in court (see the sketch after this list).
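
As a rough illustration of what such agreed-upon metrics look like, the sketch below computes precision and recall from a validation sample; the counts are invented for illustration, not drawn from any real matter.

```python
# Hypothetical results from a blind validation sample coded by human reviewers.
true_positives = 180   # predicted responsive, reviewer agreed
false_positives = 20   # predicted responsive, reviewer disagreed
false_negatives = 45   # predicted not responsive, but actually responsive

precision = true_positives / (true_positives + false_positives)
recall = true_positives / (true_positives + false_negatives)

print(f"precision: {precision:.2f}")  # 0.90: how clean the predicted set is
print(f"recall:    {recall:.2f}")     # 0.80: how much responsive material was found
```
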
In reality, both unsupervised and supervised ML models have limitations related to human effort. An unsupervised model does not learn from human judgments, so the intended use case needs to be considered carefully; it might, for example, cluster documents by topic, but it cannot decide which topics matter to the case. Supervised ML models, on the other hand, are limited by human input and bias, since the accuracy of the model partially depends on the accuracy of the training data.
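
For contrast, here is a minimal sketch of an unsupervised approach, assuming scikit-learn's KMeans clustering; the documents and cluster count are illustrative assumptions.

```python
# Unsupervised learning on documents: the model groups similar texts without
# any human labels, so it cannot say which group is responsive.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "merger agreement draft for review",
    "signed purchase agreement attached",
    "team lunch rescheduled to noon",
    "cafeteria menu for next week",
]

X = TfidfVectorizer().fit_transform(docs)

# KMeans finds 2 groups purely from word patterns; it never sees a
# "responsive" label, so deciding what each cluster means is still human work.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for doc, label in zip(docs, labels):
    print(label, doc)
```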


Read the full article here.
