Why machine learning both helps and worries law practitioners

Data drives the practice of law. Whether to win cases or land clients, lawyers rely on the vast swathes of information created by each court case; their strategies depend upon the meaningful patterns identified within large volumes of data from prior litigation. What may look to the untrained eye like lines of legal jargon is valuable insight to the experienced lawyer – but that isn’t to say they couldn’t do with a helping hand.

Enter Artificial Intelligence.

So far, the conversation surrounding AI and the legal industry has largely been dominated by talk of robots rising up to replace lawyers. Now, specialist machine learning software is proving that lawyers and AI can work together to reach levels of efficiency the industry has never seen before.

Take ROSS Intelligence, for example. Every claim requires diligent research, but it takes time for a lawyer to sift through mounds of data to find meaning. Now, these professionals can take advantage of the natural language processing capabilities that ROSS Intelligence possesses to gain instant access to recommended readings, related case law and useful secondary resources – all by asking it a question.
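
To make the idea concrete, here is a minimal sketch of the underlying retrieval step – matching a plain-English question against a corpus of case summaries by keyword-weighted similarity. It assumes Python with scikit-learn, the case summaries and the question are invented, and it illustrates the general idea only, not ROSS Intelligence’s actual technology.

```python
# Toy natural-language retrieval over case summaries (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical case summaries - placeholder text, not real case law.
cases = [
    "Tenant sued landlord for breach of a commercial lease after flooding damage.",
    "Employee alleged wrongful dismissal following a restructuring of the firm.",
    "Software vendor disputed ownership of source code under a services contract.",
]

question = "Who owns custom code written under a consulting agreement?"

# Represent the corpus and the question as TF-IDF vectors,
# then rank the cases by cosine similarity to the question.
vectoriser = TfidfVectorizer(stop_words="english")
doc_matrix = vectoriser.fit_transform(cases)
query_vec = vectoriser.transform([question])

scores = cosine_similarity(query_vec, doc_matrix).ravel()
for score, summary in sorted(zip(scores, cases), reverse=True):
    print(f"{score:.2f}  {summary}")
```

Commercial research tools build far more sophisticated language understanding on top of this basic ranking step, but the core task – turning a question into a ranked list of relevant material – is the same.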

Of course, this is just one of the ways in which machine learning is assisting lawyers in their day-to-day practice. Kira Systems is already proving to be a critical tool in a lawyer’s arsenal with its ability to perform highly accurate due diligence contract reviews, searching and extracting relevant content at top speed. Meanwhile, another tool, Ravel Law, claims to be able to predict outcomes based on relevant case law, judge rulings and referenced language from more than 400 courts.
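
To give a sense of what “searching and extracting relevant content” looks like at its simplest, the sketch below tags contract clauses by keyword. Kira Systems relies on trained machine learning models rather than hard-coded word lists, so treat this purely as a stand-in for the task; the contract excerpt and categories are invented.

```python
import re

# An invented three-clause contract excerpt; real due diligence runs over
# thousands of pages, but the extraction task looks the same in miniature.
contract = (
    "1. The supplier shall indemnify the client against third-party claims. "
    "2. Either party may terminate this agreement with 30 days written notice. "
    "3. This agreement is governed by the laws of England and Wales."
)

# Categories a reviewer might care about, keyed by simple trigger phrases.
categories = {
    "indemnity": ["indemnify", "indemnification"],
    "termination": ["terminate", "termination"],
    "governing law": ["governed by", "jurisdiction"],
}

# Split on the numbered headings and tag each clause with matching categories.
clauses = [c.strip() for c in re.split(r"\d+\.\s*", contract) if c.strip()]
for clause in clauses:
    hits = [name for name, words in categories.items()
            if any(word in clause.lower() for word in words)]
    print(f"{hits or ['uncategorised']}: {clause}")
```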

Unlike most technology, these systems improve with use: the more they learn, the more intelligent they become and the more capable they are of helping lawyers exceed client expectations.
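
As a rough illustration of what “improving with use” means in practice, the sketch below updates a simple text classifier incrementally as newly reviewed examples arrive, using scikit-learn’s partial_fit interface. The clauses and labels are invented, and this is not the training pipeline of any of the products mentioned above.

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

# A stateless vectoriser plus an online linear classifier: the model can be
# refined every time a lawyer confirms or corrects one of its suggestions.
vectoriser = HashingVectorizer(n_features=2**16)
model = SGDClassifier(random_state=0)
classes = ["irrelevant", "relevant"]

# Initial batch of reviewed clauses (invented examples).
clauses = [
    "The supplier shall indemnify the client against third-party claims.",
    "This agreement is printed in duplicate on A4 paper.",
]
labels = ["relevant", "irrelevant"]
model.partial_fit(vectoriser.transform(clauses), labels, classes=classes)

# Later feedback arrives; the model is updated without retraining from scratch.
new_clauses = ["Liability is capped at the total fees paid in the prior year."]
model.partial_fit(vectoriser.transform(new_clauses), ["relevant"])

# Ask the updated model about an unseen clause.
test = ["The contractor indemnifies the customer for IP infringement."]
print(model.predict(vectoriser.transform(test)))
```

Each call to partial_fit nudges the model rather than rebuilding it, which is what allows this kind of software to keep learning from everyday use.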

According to Sherry Askin, CEO of Omni Software Systems, machine learning is much like the process of intellectual development: just as a child expands their vocabulary by soaking up the words they hear around them, we improve the performance of deep learning programs not only by inputting raw data, but also by providing context and relevance to the information.

“The machine is no longer a vessel of information,” Askin explained. “It figures out what to do with that information and it can predict things for you.”

What could possibly go wrong?

While our gaze has been somewhat fixed on the apocalyptic vision of an AI uprising, a greater and more tangible threat associated with this technology has been hiding in the shadows: a threat of human origin.

Considering the amount of sensitive data they handle every day, law firms have become a prime target for cyber-criminals. While machine learning software may help them sort through large volumes of information at lightning speed and improve client satisfaction, it isn’t immune to infiltration by a malicious hacker. These opportunistic individuals or groups can and will target legal AI software and attempt to mistrain it in order to steal valuable data from their victims.
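
The “mistraining” risk is easiest to see with a toy example. The sketch below, again using scikit-learn and invented text, shows an online classifier being fed deliberately mislabeled examples; an attacker who can inject enough of them can push the model’s behaviour in a direction that suits them. It illustrates the general idea of data poisoning, not an attack on any specific legal product.

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectoriser = HashingVectorizer(n_features=2**16)
model = SGDClassifier(random_state=0)
classes = ["confidential", "public"]

# Legitimate training data: settlement language should stay confidential.
clean_texts = [
    "settlement terms and payment schedule for the claimant",
    "press release announcing the published verdict",
]
clean_labels = ["confidential", "public"]
model.partial_fit(vectoriser.transform(clean_texts), clean_labels, classes=classes)

probe = ["draft settlement terms for an upcoming case"]
print("before poisoning:", model.predict(vectoriser.transform(probe)))

# An attacker repeatedly submits settlement language labelled as public,
# steadily pushing the decision boundary in their favour.
poison_texts = ["summary of settlement terms"] * 20
poison_labels = ["public"] * 20
model.partial_fit(vectoriser.transform(poison_texts), poison_labels)

print("after poisoning:", model.predict(vectoriser.transform(probe)))
```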

We all saw what happened when Microsoft unveiled its AI-powered chatbot ‘Tay’: it only took ten minutes before Twitter trolls flocked to Tay’s account to take advantage of her learning capabilities. What started as a sweet and innocent AI fast became a highly offensive account, spouting controversial opinions and vulgar language. Inevitably, Tay was shut down due to the stream of inflammatory messages that the software had determined to be “normal Twitter behaviour.”

Through this incident, Microsoft quickly learned what happens when AI falls into the wrong hands. Where there is an opportunity to take advantage of or exploit technology, hackers will not hesitate to do their worst. With this in mind, we must be careful not to trust artificial intelligence blindly. To protect the operations it automates, we must remember that while these systems are trained to be effective, they can also be steered off course.

This is where human input is absolutely critical: without trained analysts and security experts present, law firms toying with new technology open their doors to threats they may not have even considered. Machine learning software may have made a name for itself in the legal industry, but it will always need the human touch to teach it, train it and protect it from falling into the wrong hands.