A Short History of AI
Artificial intelligence ("AI") is a technology that mimics or replicates the type of human behavior equated with "thinking." AI is an old concept, first imagined by the ancient Greeks in their myths of "golden robots." During the Middle Ages and the Renaissance, philosophers and intellectuals developed more formal reasoning about automatons and "logical machines." It was during this period that the all-important concept of the "algorithm" emerged. The major breakthrough, however, came in the early 20th century with the advent of mathematical logic, and the development of computing machines in the 1940s and 1950s made mathematical logic a viable tool for computation.
An algorithm is a sequence of instructions that manipulates information to achieve a desired result. Every computer program is a set of algorithms. Normally, algorithms are "hard-wired" and thus entirely predictable. That predictability makes computers reliable, or pedantic, depending upon the competence of the software engineer.
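As a minimal illustration (the rule and the dollar amounts here are invented for the example), a hard-wired algorithm applies the same fixed steps every time and never deviates:

```python
# A hard-wired algorithm: a fixed sequence of instructions.
# Given the same input, it always produces the same output.
def late_fee(days_overdue: int) -> float:
    """Hypothetical late fee: $5 flat plus $1 per day overdue."""
    if days_overdue <= 0:
        return 0.0
    return 5.0 + 1.0 * days_overdue

print(late_fee(0))   # 0.0
print(late_fee(10))  # 15.0 -- same answer every time; nothing is learned
```

Changing this rule means a human rewriting the code; the program cannot adjust itself based on experience.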
In contrast, AI is an algorithm that can be *changed* with experience. AI comes in two main varieties: learning and creative. Learning AI typically employs a technology called neural networks. Creative AI uses another technology called genetic algorithms.
Neural networks are akin to the synapses in human brains, and they work best when you want to compare something with a known thing. That makes "the known" an important factor in neural networks. As with humans, a neural network acquires "the known" through training. Training is a tedious chore, but it brings substantial benefits. Instead of enlisting dozens of software engineers to (re)write a static-algorithm-based program to achieve a new result, AI can "learn" and adapt to new things with little or no human intervention. This ability to learn makes AI-based programs more flexible, able to accommodate new missions without an expensive (and time-consuming) overhaul. An instance of AI can be adapted to a new mission simply by copying it and giving it new data, virtually for free. That makes AI attractive to businesses and governments that can afford the capital-intensive design/training cycle.
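To make "training against the known" concrete, here is a toy sketch of the smallest possible neural network, a single artificial neuron (a perceptron). The task and data are invented for illustration: classify a "document" as responsive (1) or not (0) based on two keyword counts. The network compares its guess against the known label and nudges its weights toward the correct answer:

```python
# A single artificial neuron (perceptron) -- a toy stand-in for a
# neural network. Data is invented: each example is
# ([count of "contract", count of "lunch"], responsive-or-not label).
examples = [([3, 0], 1), ([4, 1], 1), ([0, 3], 0), ([1, 4], 0)]

weights = [0.0, 0.0]
bias = 0.0

def predict(features):
    # Weighted sum of the inputs, thresholded at zero.
    s = bias + sum(w * x for w, x in zip(weights, features))
    return 1 if s > 0 else 0

# Training: compare each prediction against "the known" (the label)
# and adjust the weights in proportion to the error.
for _ in range(20):
    for features, label in examples:
        error = label - predict(features)
        for i, x in enumerate(features):
            weights[i] += 0.1 * error * x
        bias += 0.1 * error

print([predict(f) for f, _ in examples])  # matches the labels after training
```

Real document-review systems use far larger networks and far more data, but the principle is the same: no engineer writes a rule saying "contract means responsive"; the program infers it from labeled examples.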
Genetic algorithms are used to "breed" new designs and technologies, and are thus creative. Genetic algorithms are general-purpose optimizers, meaning that they can evolve nearly anything that someone can define and measure. They work best when you know what you want but don't know how to make it. Genetic algorithms, incidentally, can be used to breed neural networks: AI breeding more AI. A good application of a genetic algorithm is one embedded within a wholly digital corporation (no employees). This form of AI can write software to adapt the digital corporation to a new business model, or write code to take advantage of a new business opportunity, and it can be made to work autonomously. Don't laugh. Vermont has already enacted a law to enable exactly that type of corporate entity.1
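A minimal sketch shows the "know what you want, not how to make it" idea. The target word and all parameters here are invented for the example: the code never constructs the answer directly; it only scores candidates (the fitness function) and breeds the fittest ones, generation after generation, until the answer emerges:

```python
import random

# A minimal genetic algorithm. We can *score* a candidate (fitness)
# but we never write code that builds the answer directly.
random.seed(42)
TARGET = "HOLE"  # arbitrary goal chosen for illustration
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(s):
    # Higher is better: number of characters matching the target.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s):
    # Randomly change one character.
    i = random.randrange(len(s))
    return s[:i] + random.choice(ALPHABET) + s[i + 1:]

def crossover(a, b):
    # Splice two parents together at a random point.
    i = random.randrange(len(a))
    return a[:i] + b[i:]

# Start with 50 random strings ("designs").
population = ["".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
              for _ in range(50)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    survivors = population[:10]  # selection: only the fittest breed
    population = survivors + [mutate(crossover(random.choice(survivors),
                                               random.choice(survivors)))
                              for _ in range(40)]

print(population[0])
```

Swap the four-letter target for an antenna shape, a circuit layout, or a neural network's wiring, and the same select-breed-mutate loop "invents" a design no human specified.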
Most of the AI that attorneys encounter is of the learning variety. AI can be trained to identify responsive documents, for example. That training, however, almost always requires humans. To take the sting out of an otherwise tedious task, the training process can be couched in a more human-friendly fashion. For example, when you "like" something on Facebook, you're training Facebook's AI (and contributing to that company's value as an unpaid employee). Similarly, you enhance Amazon's AI when you rate a product (again, as an unpaid employee). Training can be accomplished with a few highly skilled (and patient) humans, or with large numbers of humans who each do a small bit. In both cases, certain human characteristics are leveraged to help the AI: humans do the hard thinking, AI does the common thinking.
To be useful, learning AI depends upon three elements: 1) data; 2) fast processors (to manipulate the data); and 3) the Internet (to transmit the data). For learning AI, data is absolutely essential. By data, we mean DATA! The more data, the better. The higher the quality of that data, the better. Data on 100,000 slip-and-falls is a juicy opportunity. On the flip side, no data means no AI. Digging for data on some esoteric area of the law is a waste of time. Data is thus the Achilles' heel of AI, and it limits AI's scope. That means AI is poorly suited to unusual or unique situations where data about the scenario is limited, unless the market for that task is highly valued. So people who work in esoteric areas of the law with limited income can rest a little easier.
Where would AI engineers get data on legal matters? Most likely, attorneys are going to give it to them for free. It would not surprise me to learn in the near future that companies are giving law firms free "data science" tools. "Did you say free?" asks the hard-pressed attorney. "Why yes," says the company, "out of the goodness of our hearts." Once installed, the data science tool will indeed provide some very helpful insights to the lawyers in the firm. In the meantime, that same software will glean data from your documents and feed it to the AI engineers' hungry neural networks. Again, don't laugh. I'm in discussions with a New York-based AI company to do exactly that. Lawyers may be knowledge workers, but most of them really don't understand the value of information, and so are easily duped. Incidentally, the ethical implications of such a scenario are obvious, but the engineer isn't the one who has to worry about them. So read those contracts carefully, lest you help train your replacement and run afoul of your state's ethics rules at the same time.
AI and the Legal Profession
A prominent management consultant named Peter Drucker2 once famously showed a picture of a Black & Decker drill to Black & Decker executives and asked them: is this your product? They scratched their heads (the Black & Decker logo was clearly visible on the drill) and said yes. Drucker next showed them a picture of a hole in a wall, and said, “No, that’s not your product. Your customers are not buying drills, they are buying holes.”3
Lawyers should not lose sight of the fact that clients are not looking for lawyers. Clients are looking to solve their problems. If clients can get their problems solved by using a machine for low cost, they will do it, even if they know that their local lawyer will wither on the vine. David is right that human emotions may command a premium, but emotions also have a cost that not everyone is willing to pay. WalMart proved that.
Can AI destroy the legal profession? Of course it can. Why? Because every profession rests on two pillars: specialized knowledge and specialized skill.
As for the specialized knowledge pillar, those of us who went to law school 30+ years ago remember well the many hours spent in a lonely room with an ambiance imbued with the smell of acid paper, where you could go for months without seeing a non-attorney. The Internet changed all that. Access to specialized legal knowledge can be had just about anywhere, by anyone, for very little money. That’s one pillar of the profession imperiled.
What about the other pillar, specialized skill? Well, you should remember why there are professions in the first place. In order to entice bright people to undergo years of education and apprenticeship to acquire specialized skills, societies confer both privilege and remuneration on those who become professionals. That's why lawyers and doctors get to do things that, if performed by a non-professional, would incur civil or criminal penalties.
The Achilles' heel for the profession, however, is that a non-professional (like your client) who uses a software program to get advice is not practicing law. Neither, incidentally, is the company that creates or owns that software. That's a big loophole, and corporate America is using AI to walk right through it. In the near future, you won't be competing with Baker McKenzie, Fulbright or Jones Day. You'll be competing with Google, Microsoft and Amazon (whose combined market caps far exceed the value of the entire legal industry).
Corporate America is going after the specialized skill pillar of our profession quite simply because law is a $400+ billion per year industry.4 Corporate America is well poised to invest in the up-front costs of AI in order to go after that market via the aforementioned loophole. It will offer society the (automated) benefit of a profession without society having to pay the cost of that profession. Will society continue to incur the cost of professionals if it doesn't have to? Unlikely, and could you blame it?
Will there still be lawyers twenty years from now? Yes, of course, but far fewer of them, and that's the whole point of any automation process. Unfortunately, those few remaining lawyers may not have a state-backed monopoly to help them recoup the cost of their training. Is there anything that lawyers can do about it? Sure: close the loophole. Is that happening? So far, the answer is "no." For one thing, machines would have to be equated with humans for the practice of law, and humans are loath to do that. Secondly, the profession is blithely unaware of the loophole. Finally, it is my experience (from talking to hundreds of lawyers) that lawyers cannot imagine that AI could ever get that good. Are they right? Just ask the Jeopardy! contestants who played IBM's Watson.5
1 See https://legislature.vermont.gov/assets/Documents/2018/Docs/BILLS/S-0269/S-0269%20As%20Passed%20by%20Both%20House%20and%20Senate%20Unofficial.pdf
2 https://en.wikipedia.org/wiki/Peter_Drucker
3 Special thanks to Prof. Jane Winn for reminding me of this classic Drucker quote.
4 See, e.g., http://www.anythingresearch.com/industry/Legal-Services.htm Yes, I know that the medical industry dwarfs the legal industry. Rest assured that most AI-engineers are working in the medical industry. However, many AI engineers don’t have a medical background, and right now the legal industry is “low hanging fruit.”
5 See, e.g., https://en.wikipedia.org/wiki/Watson_(computer)