Artificial intelligence (AI) and machine learning (ML) are the hot new trends in technology. Yet AI has been around for a while. At Docbyte, we’ve been using rule-based AI to automate sorting documents into categories for our digital mailroom. Machine learning is where things get really interesting, as its potential benefits are huge. But for most people, machine learning is as mysterious as distant buildings on a dark, foggy day. Their hazy outlines may give us a general idea of what is involved, but distinguishing a hospital from an office block or counting the windows is only possible by taking a closer look. The same goes for machine learning. As this technology becomes increasingly important, working its way into our everyday lives, it’s high time to lift the misty veil and dispel some of the mysteries surrounding it.
Machine learning – noun: məˌʃiːn ˈlɜːrnɪŋ
The world-renowned Merriam-Webster dictionary defines ‘machine learning’ as “the process by which a computer is able to improve its own performance by continuously incorporating new data into an existing statistical model.” Wikipedia gets more technical: “Machine learning algorithms build a mathematical model based on sample data, known as ‘training data’, in order to make predictions or decisions without being explicitly programmed to perform the task.” Both definitions are spot-on and highlight the most important characteristics of ML:
- A mathematical and statistics-based method
- Little to no human intervention in training the model
- Self-improvement capabilities of the algorithm
ML vs AI
A foray into machine learning territory can soon lead to confusion as distinctions between similar technological concepts – say between ML and AI – may be blurry. Simply put, AI is the umbrella term for the theory and development of computer systems that perform tasks that would normally require human intelligence, i.e. try to simulate intelligent behavior in computers. On this basis, ML is a tool to create AI.
Two types of machine learning
The two most common and important approaches to creating a model with machine learning are supervised and unsupervised.
Supervised learning
This type of ML can be compared to a student-teacher relationship. The teacher, a human programmer, provides input from which the student, the ML model, learns to infer underlying patterns. The student then applies its learning to new exercises, adjusting its model each time an answer is wrong. The more examples or data the student receives, the better it becomes.
Two examples of this type are classification and regression.
- Classification: you feed your algorithm item characteristics and a set of categories into which these items can be sorted. The algorithm then searches for patterns in how these item characteristics are categorized, so that it can also correctly categorize new items based on its findings. For instance, if flowers with a certain height and color grow in a specific region, then new flowers with those features should fall into the category that can grow in this region.
- Regression: analyzes the relationship between variables and their effect on certain characteristics. For instance, how does the color of a flower impact its price? By discovering these causal effects, the algorithm can accurately set price points for new flowers.
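The classification idea can be sketched in a few lines of Python. Everything below is hypothetical, for illustration only: the flower data, the numeric encoding of color, and the nearest-neighbor rule (one of the simplest possible classifiers) are not how a production system would be built, but they show the core idea of assigning a category to a new item based on labeled examples:

```python
import math

def nearest_neighbor_classify(training_data, new_item):
    """Return the label of the training item closest to new_item.

    training_data: list of ((height_cm, color_score), region_label) pairs.
    """
    def distance(a, b):
        return math.dist(a, b)  # Euclidean distance (Python 3.8+)

    closest = min(training_data, key=lambda pair: distance(pair[0], new_item))
    return closest[1]

# Hypothetical flower data: (height in cm, color encoded as a number) -> region.
flowers = [
    ((30.0, 1.0), "alpine"),
    ((32.0, 1.2), "alpine"),
    ((80.0, 3.0), "tropical"),
    ((85.0, 2.8), "tropical"),
]

# A short, pale flower is classified with the alpine examples.
print(nearest_neighbor_classify(flowers, (31.0, 1.1)))  # → alpine
```

A real classifier (for instance from scikit-learn) would learn from far more data and features, but the principle is the same: new items fall into the category whose known examples they resemble most.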
Unsupervised learning
While supervised models entail some form of human interaction and predefined rules, unsupervised models have none. Data is given to an algorithm, which then figures out patterns and characteristics by itself. Again, two examples:
- Clustering: instead of asking the algorithm to sort item characteristics into predefined categories, you provide data and let the algorithm define the categories into which it sorts the item characteristics.
- Topic modeling: similar to clustering, topic modeling algorithms extract a predefined number of topics from data they have been fed.
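Clustering can be illustrated with a bare-bones k-means loop. The numbers below are made up, and this toy version works on a single feature per item (real clustering handles many features at once), but it shows the algorithm defining its own groups without being told what the categories are:

```python
def kmeans_1d(values, k, iterations=10):
    """Tiny k-means sketch: group numbers around k centroids (assumes k >= 2)."""
    values = sorted(values)
    # Start centroids at evenly spaced existing values.
    centroids = [values[i * (len(values) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iterations):
        # Assign each value to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[idx].append(v)
        # Move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return clusters

# Nobody told the algorithm where the groups are; it finds them itself.
print(kmeans_1d([1, 2, 3, 20, 21, 22], k=2))  # → [[1, 2, 3], [20, 21, 22]]
```

The two clusters emerge purely from the structure of the data, which is exactly what makes unsupervised learning useful when you don't know the categories in advance.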
Behind the buzz
Machine learning is a topic riddled with buzzwords, though people don’t always know what they mean. We explain some of them:
Data mining
Probably the most commonly heard and confusing buzzword. Contrary to what the name might suggest, data mining is not about digging up new data from various systems. It’s actually digging through your existing mountain of data to find the most useful information, making it closer to data filtering than mining.
Neural networks
A machine learning algorithm that mimics the way the human brain works. In essence, it’s a network of neurons where each neuron represents a possible parameter that influences the outcome of an analysis by the network. Based on training, parameters can be switched on or off to turn the input into the correct output.
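A single artificial neuron is simple enough to sketch directly. The perceptron below (the classic single-neuron model, used here for illustration) learns the logical OR function from examples, nudging its weights every time it gives a wrong answer; real networks chain thousands or millions of such neurons together:

```python
def train_neuron(samples, epochs=20, lr=0.1):
    """Train one artificial neuron (a perceptron) on labeled samples.

    samples: list of ((x1, x2), target) pairs with targets 0 or 1.
    """
    w1 = w2 = bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            output = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
            error = target - output  # only adjust when the answer is wrong
            w1 += lr * error * x1
            w2 += lr * error * x2
            bias += lr * error
    # Return the trained neuron as a function of its two inputs.
    return lambda x1, x2: 1 if w1 * x1 + w2 * x2 + bias > 0 else 0

# The logical OR function as training data.
neuron = train_neuron([((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)])
print(neuron(0, 1))  # → 1
```

The "learning" here is nothing mystical: it is the repeated weight adjustment in the inner loop, which is also what happens, at vastly greater scale, inside a deep neural network.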
Deep learning or DL
Another umbrella term for techniques and models handling complex problems that require a huge amount of data. With deep learning, the goal is to use neural networks to simulate human thinking. A deep neural network differentiates itself from other neural networks by its sheer size. While a regular network can have 1,000 neurons, for example, those in deep learning scenarios usually number well into the hundreds of thousands. For instance, the latest state-of-the-art natural language processing (see below) model from Google has 340 million parameters.
Of course, this makes deep learning quite complex, requiring a significant effort to implement. On the flip side, deep neural networks produce much better results than other ML approaches. For example, in return for added complexity, we get the capacity to interpret language extremely well. This means we can now automate document processing to an unprecedented level. DL also shines in its ability to correctly analyze unstructured data, such as images and video. This enables an even higher level of automation, giving us advanced image search capabilities, face ID, image classification and more.
Natural language processing or NLP
An umbrella term that covers all techniques concerning the interactions between computers and human/natural languages. The goal is to teach machines to read, decipher, understand and make sense of human language. Recent deep learning advances have created algorithms capable of doing just that. This opens the door to using ML to tag, label and extract information from unstructured text. Chat and voice bots, call center analytics and more have all helped NLP to further automate existing processes and create new ones that improve efficiency.
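As a deliberately simple sketch of tagging text, the snippet below scores a document against hypothetical keyword lists. This is keyword counting, far cruder than the deep-learning NLP described above, but it conveys the idea of a machine reading unstructured text and assigning it a label:

```python
import re
from collections import Counter

def tokenize(text):
    """Lowercase a text and split it into word tokens."""
    return re.findall(r"[a-z]+", text.lower())

# Hypothetical keyword lists per document type.
TOPIC_KEYWORDS = {
    "invoice": {"invoice", "amount", "due", "payment", "vat"},
    "complaint": {"unhappy", "refund", "broken", "complaint", "disappointed"},
}

def tag_document(text):
    """Tag a text with the topic whose keywords it mentions most often."""
    counts = Counter(tokenize(text))
    scores = {topic: sum(counts[w] for w in words)
              for topic, words in TOPIC_KEYWORDS.items()}
    return max(scores, key=scores.get)

print(tag_document("Please find attached the invoice; payment is due in 30 days."))
# → invoice
```

A modern NLP model replaces the hand-picked keyword lists with representations learned from data, which is precisely what lets it generalize to wording it has never seen before.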
Why use machine learning?
The objective of ML is to find advanced statistical connections and patterns. While predefined rules also allow us to do that, the process is much harder, more time-consuming and ultimately less accurate. Moreover, once the algorithm of an ML model has been programmed, it learns by itself, which means less work on the developer’s part. Accuracy is another upside of the technology. In one use case, rule-based models gave us about 50 percent correct answers, while machine learning achieved 80 percent! ML algorithms are able to uncover patterns that humans simply can’t spot.
Robotic and intelligent process automation
Traditionally, developers automate processes by creating an overview of tasks, then connecting and scripting the necessary steps for automation. With robotic process automation (RPA), a computer develops this action list itself by watching users go through the tasks. This makes it possible to automate your business flows without scripting them by hand. However, humans still decide when and on which processes RPA needs to run.
Intelligent process automation (IPA) further enhances RPA by adding AI and ML capabilities such as image search, voice recognition and face ID. These new technologies hold a lot of potential for increasing automation to levels never seen before. Think of systems that automatically fill in corporate templates, such as invoices, based on information extracted from digitized documents.
The future of ML at Docbyte
Models with manually defined rules can work just fine. But as the problems we want to solve with AI become more complex, we find we have reached their limits. Intricate use cases require thousands of rules, and hand-coding those is a titanic, unnecessary task. Machine learning can help reduce that work, so it’s sure to hold a much more prominent place in future AI projects. That being said, it’s not game over for rules. ML still requires a significant investment of time and effort, so for basic problems rules still, well, rule.
When we consider which type of ML holds the most potential for Docbyte, classification and topic modeling seem the way to go. Our digital mailroom, for example, already integrates classification algorithms to make email sorting into different categories more efficient. In particular, in complex sorting cases we’ve moved away from rule-based sorting and introduced machine learning to improve the speed and accuracy of categorizing.
Topic modeling extracts common topics from our clients’ documents and allows for better and quicker classification of new files, making it easier to store and find information. In fact, we’re already implementing this type of ML to improve the digital onboarding process for customers, as it can handle requests much faster. Other applications of machine learning are also helping to smooth the onboarding process. For instance, classification algorithms combined with OCR help us distinguish ID cards from driving licenses and assign the document to the right person. NLP helps extract data such as the name, address and more to further speed up the process.