In the second blog of this series, in which we unlock the lexicon of Artificial Intelligence for business leaders overwhelmed by the hype around ChatGPT, we focus on Machine Learning (ML).
What is Machine Learning?
People often use the terms machine learning and AI interchangeably, but they do not mean the same thing. ML is a subset of AI in which computers learn, or improve their performance, based on the data they process.
It’s a fascinating concept, straight out of science fiction: a computer uses algorithms to learn from the data it is provided. The more data it is fed, the more it learns, and the better it gets.
This is where the concern arises that computers could become “more intelligent” than their human masters.
ML has become more successful and prominent in the past decade for two main reasons: the growth in the volume, variety and quality of both public and privately owned data, and the availability of cheaper, more powerful data processing and storage.
Essentially, ML models look for patterns in data and draw conclusions, which are then applied to new sets of data. They are not explicitly programmed by people: the capabilities develop from the data provided, particularly with large datasets. The more data used, the better the results will be.
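For readers who like to see the idea in practice, the pattern-then-apply loop can be sketched in a few lines of Python. The numbers and the library choice (scikit-learn) are purely illustrative assumptions of ours, not part of any specific product:

```python
# A minimal sketch: fit a model on known data, then apply the
# learned pattern to data it has never seen.
from sklearn.linear_model import LinearRegression

# Invented historical data: advertising spend (in $1000s) vs. resulting sales.
spend = [[10], [20], [30], [40]]
sales = [25, 45, 65, 85]

model = LinearRegression()
model.fit(spend, sales)            # the model "learns" the pattern in the data

# Apply the learned pattern to a new, unseen data point.
predicted = model.predict([[50]])  # follows the trend in the training data
```

Real systems use far richer data and models, but the principle is the same: the rule is learned from examples rather than written by hand.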
So, where AI is the umbrella concept of enabling a machine to sense, reason or act like a human, ML is an AI application that allows computers to extract knowledge from data and learn from it autonomously.
How to train ML models
The key to machine learning (as with much else in life) is training. ML models need to be trained on data, using appropriate algorithms, before they can produce results.
Four training models are commonly used in machine learning:
- Supervised learning maps a specific input to an output using labelled (structured) training data. Simply put: to train an algorithm to recognize pictures of cats, you feed it pictures labelled as cats.
- Unsupervised learning is based on unstructured (unlabelled) data, so the end result is not known in advance. It is well suited to pattern matching and descriptive modelling. For example, Altilia uses Large Language Models (LLMs) as its foundation, which are trained on huge datasets using unsupervised learning.
- Reinforcement learning can be described as “learning by doing”. An “agent” learns to perform a task through trial and error in a feedback loop, receiving positive and negative reinforcement depending on its success, until it performs within the desired range. Altilia often uses Human-in-the-Loop (HITL) reinforcement learning in its Altilia Review module.
- Transfer learning lets data scientists reuse knowledge gained from a model trained on a similar task, much as humans can transfer knowledge of one topic to a related one. It can shorten ML training time and requires fewer data points. Altilia uses this technique to fine-tune pre-trained Large Language Models (LLMs) on a dataset provided by the client. We will focus on LLMs in a future blog.
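To make the difference between the first two models concrete, here is a hedged sketch contrasting supervised and unsupervised learning. The tiny dataset and the scikit-learn classes are our own illustration (an assumption, not anything drawn from Altilia's platform):

```python
# Supervised vs. unsupervised learning on the same toy dataset.
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

# Invented features: (weight in kg, ear length in cm) for four animals.
features = [[4, 7], [5, 8], [30, 12], [35, 13]]

# Supervised: every training example comes with a known label.
labels = ["cat", "cat", "dog", "dog"]
classifier = KNeighborsClassifier(n_neighbors=1).fit(features, labels)
prediction = classifier.predict([[4.5, 7.5]])  # classified from labelled examples

# Unsupervised: the same features with NO labels. The algorithm groups
# similar items itself; it never learns what the groups "mean".
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
```

The supervised model can name its answer ("cat") because it was shown labelled examples; the unsupervised model can only say which items belong together.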
Why not schedule a demo with Altilia to learn more about how we can help transform your organization? Click here to register.