AI 101
What is Machine Learning?
Machine learning is one of the fastest-growing technological fields, but despite how often the words “machine learning” are tossed around, it can be difficult to understand precisely what machine learning is.
Machine learning doesn’t refer to just one thing; it’s an umbrella term that covers many different concepts and techniques. Understanding machine learning means being familiar with different forms of model analysis, variables, and algorithms. Let’s take a closer look at machine learning to better understand what it encompasses.
What Is Machine Learning?
While machine learning can be applied to many different things, in general the term refers to enabling a computer to carry out tasks without receiving explicit line-by-line instructions to do so. A machine learning specialist doesn’t have to write out all the steps necessary to solve a problem, because the computer is capable of “learning” by analyzing patterns within the data and generalizing those patterns to new data.
Machine learning systems have three basic parts:
- Inputs
- Algorithms
- Outputs
The inputs are the data fed into the machine learning system, and this input data can be divided into features and labels. Features are the relevant variables: the variables that will be analyzed to learn patterns and draw conclusions. The labels, meanwhile, are the classes or descriptions assigned to the individual instances of the data.
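As a concrete illustration, here is a minimal sketch using scikit-learn’s built-in Iris dataset (the dataset choice is just an assumption for illustration): the feature matrix `X` holds the measured variables for each flower, while the label vector `y` holds each flower’s species.

```python
from sklearn.datasets import load_iris

# Load a small example dataset: 150 flowers, 4 measured features each.
X, y = load_iris(return_X_y=True)

print(X.shape)   # (150, 4) -> features: sepal/petal lengths and widths
print(y[:5])     # [0 0 0 0 0] -> labels: the species of each flower (0, 1, or 2)
```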
Features and labels can be used in two different types of machine learning problems: supervised learning and unsupervised learning.
Unsupervised vs. Supervised Learning
In supervised learning, the input data is accompanied by a ground truth. Supervised learning problems have the correct output values as part of the dataset, so the expected classes are known in advance. This makes it possible for the data scientist to check the performance of the algorithm by evaluating the model on a held-out test dataset and seeing what percentage of items it classifies correctly.
In contrast, unsupervised learning problems do not have ground truth labels attached to them. A machine learning algorithm trained to carry out unsupervised learning tasks must be able to infer the relevant patterns in the data for itself.
Supervised learning algorithms are typically used for classification problems, where one has a large dataset filled with instances that must be sorted into one of many different classes. Another type of supervised learning is a regression task, where the value output by the algorithm is continuous in nature instead of categorical.
Meanwhile, unsupervised learning algorithms are used for tasks like density estimation, clustering, and representation learning. All three tasks require the machine learning model to infer the structure of the data on its own, since no predefined classes are given to the model.
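To make the supervised workflow described above concrete, here is a minimal sketch, assuming scikit-learn and its built-in Iris dataset, that holds out part of the labeled data as a test set and reports what fraction of it a classifier gets right. The choice of Logistic Regression as the classifier is purely illustrative.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)

# Hold out 30% of the labeled data so the ground truth can be used to score the model.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Fraction of test items that were classified correctly.
print(accuracy_score(y_test, model.predict(X_test)))
```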
Let’s take a brief look at some of the most common algorithms used in both unsupervised learning and supervised learning.
Types of Supervised Learning
Common supervised learning algorithms include:
- Naive Bayes
- Support Vector Machines
- Logistic Regression
- Random Forests
- Artificial Neural Networks
Support Vector Machines are algorithms that divide a dataset up into different classes. Data points are separated into classes by drawing a line (or, in higher dimensions, a hyperplane) between them. Points found on one side of the line belong to one class, while the points on the other side belong to a different class. Support Vector Machines aim to maximize the distance between the line and the points found on either side of it; the greater the distance, the more confident the classifier is that a point belongs to one class and not the other.
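Here is a minimal sketch of that idea, assuming scikit-learn’s linear SVM and a small synthetic two-class dataset (the data and parameters are illustrative choices, not a prescription):

```python
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Two well-separated groups of points, one per class.
X, y = make_blobs(n_samples=100, centers=2, random_state=0)

# A linear SVM finds the separating line with the widest possible margin.
clf = SVC(kernel="linear", C=1.0).fit(X, y)

print(clf.predict([[0.0, 4.0]]))             # which side of the line this point falls on
print(clf.decision_function([[0.0, 4.0]]))   # signed distance from the separating line
```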
Logistic Regression is an algorithm used in binary classification tasks, where data points need to be classified as belonging to one of two classes. Logistic Regression works by producing a value between 0 and 1 for each data point (via the logistic, or sigmoid, function) and then applying a threshold: if the predicted value is below 0.5, the point is classified as 0, while if it is 0.5 or above it is classified as 1.
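The thresholding step can be sketched directly; the `sigmoid` and `classify` helpers below are hypothetical names used only for illustration:

```python
import numpy as np

def sigmoid(z):
    # Squashes any real-valued score into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def classify(score, threshold=0.5):
    # Values at or above the threshold map to class 1, everything below to class 0.
    return int(sigmoid(score) >= threshold)

print(classify(2.3))   # 1 -- sigmoid(2.3) is roughly 0.91
print(classify(-1.0))  # 0 -- sigmoid(-1.0) is roughly 0.27
```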
Decision Tree algorithms operate by splitting datasets up into smaller and smaller subsets. The exact criteria used to divide the data are up to the machine learning engineer, but the goal is to keep splitting until the remaining groups of data points can each be assigned a class label.
A Random Forest algorithm is essentially an ensemble of many individual Decision Tree classifiers whose predictions are combined (typically by majority vote) into a single, more powerful classifier.
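A minimal sketch comparing the two, again assuming scikit-learn and the Iris dataset; the tree depth and number of trees are illustrative choices:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A single tree splits the data on one feature at a time until the leaves are pure enough.
tree = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

# A random forest trains many randomized trees and lets them vote on the final class.
forest = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

print(tree.score(X_test, y_test), forest.score(X_test, y_test))
```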
The Naive Bayes Classifier calculates the probability that a given data point belongs to a class, based on the prior probability of that class and the likelihood of the observed features. It is based on Bayes’ Theorem, and it places each data point into the class with the highest calculated probability. When implementing a Naive Bayes classifier, it is assumed that all the predictors are independent of one another given the class — the “naive” assumption that gives the method its name.
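A minimal sketch using scikit-learn’s Gaussian Naive Bayes (the dataset and the single test point are illustrative assumptions):

```python
from sklearn.datasets import load_iris
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)

# Fits one Gaussian per feature per class and combines them with Bayes' Theorem,
# treating the features as conditionally independent given the class.
nb = GaussianNB().fit(X, y)

# Posterior probability of each class for a single new flower.
print(nb.predict_proba([[5.1, 3.5, 1.4, 0.2]]))
```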
An Artificial Neural Network, or multi-layer perceptron, is a machine learning algorithm inspired by the structure and function of the human brain. Artificial neural networks get their name from the fact that they are made up of many nodes/neurons linked together. Each neuron manipulates the data with a mathematical function. Artificial neural networks are organized into input layers, hidden layers, and output layers.
The hidden layers of the neural network are where the data is actually interpreted and analyzed for patterns. In other words, they are where the algorithm learns. More neurons joined together make more complex networks capable of learning more complex patterns.
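A small sketch of such a network using scikit-learn’s multi-layer perceptron; the hidden-layer sizes and other settings here are assumptions for illustration, not recommended values:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Input layer (4 features) -> two hidden layers of 16 neurons each -> output layer (3 classes).
net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
net.fit(X_train, y_train)

print(net.score(X_test, y_test))  # fraction of held-out flowers classified correctly
```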
Types of Unsupervised Learning
Unsupervised Learning algorithms include:
- K-means clustering
- Autoencoders
- Principal Component Analysis
K-means clustering is an unsupervised classification technique that works by separating data points into clusters, or groups, based on their features. K-means clustering analyzes the features of the data points and distinguishes patterns that make the data points within a given cluster more similar to one another than they are to the data points in other clusters. This is accomplished by placing possible centers for the clusters, or centroids, in a graph of the data and repeatedly reassigning each centroid’s position until a position is found that minimizes the distance between the centroid and the points that belong to that centroid’s cluster. The researcher specifies the desired number of clusters in advance.
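A minimal sketch with scikit-learn’s KMeans on unlabeled synthetic data; the number of points, clusters, and other settings are illustrative assumptions:

```python
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans

# Unlabeled points drawn from three loose groups; the labels are discarded.
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

# The researcher chooses the number of clusters; K-means finds the centroid positions.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

print(kmeans.cluster_centers_)   # final centroid coordinates
print(kmeans.labels_[:10])       # cluster assigned to each of the first ten points
```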
Principal Component Analysis is a technique that reduces a large number of features/variables down into a smaller feature space with fewer features. The “principal components” of the data are selected for preservation, while the other features are squeezed down into a smaller representation. The relationships between the original data points are preserved, but since the representation is simpler, the data is easier to quantify and describe.
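A minimal sketch, assuming scikit-learn and the Iris data, that compresses the four original features down to two principal components:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)

# Squeeze the four original features down to the two principal components.
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)                 # (150, 2)
print(pca.explained_variance_ratio_)   # how much of the original variation each component keeps
```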
Autoencoders are versions of neural networks that can be applied to unsupervised learning tasks. Autoencoders can take unlabeled, free-form data and transform it into a form a neural network can work with, essentially creating their own training targets from the input itself. The goal of an autoencoder is to compress the input data and then rebuild it as accurately as possible, so the network is incentivized to determine which features are the most important and extract them.
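Here is a minimal PyTorch sketch of that idea (PyTorch, the layer sizes, and the random training data are all assumptions for illustration): the input serves as its own target, and the loss measures how well the network rebuilds it after squeezing it through a narrow middle layer.

```python
import torch
from torch import nn

# A tiny autoencoder: compress 4-dimensional inputs to a 2-dimensional code, then reconstruct them.
model = nn.Sequential(
    nn.Linear(4, 2),   # encoder: squeeze the input into a 2-dimensional code
    nn.ReLU(),
    nn.Linear(2, 4),   # decoder: rebuild the original 4 features from the code
)

X = torch.rand(150, 4)                 # unlabeled data; the input doubles as the target
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for _ in range(200):
    reconstruction = model(X)
    loss = loss_fn(reconstruction, X)  # how far the rebuilt data is from the original
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(loss.item())  # reconstruction error after training
```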