

The Most Common Machine Learning Terms, Explained

8 minute read | April 30, 2019

Written by:
Leah Davidson

“Machine learning” is one of the current technology buzzwords, often used in parallel with artificial intelligence, deep learning, and big data, but what does it actually mean? And what other machine learning terminology is important to understand?

McKinsey has defined machine learning as “algorithms that can learn from data without relying on rules-based programming.” Carnegie Mellon University offers this: “The field of Machine Learning seeks to answer the question ‘How can we build computer systems that automatically improve with experience, and what are the fundamental laws that govern all learning processes?’”

Machine learning means that a system's accuracy improves over time as it receives more data and feedback. You probably encounter many examples of machine learning every day without realizing it. When Facebook suggests “people you might know” or when Amazon emails you recommendations of products you might like based on previous purchases, they are using machine learning algorithms to customize your results.

Machine learning deepens the work of artificial intelligence. In AI, researchers originally created rules for computers to make decisions. With machine learning, computers actually “learn” for themselves and design new rules through practice and repetition.


From medical diagnoses to fraud detection, machine learning is improving our capability to solve societal problems. PayPal acquired the ML startup Simility to analyze millions of transactions and flag anomalies to prevent money laundering. Machine learning can also predict hypoxaemia (low oxygen levels) during surgery; recognize cardiovascular risk factors like high blood pressure, age, and smoking in retinal images; and identify abnormal growths during colonoscopies.

Adopting machine learning at scale is not without challenges. Machine learning can lead to overfitting (when a model fits its historical training data so closely that it fails to generalize to new data), and sometimes it is difficult to find data samples large enough for the machines to train with.
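
To make overfitting concrete, here is a minimal sketch (the data is simulated, not drawn from any study mentioned here) using scikit-learn: a very flexible model scores almost perfectly on the historical data it trained on but much worse on data it has never seen.

```python
# Hypothetical overfitting sketch: an overly flexible model memorizes its
# training data but generalizes poorly. All numbers here are simulated.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(30, 1))
y = np.sin(2 * np.pi * X).ravel() + rng.normal(0, 0.2, 30)  # noisy signal

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for degree in (1, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    print(degree,
          round(model.score(X_train, y_train), 2),  # score on historical data
          round(model.score(X_test, y_test), 2))    # score on unseen data
```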

Although many people worry about ML replacing humans, people and machines complement each other in many ways. According to the Harvard Business Review, “through… collaborative intelligence, humans and AI actively enhance each other’s complementary strengths: the leadership, teamwork, creativity, and social skills of the former, and the speed, scalability, and quantitative capabilities of the latter.”

Humans can help train and operate machines and explain their behavior. When Microsoft developed the ML bot Cortana, it required substantial data points and human insights to create a personality that was “confident, caring, and helpful but not bossy.” A poet, a novelist, and a playwright were all part of the team that trained the bot on how to communicate effectively with humans.

Machine learning is full of interesting variants and subfields, so let’s start decoding other machine learning terminology.


Machine Learning Terminology

Classification

Classification is a part of supervised learning (learning with labeled data) through which data inputs can be easily separated into categories. In machine learning, there can be binary classifiers with only two outcomes (e.g., spam, non-spam) or multi-class classifiers (e.g., types of books, animal species, etc.).

One of the most popular classification algorithms is the decision tree (essential for both data scientists and machine learning engineers), which asks a sequence of “if-then” questions, with each answer narrowing the pool of possibilities until a precise classification is reached.
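
Here is a minimal sketch of that idea using scikit-learn's decision tree classifier. The two features and the tiny “spam vs. non-spam” dataset are invented purely for illustration:

```python
# Toy binary classifier: label messages as spam or not spam with a decision tree.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row: [number of links, contains the word "free" (1/0)] -- invented features
X = [[0, 0], [1, 0], [5, 1], [8, 1], [2, 0], [7, 1]]
y = ["not spam", "not spam", "spam", "spam", "not spam", "spam"]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

# Print the learned "if-then" rules as text
print(export_text(tree, feature_names=["num_links", "has_free"]))
print(tree.predict([[6, 1]]))  # classify a new, unseen message
```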

You can learn about other classification algorithms, like naive Bayes, k-nearest neighbor, and artificial neural networks, here.


Clustering

Clustering is a form of unsupervised learning (learning with unlabeled data) that involves grouping data points according to features and attributes.

Clustering can be used to organize customer demographics and purchasing behavior into specific segments for targeting and product positioning. It can also analyze housing quality and geographic locations to create real estate valuations and plan the layout of new city developments. It can classify information by topics within libraries or web pages and compile an easily accessible directory for users.

The most common kind of clustering is K-means clustering, where “k” is the number of clusters you want to find and each cluster is represented by its centroid (center point). Every data point is assigned to the nearest centroid, the centroids are then recalculated from the points assigned to them, and the process repeats until the clusters stabilize. Here are a few examples of what K-means clustering looks like in practice:

  • A hospital wants to locate emergency units at the minimum possible distance from areas where accidents frequently happen
  • A seismologist studies regions where earthquakes have occurred over the last few decades to identify the areas of greatest risk
  • A pizzeria wants to understand where to locate stores based on customer demand to minimize the distance the drivers need to travel for delivery
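
As a rough sketch of how this looks in code, here is a toy K-means example with scikit-learn. The simulated 2-D “locations” stand in for the accident or delivery hot spots described above, and the centroids it finds would be the candidate sites:

```python
# Toy K-means sketch: cluster simulated locations and read off the centroids.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Simulated accident (or delivery) locations scattered around three hot spots
points = np.vstack([
    rng.normal(loc=[0, 0], scale=0.5, size=(50, 2)),
    rng.normal(loc=[5, 5], scale=0.5, size=(50, 2)),
    rng.normal(loc=[0, 5], scale=0.5, size=(50, 2)),
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(points)

print(kmeans.cluster_centers_)  # candidate sites: the centroid of each cluster
print(kmeans.labels_[:10])      # which cluster each point was assigned to
```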

Other clustering methods that you can learn more about here include density-based clustering, hierarchical-based methods, partitioning methods, and grid-based methods.  

Regressions

Regressions identify relationships and correlations between different types of data. For example, each profile picture has an image with pixels that belong to a person. With static prediction (one that stays the same over time), a machine learning model learns that a certain pixel arrangement corresponds to a given name, which allows for facial recognition (for example, when Facebook recommends tags for the photos you’ve just uploaded).

Regressions can also be useful when predicting outcomes based on data in the present. For a long time, statistical regression has been used to solve problems, such as predicting the recovery of cognitive functions after a stroke or predicting customer churn in the telecommunications industry. The only difference is that now many of these regression analyses can be done more efficiently and quickly by machines.

Regression is a type of supervised machine learning algorithm, where the inputs and outputs are labeled. Linear regression produces outputs that are continuous variables (any value within a range), such as pricing data. Logistic regression is used when the dependent variable is categorical, taking one of a fixed set of precisely defined values. For example, you can label a store as open (1) or closed (0); there are only two possibilities.
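
Here is a brief sketch contrasting the two, using scikit-learn and invented toy data: linear regression returns a continuous estimate, while logistic regression returns a category along with a probability.

```python
# Sketch contrasting linear regression (continuous output) with logistic
# regression (categorical output). The tiny datasets are invented.
from sklearn.linear_model import LinearRegression, LogisticRegression

# Linear regression: predict a price (any value in a range) from square footage
sqft = [[500], [750], [1000], [1250], [1500]]
price = [100_000, 140_000, 180_000, 220_000, 260_000]
lin = LinearRegression().fit(sqft, price)
print(lin.predict([[1100]]))  # a continuous price estimate

# Logistic regression: predict open (1) vs. closed (0) from the hour of day
hour = [[10], [12], [14], [16], [20], [22], [23]]
is_open = [1, 1, 1, 1, 0, 0, 0]
log = LogisticRegression().fit(hour, is_open)
print(log.predict([[18]]), log.predict_proba([[18]]))  # a class plus its probability
```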

Other types of regression that you can explore here are polynomial regression, support vector regression, decision tree regression, and random forest regression.


Deep Learning

Deep learning is closely related to machine learning; in fact, it is an application of machine learning that loosely imitates the workings of the human brain. Deep learning networks interpret big data (data that is too large to fit on a single computer), both unstructured and structured, and recognize patterns. The more data they can “learn” from, the more informed and accurate their decisions will be. Here are some examples of deep learning in practice:

  • Chatbots and virtual assistants: Virtual assistants like Alexa and Siri or customer service chatbots on different web pages can receive human requests, decipher language, and present lifelike responses.
  • Real-time bidding and programmatic advertising: Advertising now depends on software buying advertising space through a competitive bidding process. Cognitiv AI is an example of a deep learning platform that synthesizes data on customer demographics, weather, available inventory, time of day, and other variables to create custom buying algorithms for a specific target market.
  • Recommendation engines: From travel sites like Booking.com and Expedia to streaming platforms like Netflix and Spotify, recommendation engines learn from past purchasing or usage behavior to customize marketing. There are two forms of recommendation engines: collaborative filtering, where user preference data is collected at scale and users are compared to similar user personas, and content-based filtering, where the properties of specific items are analyzed and future items are compared to past items to determine the closest matches (a minimal sketch of this idea follows this list).
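
Here is one way content-based filtering can be sketched in code, with made-up movie titles and feature vectors: candidate items are ranked by their cosine similarity to a profile built from what the user has already enjoyed.

```python
# Toy content-based filtering sketch; titles and feature values are invented.
import numpy as np

# Feature vectors: [action, comedy, documentary, runtime_hours]
catalog = {
    "Movie A": np.array([1.0, 0.0, 0.0, 2.0]),
    "Movie B": np.array([0.0, 1.0, 0.0, 1.5]),
    "Movie C": np.array([0.9, 0.1, 0.0, 2.1]),
}
liked = np.array([1.0, 0.0, 0.0, 2.0])  # profile built from past viewing

def cosine(a, b):
    # Cosine similarity: 1.0 means the two feature vectors point the same way
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank catalog items by similarity to what the user already enjoyed
ranked = sorted(catalog.items(), key=lambda kv: cosine(liked, kv[1]), reverse=True)
print([title for title, _ in ranked])
```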

Neural Networks

Neural networks are closely related to deep learning. They pass data through sequential layers of artificial neurons, with each layer building on the output of the one before it to produce a progressively more accurate analysis.

Related: A Beginner’s Guide to Neural Networks in Python

A neural network consists of layers of nodes, which are activated by incoming (“trigger”) data. Each input is then assigned a weight (a coefficient), because some data inputs may be more significant than others.

Neurons normally come in three different layers: an input layer of data, a hidden layer with mathematical computations, and an output layer. In an example where we want to estimate airline ticket prices, our input layer would collect the origin airport, destination airport, departure date, and airline. Each of those would receive a weight (perhaps the departure date matters more than the airline) and then the output would deliver a price prediction.
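Here is a minimal sketch of that ticket-price example using scikit-learn's small neural network (MLPRegressor) with a single hidden layer. The routes, airlines, and prices are invented, and a real model would need far more data:

```python
# Toy neural network: input layer of flight features, one hidden layer of
# computations, output layer giving a price prediction. Data is invented.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import OneHotEncoder

# Input layer: [origin, destination, airline, days_until_departure]
flights = [
    ["JFK", "LAX", "AirlineA", 30],
    ["JFK", "LAX", "AirlineB", 7],
    ["SFO", "ORD", "AirlineA", 14],
    ["SFO", "ORD", "AirlineB", 2],
]
prices = [220.0, 410.0, 180.0, 390.0]

# One-hot encode the categorical columns; keep days_until_departure numeric
enc = OneHotEncoder()
categorical = enc.fit_transform([row[:3] for row in flights]).toarray()
numeric = np.array([[row[3]] for row in flights], dtype=float)
X = np.hstack([categorical, numeric])

# Hidden layer of 8 neurons; the weights are learned during fit()
model = MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs",
                     max_iter=5000, random_state=0)
model.fit(X, prices)

new_flight = np.hstack([enc.transform([["JFK", "LAX", "AirlineA"]]).toarray(),
                        [[10.0]]])
print(model.predict(new_flight))  # output layer: a predicted ticket price
```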


Natural Language Processing

Natural language processing is the subfield of AI that deals with processing human languages, and it is an important term in data science and machine learning. The challenge is that human speech is often not literal: there are figures of speech, words or phrasings specific to certain dialects and cultures, and sentences whose meaning changes with grammar and punctuation. Just as humans do in conversation, natural language processors need to use syntax (the arrangement of words) and semantics (the meaning of that arrangement) to come up with correct interpretations.

The first step in natural language processing is converting unstructured language data into a form that can be read by a computer. The computer then assigns meaning to each sentence through algorithms and translates it back, often in another form (for example, speech to text or from one language to another).
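As a small sketch of that first step, here is how raw sentences might be turned into numbers with scikit-learn's bag-of-words vectorizer (the example sentences are invented):

```python
# Convert unstructured text into a numeric matrix a computer can work with.
from sklearn.feature_extraction.text import CountVectorizer

complaints = [
    "My card was charged twice",
    "The mobile app keeps crashing",
    "I was charged a fee I did not expect",
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(complaints)   # rows = sentences, columns = words

print(vectorizer.get_feature_names_out())  # the learned vocabulary
print(X.toarray())                         # word counts for each sentence
```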

Natural language processing can help power translation apps like Google Translate, document collaboration and communication tools like Slack and Microsoft Word, and virtual assistants. Here’s an example of how the Royal Bank of Scotland uses text analytics to parse customer service complaints from emails, surveys, and call centers, isolating problem areas so it can improve its customer relationships and reputation.

Machine Vision

Machine vision, or computer vision, is the process by which machines capture and analyze images. This allows for the diagnosis of conditions like skin cancer from photographs, X-rays, and other medical imagery, and for the real-time detection of traffic and vehicle types in self-driving cars, like Tesla’s newest models.

There are many different ways that machines can “see”: representing colors numerically, decomposing images into different parts, and identifying corners, edges, and textures. As the machines gather and code more information, they begin to view the larger picture.
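Here is a toy sketch of one of those ideas: representing a tiny image as numbers and sliding a hand-written vertical-edge filter across it. The 6x6 “image” is invented, and real systems learn their filters from far larger images.

```python
# Toy machine-vision sketch: an image is just numbers, and a small filter
# slid across it responds strongly wherever a vertical edge appears.
import numpy as np

# A tiny grayscale image: dark on the left half, bright on the right half
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# A 3x3 vertical-edge filter (Sobel-like)
kernel = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)

edges = np.zeros((4, 4))
for i in range(4):
    for j in range(4):
        patch = image[i:i + 3, j:j + 3]
        edges[i, j] = np.sum(patch * kernel)

print(edges)  # large values mark where the dark-to-bright edge sits
```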

Current trends in machine vision include integration with the industrial internet of things, which involves collecting productivity inputs and sensory data in factories, as well as non-industrial applications like “driverless cars, autonomous farm equipment, drone applications, intelligent traffic systems, and guided surgery.”

Machine Learning Engineer

With all these exciting technological advances, who is responsible for deploying ML within companies? In many cases, the responsibility first lies with the machine learning engineer, a data-driven software engineer focused on building the systems that can eventually learn and perform work autonomously. These engineers usually need to be familiar with different code bases, distributed computing, data wrangling, and computer science.

Here’s a snapshot from Springboard on what it takes to become a leading machine learning engineer. And here’s an explanation from a machine learning engineer.


Machine learning is on the rise, with 96% of companies increasing investments in this area by 2020. According to Indeed, machine learning is the No. 1 in-demand AI skill and the global market is predicted to increase sevenfold, from $1.4 billion in 2017 to $8.8 billion by 2022.

One of the main challenges with machine learning today is the small talent pool—according to Element AI, there are fewer than 10,000 people worldwide with the necessary skills. That means people with the right skills can enjoy competitive salaries and job security.

About Leah Davidson

A graduate of the Wharton School of Business, Leah is a social entrepreneur and strategist working at fast-growing technology companies. Her work focuses on innovative, technology-driven solutions to climate change, education, and economic development.