Narrow vs. General AI: What’s Next for Artificial Intelligence?
In 1950, Alan Turing asked, “Can machines think?”
At the time, it probably seemed like an outlandish question, but fast-forward almost 70 years and artificial intelligence can detect diseases, fly drones, translate between languages, recognize emotions, trade stocks, and even beat humans at “Jeopardy.” It seems like AI is indeed developing a mind of its own.
Artificial intelligence, a term coined by John McCarthy in 1956, began as a simulation of human intelligence through machines and computer systems.
Today, AI represents a way to process data and reach conclusions faster than humans, leading to more accurate predictions of the future. Google’s director of engineering, Ray Kurzweil, forecasts that machines will reach a human level of intelligence by 2029. Kurzweil also says that by 2045 we will reach technological singularity, a time when artificial intelligence becomes more powerful than humans.
This inflection point will lead to a separation between AI as we know it today (also called “narrow AI”) and a future state of AI (“general AI”) that can apply intelligence to any problem.
What Is Narrow AI?
Narrow AI (ANI) is defined as “a specific type of artificial intelligence in which a technology outperforms humans in some very narrowly defined task. Unlike general artificial intelligence, narrow artificial intelligence focuses on a single subset of cognitive abilities and advances in that spectrum.”
There are many examples of narrow AI around us every day, from voice assistants like Alexa, Google Assistant, Siri, and Cortana to:
- Self-driving cars
- Facial recognition tools that tag you in pictures
- Customer service bots that redirect inquiries on a webpage
- Google’s page-ranking technology that determines which websites appear at the top of the search engine
- Recommendation systems showing items that could be useful additions to your shopping cart based on browsing history
- Spam filters that keep your inbox clean through automated sorting
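To make one of these concrete, here is a minimal sketch of how a spam filter like those above might work under the hood. It is an illustrative naive Bayes classifier written from scratch (not any particular product's implementation), with made-up training messages:

```python
import math
from collections import Counter

def train(messages):
    """Count word frequencies per class from (text, label) pairs."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in messages:
        for word in text.lower().split():
            counts[label][word] += 1
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Score each class with smoothed log-probabilities; return the winner."""
    vocab = set(counts["spam"]) | set(counts["ham"])
    scores = {}
    for label in ("spam", "ham"):
        n = sum(counts[label].values())
        score = math.log(totals[label] / sum(totals.values()))
        for word in text.lower().split():
            # Laplace smoothing so unseen words don't zero out the score
            score += math.log((counts[label][word] + 1) / (n + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

training_data = [
    ("win free money now", "spam"),
    ("claim your free prize", "spam"),
    ("meeting at noon tomorrow", "ham"),
    ("lunch tomorrow with the team", "ham"),
]
counts, totals = train(training_data)
print(classify("free money prize", counts, totals))       # spam
print(classify("team meeting tomorrow", counts, totals))  # ham
```

Note how narrow this system is: it sorts email into two buckets and nothing else, which is exactly the "single subset of cognitive abilities" the definition above describes.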
Today, many companies are investing in and implementing ANI to improve efficiency, cut costs, and automate tasks; however, ANI has serious limitations. Here are some of the barriers to ANI:
- ANI needs a large amount of high-quality data to yield accurate results, and not all environments meet these data requirements.
- The learning curve to institutionalize AI properly can be steep. Companies have to set up and train their staff on new processes and technologies.
- If a task changes, the effectiveness of an ANI system decreases, since it is programmed for a specific purpose.
- Sometimes, replacing humans with rules-based machines leads to greater frustration and lowers customer satisfaction—for example, in the hospitality industry, where guests value personalized service and human interaction.
As we address these obstacles and open up new use cases for AI, we are moving toward a new paradigm—that of general artificial intelligence.
What Is General AI?
Think of R2-D2 in “Star Wars” or Jarvis in “Iron Man” and you’ll get a sneak peek into what researchers are labeling as the future of artificial general intelligence (AGI). AGI research recently received a $1 billion boost when Microsoft invested in OpenAI.
But what exactly is AGI?
AGI, or “strong AI,” allows a machine to apply knowledge and skills in different contexts. This more closely mirrors human intelligence by providing opportunities for autonomous learning and problem-solving.
The challenge now is to move from ANI to AGI in advanced fields like computer vision and natural language processing.
To reach AGI, computer hardware needs to increase in computational power to perform more total calculations per second (cps). Tianhe-2, a supercomputer created by China’s National University of Defense Technology, set a record for cps at 33.86 petaflops (quadrillions of cps). Although that sounds impressive, the human brain is estimated to be capable of one exaflop (a billion billion cps). Technology still needs to catch up.
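The size of that gap is easy to check with a little arithmetic, using the two figures cited above:

```python
# Rough gap between Tianhe-2's peak and the brain estimate cited above
tianhe2_cps = 33.86e15  # 33.86 petaflops
brain_cps = 1e18        # ~1 exaflop

ratio = brain_cps / tianhe2_cps
print(f"The brain estimate is about {ratio:.0f}x Tianhe-2's peak")  # ~30x
```

So by this (very rough) measure, the brain out-computes the record-setting supercomputer by a factor of about thirty.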
Currently, one of the main approaches to AGI is called “whole brain emulation,” where a brain’s memory and mental state are transferred onto a computer. The computing model loosely mirrors the brain’s: both operate through networks of connected neuron-like units, called neural networks. When the right action is taken, the connections along the firing pathway are strengthened. Through trial and error, the system can learn and form effective neural pathways.
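The trial-and-error idea can be sketched in a few lines. This is a deliberately simplified toy (the action names and reward values are invented for illustration): the system strengthens whichever "pathway" leads to a reward and weakens the others, so the rewarded connection comes to dominate over many trials.

```python
import random

random.seed(0)
weights = {"left": 1.0, "right": 1.0}  # connection strengths for two pathways
CORRECT = "right"                      # the action the environment rewards

for _ in range(200):
    # Choose an action with probability proportional to its strength
    action = random.choices(list(weights), list(weights.values()))[0]
    if action == CORRECT:
        weights[action] += 0.5  # reward: strengthen the firing pathway
    else:
        weights[action] = max(0.1, weights[action] - 0.1)  # weaken slightly

print(weights)  # "right" ends up far stronger than "left"
```

After a couple hundred trials the rewarded pathway dominates, which is the essence of learning by strengthening connections.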
To date, scientists have been able to replicate the brain of a 1-millimeter flatworm consisting of 302 neurons. The human brain, however, is estimated to contain 100 billion neurons, which means we have a long way to go before we can recreate it.
Quantum computers, which use quantum mechanics to process exponentially more data than normal computers, are positioned to be the next technological frontier to facilitate AGI.
For AGI to match human intelligence, it needs to be able to transfer what it learns from one environment to another, use common sense, work collaboratively with other machine and human stakeholders, and attain consciousness.
Neuroscientist Dr. Heather Berlin at the Icahn School of Medicine at Mount Sinai defined consciousness in three different ways: “pure subjective experience (‘Look, the sky is blue’), awareness of one’s own subjective experience (‘Oh, it’s me that’s seeing the blue sky’), and relating one subjective experience to another (‘The blue sky reminds me of a blue ocean’).” Developing artificial consciousness requires subjective, conscious experience in addition to pure intellectual horsepower.
Many experts have different predictions about when we will reach AGI. In May 2017, over 350 machine learning and neuroscience experts were surveyed and around 50% believed it would happen before 2060. Louis Rosenberg, CEO of the technology company Unanimous AI, predicts that it will happen sooner—around 2030—and Patrick Winston, MIT professor and former director of the MIT Artificial Intelligence Laboratory, puts the date around 2040.
With all these forecasts, how will we know when we’ve reached AGI?
One of the most famous tests to compare the intelligence of humans and computers is the Turing test, where a human judge converses with both a human and a machine and tries to tell them apart. Apple co-founder Steve Wozniak also coined the “coffee test,” where a machine enters a typical home and figures out how to prepare a cup of coffee. Other tests evaluate whether robots can successfully attend college or replace important job functions with greater efficacy than human workers.
Machine Learning and Deep Learning on the Road to AGI
So how does other AI terminology fit into the new model of ANI and AGI?
Machine learning describes the ability to find patterns and make decisions without being explicitly programmed for each task, the ability for computer systems to truly “learn” on their own. Machine learning is therefore a subset of AI, but not the other way around.
Deep learning is a subset of machine learning that “learns” from large amounts of data, often unstructured and unlabeled, processed through neural networks, algorithms with loosely brain-like structure.
Neural networks operate in two phases: training and inference. Training adjusts a model’s internal weights by running algorithms over data, improving over time as new data sources are incorporated. Inference means applying the trained model to new inputs to produce predictions.
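The training/inference split can be seen in even the simplest learning system. Below is an illustrative single-neuron perceptron (a textbook algorithm, not any specific product): `train` repeatedly nudges the weights until predictions match the labels, and `infer` just applies the finished weights to new inputs. The logical-AND dataset is chosen only because it is tiny and learnable.

```python
def train(samples, epochs=20, lr=0.1):
    """Training: repeatedly adjust weights so predictions match the labels."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred          # 0 when correct; +/-1 when wrong
            w[0] += lr * err * x1       # nudge weights toward the right answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def infer(x, w, b):
    """Inference: apply the already-trained weights to a new input."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Learn logical AND from labeled examples
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(data)
print([infer(x, w, b) for x, _ in data])  # [0, 0, 0, 1]
```

All the "learning" happens in `train`; once it finishes, `infer` is just cheap arithmetic, which is why trained models can make predictions so quickly.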
Research progress in machine learning and deep learning is facilitating the transition from ANI to AGI by enabling decision-making without explicit instructions.
Toward Artificial Super-Intelligence (ASI)
Artificial super-intelligence (ASI) is a step beyond AGI, where artificial intelligence exceeds human capabilities and operates at a genius level. Since ASI is still hypothetical, there are no real limits to what it could accomplish, from building nanotechnology to manufacturing objects to preventing aging.
Many philosophers and scientists have different theories about the feasibility of reaching ASI. Cognitive scientist David Chalmers believes that once AGI is achieved, it will be relatively straightforward to extend capabilities and efficiency to attain ASI. According to Moore’s law, computational power roughly doubles every two years, which suggests there may not be a limit to technology’s eventual power.
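As a back-of-the-envelope illustration of that compounding, we can ask how long Moore’s-law doubling would take to close the hardware gap mentioned earlier, from Tianhe-2’s 33.86 petaflops to the ~1 exaflop brain estimate. This is a rough extrapolation, not a prediction:

```python
import math

# If capability doubles every two years, how long to go from
# 33.86 petaflops (Tianhe-2) to ~1 exaflop (the brain estimate)?
start, target = 33.86e15, 1e18
doublings = math.log2(target / start)  # how many doublings are needed
years = doublings * 2                  # two years per doubling
print(f"{years:.1f} years")
```

The answer works out to just under a decade of doublings, which is one reason forecasters like Kurzweil put human-level hardware within reach on that kind of timescale.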
One of the roadblocks to ASI is the complexity of global problems. Can machines really solve world hunger or stop climate change? Additionally, ASI will need an exceptional amount of data, even relative to AGI. Some believe that using genetic engineering to create a super-intelligent group of humans is the best bet at ASI, while others posit that ASI will involve a new generation of supercomputers.
The Future of AI
Although we still have a long way to go before AGI and ASI, AI is moving quickly, with new discoveries and milestones emerging all the time as the field combines with data science. Relative to human intelligence, AI holds promise for being able to multitask, perfectly recall and memorize information, function continuously without breaks, make calculations at record speed, sift through lengthy records and documents, and make unbiased decisions with the assistance of programmers, data scientists, machine learning engineers, and deep learning researchers.
Recently, Google’s AlphaZero taught itself chess through reinforcement learning and defeated a top chess engine in a 100-game match, and IBM built a system that can provide formidable competition in world-class debate competitions.
As AI continues to take over more jobs, there are big debates over the ethics of AI and whether governments should step in to monitor and regulate growth. AI could transform human relationships, increase discrimination, invade personal privacy, pose security threats through autonomous weapons, and even, in some doomsday scenarios, end humanity as we know it.
These issues may sound daunting, but they also make the study of AI all the more intriguing and impactful.
Since you’re here…
Thinking about a career in data science? Enroll in our Data Science Bootcamp, and we’ll get you hired in 6 months. If you’re still playing the field, take a peek at our free data science curriculum, and don’t forget to peep our student reviews. The data’s on our side.