It’s 8:00 on a Tuesday morning. You’ve awoken, scanned the headlines on your phone, responded to an online post, ordered a holiday sweater for your mom, locked up the house, and are driving to work, listening to some great new music on the radio.

You’ve also used artificial intelligence (AI) more than a dozen times — to be roused, to call up the local weather report, to purchase a gift, to secure your house, to be alerted to an upcoming traffic jam, and even to identify an unfamiliar song.

AI is already pervasive in our world, and it’s making a huge difference in our everyday lives. But this is not the AI you’ve seen in sci-fi movies, with nervous scientists clacking on keyboards, trying to stop machines from destroying the world.

Your smartphone, house, bank, and car already use AI on a daily basis. Sometimes it’s obvious, like when you ask Siri to get you directions to the nearest gas station, or Facebook suggests a friend for you to tag in an image you posted online. Sometimes less so, like when you use your Amazon Echo to make an unusual purchase on your credit card (like that goofy holiday sweater) and don’t get a fraud alert from your bank.

AI is going to bring major shifts in society through developments in self-driving cars, medical image analysis, better medical diagnosis, and personalized medicine. And it will also be the backbone of many of the most innovative apps and services of tomorrow. But for many it remains mysterious.

To help unwrap some of this mystery, Facebook is creating a series of educational online videos that outline how AI works. We hope these simple and short introductions will help everyone understand how this complex field of computer science works.

No magic, just code

To begin, there’s something important to know: AI is a rigorous science focused on designing intelligent systems and machines, using algorithmic techniques somewhat inspired by what we know about the brain. Many modern AI systems use artificial neural networks: computer code that emulates large networks of very simple interconnected units, a bit like neurons in the brain. These networks can learn from experience by modifying the connections between the units, much as human and animal brains learn by modifying the connections between neurons. Modern neural nets can learn to recognize patterns, translate languages, perform simple logical reasoning, and even create images and formulate new ideas. Recognizing patterns is particularly important: AI is good at recognizing patterns in large amounts of data, something that is not as easy for humans.
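
To make the idea concrete, here is a minimal sketch (not Facebook's code) of a tiny neural network of the kind described above: a handful of simple units joined by weighted connections, which learns the toy XOR function by repeatedly adjusting those connections. It assumes only NumPy, and the task, sizes, and learning rate are invented for illustration.

    # Toy neural network: two layers of "connections" (weights), trained to
    # compute XOR by repeatedly nudging the weights. Illustrative sketch only.
    import numpy as np

    rng = np.random.default_rng(0)

    # Four input examples and the correct answers (the XOR function).
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(size=(3, 4))   # 2 inputs + bias unit -> 4 hidden units
    W2 = rng.normal(size=(5, 1))   # 4 hidden units + bias unit -> 1 output unit

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def with_bias(a):
        # Append a constant column so each unit also has a bias connection.
        return np.hstack([a, np.ones((a.shape[0], 1))])

    for step in range(10000):
        # Forward pass: each unit sums its inputs and applies a nonlinearity.
        hidden = sigmoid(with_bias(X) @ W1)
        output = sigmoid(with_bias(hidden) @ W2)

        # How wrong the network is, and how to change each connection to do
        # better (gradient descent on the squared error).
        error = output - y
        grad_out = error * output * (1 - output)
        grad_hidden = (grad_out @ W2[:4].T) * hidden * (1 - hidden)
        W2 -= 0.5 * with_bias(hidden).T @ grad_out
        W1 -= 0.5 * with_bias(X).T @ grad_hidden

    print(np.round(output, 2))     # close to [[0], [1], [1], [0]] after training

Real systems differ mainly in scale: millions of units, billions of connections, and vastly more data.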

All of this happens at blinding speed through a set of coded programs designed to run neural networks with millions of units and billions of connections. Intelligence emerges out of the interaction between this large number of simple elements.

Artificial intelligence is not magic, but we have already seen how it can make seemingly magical advances in scientific research and contribute to the everyday marvel of identifying objects in photos, recognizing speech, driving a car, or translating an online post into dozens of languages.

At the Facebook Artificial Intelligence Research (FAIR) lab we are working on getting learning machines to work even better. A large part of this is something called deep learning, which improves AI by structuring neural networks in multiple processing layers. Using deep learning, we can help AI learn abstract representations of the world. Deep learning can help improve things like speech and object recognition, and it can play an important role in advancing research in fields as diverse as physics, engineering, biology and medicine.

One particularly useful architecture for a deep learning system is called a convolutional neural network, or ConvNet. A ConvNet is a particular way of connecting the units in a neural net, inspired by the architecture of the visual cortex in animals and humans. Modern ConvNets may use anywhere from seven to 100 layers of units. Consider how we recognize objects: in a park we can see a collie and a chihuahua and recognize them both as dogs, despite their differences in size and weight. To a computer, an image is simply an array of numbers. Within this array of numbers, local motifs, such as the edge of an object, are detected in the first layer. The next layer detects combinations of these simple motifs that form simple shapes, like the wheel of a car or the eyes in a face. The layer after that detects combinations of shapes that form parts of objects, like a face, a leg, or the wing of an airplane. The last layer detects combinations of parts that form objects: a car, an airplane, a person, a dog, and so on. The depth of the network, with its multiple layers, is what allows it to recognize complex patterns in this hierarchical fashion.
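
As an illustration of that hierarchy, here is a short, hedged sketch of a small ConvNet, written with PyTorch (an assumed choice of library; the article names none). Each convolutional stage plays the role of one of the layers described above, and the sizes and ten categories are invented for the example.

    # Hedged sketch of a ConvNet's layer hierarchy, assuming PyTorch.
    import torch
    import torch.nn as nn

    convnet = nn.Sequential(
        # First layer: detects local motifs such as edges in the pixel array.
        nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        # Next layer: combines edges into simple shapes (wheels, eyes, ...).
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        # Next layer: combines shapes into parts of objects (faces, legs, wings).
        nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        # Last layer: combines parts into whole objects (car, airplane, dog, ...).
        nn.Flatten(),
        nn.Linear(64 * 4 * 4, 10),
    )

    # To a computer, an image is simply an array of numbers: here, one
    # 3-channel 32x32 image filled with random values.
    image = torch.randn(1, 3, 32, 32)
    scores = convnet(image)        # one score per object category
    print(scores.shape)            # torch.Size([1, 10])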

ConvNets are particularly useful for recognizing natural signals like images, videos, speech, music, and even text, once they have been trained with a large database of examples. To train a network well, you need to provide large numbers of images that have been labeled by humans. The ConvNet learns to associate each image with its corresponding label. What’s interesting is that it will also produce good labels for images it has never seen before. The result is a system that can comb through a vast variety of imagery and identify what’s in each photo. These networks are also incredibly useful in speech and text recognition and are a key component of self-driving cars and the latest generation of medical image analysis systems.
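
Training such a network, in the sense described above, amounts to repeatedly showing it labeled examples and adjusting its connections to reduce its errors. The sketch below, again assuming PyTorch, uses a tiny stand-in model and random tensors in place of a real human-labeled image collection.

    # Hedged sketch of training on labeled examples. The random tensors stand
    # in for human-labeled photos; the tiny model and sizes are invented.
    import torch
    import torch.nn as nn

    images = torch.randn(100, 3, 32, 32)        # placeholder "photos"
    labels = torch.randint(0, 10, (100,))       # placeholder human-given labels

    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(16 * 16 * 16, 10),
    )
    loss_fn = nn.CrossEntropyLoss()             # penalizes wrong labels
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    for epoch in range(5):
        for i in range(0, len(images), 20):     # mini-batches of 20 examples
            batch_images = images[i:i + 20]
            batch_labels = labels[i:i + 20]

            predictions = model(batch_images)           # the network's guesses
            loss = loss_fn(predictions, batch_labels)   # how wrong they are

            optimizer.zero_grad()
            loss.backward()      # work out how each connection should change
            optimizer.step()     # nudge the connections accordingly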

What is learnable?

AI also addresses one of the central questions that we as humans grapple with: What is intelligence? Philosophers and scientists have struggled with this question for ages. The answer remains elusive and mysterious, even though intelligence is the central attribute that makes us uniquely human.

At the same time, AI prompts a large philosophical and theoretical question: What is learnable? Mathematical theorems tell us that a single learning machine cannot learn all possible tasks efficiently, so we also get a sense of what cannot possibly be learned, no matter how many resources you throw at it.

In this way, AI machines are very much like us. We don’t always excel at being general learning machines. Our human brains are incredibly specialized, despite their apparent adaptability. Still, current AI systems are very far from having the seemingly general intelligence that humans possess.

In AI, we generally think about three types of learning:

  • Reinforcement learning — This is focused on the problem of how an agent ought to act in order to maximize its rewards, and it’s inspired by behaviorist psychology. In a particular situation, the machine picks an action or a sequence of actions and gets a reward. This is frequently used when teaching machines to play and win games, like chess, backgammon, Go, or simple video games. One issue is that in its purest form, reinforcement learning requires an extremely large number of trials to learn even simple tasks.
  • Supervised learning — Essentially, we tell the machine what the correct answer is for a particular input: here is an image of a car, the correct answer is “car.” It is called supervised learning because the process resembles showing a picture book to a young child: the adult knows the correct answers, and the child makes predictions based on previous examples. This is the most common technique for training neural networks and other machine learning architectures. An example might be: given the descriptions of a large number of houses in your town together with their prices, try to predict the selling price of your own home (a minimal sketch of this appears right after this list).
  • Unsupervised learning / predictive learning — Much of what humans and animals learn, they learn in the first hours, days, months, and years of their lives in an unsupervised manner: we learn how the world works by observing it and seeing the results of our actions. No one is there to tell us the name and function of every object we perceive. We learn very basic concepts, like the fact that the world is three-dimensional, that objects don’t disappear spontaneously, and that objects that are not supported fall. We do not yet know how to do this with machines, at least not at the level that humans and animals can. The lack of AI techniques for unsupervised or predictive learning is one of the factors limiting the progress of AI at the moment.
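
Here is the promised sketch of the house-price example: a minimal supervised learner that fits a linear model to a handful of labeled examples and then predicts the price of a house it has never seen. It assumes only NumPy, and all the numbers are invented for illustration.

    # Hedged sketch of supervised learning on labeled house-price data.
    import numpy as np

    # Each row: [floor area in square meters, bedrooms]; labels: sale prices.
    features = np.array([[60, 2], [85, 3], [110, 3], [150, 4], [200, 5]], dtype=float)
    prices = np.array([210_000, 290_000, 340_000, 450_000, 580_000], dtype=float)

    # Add a constant column and fit weights by least squares: "learning" here
    # means choosing the weights that best explain the labeled examples.
    X = np.hstack([features, np.ones((len(features), 1))])
    weights, *_ = np.linalg.lstsq(X, prices, rcond=None)

    # Predict the selling price of a home that was not in the training data.
    my_house = np.array([100.0, 3.0, 1.0])
    print(round(float(my_house @ weights)))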

These approaches are often used in AI, but there are many problems that are inherently difficult for any computing device. This is why even if we build machines with super-human intelligence, they will still have limited abilities. They may beat us at chess, but not be smart enough to get in out of the rain.

Jobs of the future

As AI, machine learning, and intelligent robots become more pervasive, there will be new jobs in manufacturing, training, sales, maintenance, and fleet management of these robots. AI and robots will enable the creation of new services that are difficult to imagine today. But it’s clear that health care and transportation will be among the first industries to be completely transformed by them.

For young people just sorting out their career goals, AI offers a wealth of opportunities. So how do we prepare for jobs that don’t yet exist?

If you’re a student:

  • Math and physics classes are where one learns the basic methods for AI, machine learning, data science, and many of the jobs of the future. Take all the math classes you possibly can, including Calc I, Calc II, Calc III, Linear Algebra, Probability, and Statistics. Computer science, too, is essential; you’ll need to learn how to program. Engineering, economics, and neuroscience are also helpful. You may also want to consider some areas of philosophy, such as epistemology, the study of what knowledge is, what a scientific theory is, and what it means to learn.
  • The goal in these classes is not simple rote memorization. Students must learn how to turn data into knowledge. This includes basic statistics, but also how to collect and analyze data, how to stay aware of possible biases, and how to guard against self-delusion through biased data manipulation.
  • Find a professor in your school who can help you make your ideas concrete. If their time is limited, you can also look toward senior PhD students or postdocs to work with.
  • Apply to PhD programs. Forget about the “ranking” of the school for now. Find a reputable professor who works on topics that you are interested in, or pick a person whose papers you like or admire. Apply to several PhD programs in the schools of these professors and mention in your letter that you’d like to work with that professor, but would be open to working with others.
  • Engage with an AI-related problem you are passionate about. Start reading the literature on the problem and try to think about it differently than what was done before. Before you graduate, try to write a paper about your research or release a piece of open source code.
  • Apply for industry-focused internships to get hands-on experience on how AI works in practice.

If you’re already involved in a career and want to pivot to AI:

  • You can get a broad idea of what deep learning is about by going through tutorial lectures that are available online. There are plenty of online materials, tutorials, and courses on machine learning, including Udacity and Coursera lectures. Other useful resources include an overview paper in Nature that I wrote with Yoshua Bengio and Geoff Hinton, with lots of pointers to the literature: https://scholar.google.com/citat…; the Deep Learning textbook by Goodfellow, Bengio, and Courville; and a recent series of eight lectures on deep learning that I gave at Collège de France in Paris (taught in French and later dubbed in English).
  • You may also want to go back to school. If so, see the instructions above.

The future

Increasingly, human intellectual activities will be performed in conjunction with intelligent machines. Our intelligence is what makes us human, and AI is an extension of that quality.

On the way to building truly intelligent machines, we are discovering new theories, new principles, new methods, and new algorithms that have applications and will improve our everyday life today, tomorrow, and next year. Many of these techniques quickly find their way into Facebook products and services for image understanding, natural language understanding, and more.

When it comes to AI at Facebook, we have one long-term goal: Understand intelligence and build intelligent machines. That’s not merely a technology challenge, it’s a scientific question. What is intelligence, and how can we reproduce it in machines? Ultimately, that quest is humanity’s quest. The answers to these questions will help us not just build intelligent machines, but develop keener insight into how the mysterious human mind and brain work. Hopefully, it’ll also help us all better understand what it means to be human.

You can see more of our AI explainer videos here.
