This week we’re publishing a series of posts looking back at the technologies and advancements Facebook engineers introduced in 2017. Come back each day to learn about a different part of the stack.

Data centers are the foundation of our global infrastructure. Meeting the growing demands of serving more than 2 billion people every month comes with the dual challenges of increasing our total capacity while keeping our energy footprint low.

This year we announced four new data center locations — Odense, Denmark; Papillion, Nebraska; New Albany, Ohio; and Henrico, Virginia — growing our total fleet to 11 locations across the U.S. and Europe. We started serving traffic at our Fort Worth data center and announced expansions at our existing sites in Altoona, Iowa; Clonee, Ireland; and Los Lunas, New Mexico. As these new buildings come online over the next few years, they’ll help us scale to support more people on Facebook sharing even more immersive experiences like live video, 360 photos and videos, and AR/VR.

Our data centers are some of the most advanced and energy-efficient facilities in the world, and we’re committed to powering all of our new data centers with 100% clean and renewable energy. In addition to helping develop new sources of solar, wind, and hydroelectric energy at our data center locations, we also contribute excess energy back to the local grids and work with utilities to help grow the market for clean energy.

Inside our data centers, we’ve continued to innovate on our hardware to scale and improve the performance of our apps and services. At the 2017 OCP Summit we announced an end-to-end refresh of our hardware:

  • Bryce Canyon, our new storage chassis, will be used primarily for high-density storage, including photos and video. It provides increased efficiency and performance over its predecessor, as well as a 4x increase in compute capability.
  • Big Basin succeeds Big Sur, our first GPU server. With 16GB of GPU memory, it can train machine learning models up to 30 percent larger, letting us experiment faster and with more complex models.
  • Tioga Pass is our next-generation CPU server, used for a variety of compute services at Facebook. Upgrades make Tioga Pass more flexible and allow us to maximize the memory configuration as needed.
  • Yosemite v2 is a refresh of our original multi-node compute platform. It supports hot service, meaning the other servers in a chassis continue to operate while one sled is pulled out for servicing, so uptime is uninterrupted.

This year we also migrated our user database from InnoDB to MyRocks, cutting the number of servers and amount of storage we use to manage the social graph in half. This frees up room on those machines for other purposes, helping us to make our data centers as efficient as possible as we grow to serve more than 2 billion people.
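At a much smaller scale, this kind of storage-engine migration can be sketched with standard MySQL statements. The sketch below is illustrative only: the database and table names are hypothetical, and it assumes a MySQL build with the MyRocks (RocksDB) storage engine enabled — it does not reflect the tooling Facebook used for the fleet-wide migration.

```shell
# Hypothetical sketch: moving one table from InnoDB to MyRocks.
# Assumes a local MySQL server built with the RocksDB engine plugin.
mysql --user=admin social_db <<'SQL'
-- Check which storage engine the table currently uses.
SELECT table_name, engine
  FROM information_schema.tables
 WHERE table_schema = 'social_db'
   AND table_name   = 'graph_edges';

-- Rewrite the table under MyRocks. The data is re-encoded into
-- RocksDB's compressed LSM-tree format, which is where the
-- storage savings come from.
ALTER TABLE graph_edges ENGINE = ROCKSDB;
SQL
```

In practice a migration of this size is done by rebuilding replicas on the new engine and promoting them, rather than altering tables in place.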

Of course, efficient data storage is only half the battle. We also need to make sure traffic can flow quickly within a data center, among our global data center regions, and between our data centers and the people who use Facebook. Come back tomorrow for a look at how we’ve boosted our network infrastructure and the engineering milestones we’ve achieved toward our goal of improving connectivity in under-connected corners of the world.
