Maybe you’ve heard of Facebook’s old engineering mantra: Move fast and break things. The company dumped the “break things” part years ago, but today it’s moving faster than ever.
At its own Systems@Scale conference Thursday, Facebook engineers detailed several parts of a computing infrastructure massive enough to serve the 2.2 billion people who use Facebook. One of those details: Facebook now updates its service’s core software at least 10 times more frequently than it did about a decade ago.
“When I joined Facebook in 2009, we pushed [an update to] that main application tier … once a day. That was an epic thing,” said Jay Parikh, Facebook’s head of engineering and infrastructure. Now, though, the site “is getting pushed maybe every one or two hours,” he said.
And updates come faster even though Facebook has more than 10 times as many servers in its data centers, 20 times the engineers updating its software and more than 10 times the users it did a decade ago, he said. Oh, and it’s got more than a billion people using Instagram, WhatsApp and Facebook Messenger now, too.
The glimpse into Facebook’s inner workings is unusual. In other industries — say, banking or railroads or auto making — this kind of operational detail is often tightly guarded to keep competitors from getting an edge. But in the tech industry, opening up can actually help a company get ahead.
Opening up helps the technology ecosystem — hardware, software and the people who put it all together — keep up better with Facebook’s needs. The problems Facebook finds are likely to be the ones others in the industry encounter as they grow, too.
Facebook has had to work hard to speed things up, since the natural tendency of organizations is to slow down as projects grow larger, guarding against the rising risk that any change breaks something, Parikh said. To get there, Facebook’s operations mission is now “move fast with stable infra.”
Tools to run tech companies at massive scale
At the conference, engineers from Facebook and other tech companies, including Amazon, Shopify, Lyft, Google and Yahoo, gave talks and asked questions of their peers. These are folks for whom operating a data center packed with thousands of servers is last decade’s challenge. Today’s difficulties span multiple data centers around the globe — how do you synchronize data or get a second data center to take over when there’s a problem with the first?
“You’re building something billions of people are going to be impacted by on a daily basis. That is cool, but equally scary,” Parikh said.
Frequent updates are key to fix problems, add new features and run experiments to see what works best. Facebook has to make the changes without disrupting operations at colossal scale — 65 billion messages and 2 billion minutes of voice and video chats per day on WhatsApp, 8 billion Facebook Messenger messages per day between businesses and their customers, and more than 10 million Facebook Live videos on New Year’s Eve.
The audience was hungry for answers.
“Do you run containers directly on the bare metal or on the virtual machines?” one asked Facebook. And another: “Do you guys disable swap on the host machines?” These are folks who live in the world of tools like Spanner, Chef, OpenCensus, Kubernetes, MySQL, Kafka, Canopy and btrfs.
And Facebook added a little more jargon to the mix Thursday. It announced two projects: load-aware distribution, which improves how updates are sent to millions of servers, and OOMD, a utility that responds more gracefully when computers run out of memory.
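To give a sense of the idea behind OOMD: rather than waiting for the Linux kernel’s last-resort OOM killer, tools in this style watch the kernel’s pressure stall information (PSI) in /proc/pressure/memory and intervene early. The sketch below is purely illustrative — it is not Facebook’s actual OOMD code, and the threshold value is an arbitrary example:

```python
# Illustrative sketch of PSI-based memory-pressure detection,
# the mechanism OOMD-style tools build on. Not Facebook's code.

def parse_memory_pressure(psi_text):
    """Parse text in the /proc/pressure/memory format, e.g.
    'some avg10=0.00 avg60=1.25 avg300=0.80 total=12345',
    returning the averaged stall percentages from the 'some' line."""
    for line in psi_text.splitlines():
        fields = line.split()
        if fields and fields[0] == "some":
            return {
                key: float(value)
                for key, value in (f.split("=") for f in fields[1:])
                if key != "total"  # 'total' is a raw microsecond counter
            }
    raise ValueError("no 'some' line found in PSI text")

def should_intervene(pressure, avg10_threshold=40.0):
    """Flag when the 10-second stall average crosses a threshold;
    a real daemon would then pick a victim process to kill."""
    return pressure["avg10"] >= avg10_threshold
```

A daemon built this way polls the pressure file every few seconds and kills a memory hog before the whole host stalls — the “more graceful” response the announcement describes.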
Facebook starts building its own technology
Under Chief Executive Mark Zuckerberg, Facebook got its start with a few servers tucked into racks of computing gear hosted by data center specialists. By 2009, Facebook was buying off-the-shelf servers from companies like Dell and Hewlett-Packard. But the mainstream technology approach couldn’t keep pace with Facebook’s challenges, so Facebook decided to build its own technology.
“We’re designing infrastructure from the dirt on up,” Parikh said, with 14 or 15 data centers dotted around the world and hundreds of smaller sites closer to all of us who use Facebook’s services.
“This system is ever-growing, with things I’d never thought we’d have to do, like building our cable systems in the ocean and the ground for connecting our infrastructure,” Parikh said. The number of companies that build their own long-haul fiber-optic links is small — Google just announced this week that it’s building its own transatlantic cable — but the investments can pay off for big enough companies.
“We’re pushing the boundaries of things that help us advance our infrastructure,” Parikh said.