Managing infrastructure demand: What to do if your game is the next Pokémon GO (Part 2)

by Sean Webster on Jul 27, 2016

Last week I covered some basic marketing and monetization tips that can be useful if your mobile game becomes an instant hit like Pokémon GO or suddenly takes a huge leap in the charts. This week I’d like to offer advice on making sure your server infrastructure can handle an uptick in traffic.

As soon as your game starts climbing in the charts, you need to be proactive about the backend and your overall infrastructure — after all, the last thing you want is to have tons of growth and interest only to have your servers crash to the point where users can’t even play the game.

I talked to Jeune Asuncion, a senior operations engineer at AppLovin with lots of experience scaling our own infrastructure as we’ve grown, and he suggested the following strategies to head off server problems in the context of a mobile game taking off:

  • While we all know caching is important, remember that it applies to just about everything — cache whatever you can, from database calls to assets. You can cache application-level data with memory-based caching systems, content delivery networks provide high availability for static assets and reduce network latency, and you can even cache dynamically generated responses by setting up web server accelerators in house or using third-party services. Make sure you have a short-lived cache in place: return the cached version while refreshing the data at a set interval (every minute or even less). This greatly reduces load. As they say, “Cache is King.”
  • Eliminate single points of failure. If all of your applications write to one database instance, what happens if it goes down? The entire application may stop working. Running multiple database instances with data replicated among them is one solution, and other techniques like partitioning can go a long way toward spreading workload and increasing performance. This idea is applicable to all parts of the stack. When in doubt, make a diagram of the whole system and pinpoint potential points of failure. Ask yourself, “What happens if this component goes down?” Then address those points by adding redundancy. Depending on your architecture, this can mean adding more servers, using a robust networking protocol, or simply not being locked into one hosting provider.
  • Automate infrastructure management. With the advent of IaaS, it’s tempting to manage infrastructure manually from a web-based user interface. This process is simple, but it doesn’t scale when you need to cater to fast-paced, rapidly changing environments. When you automate infrastructure management, you reduce costs, increase speed, and reduce the risk of errors and security violations. The added bonus is that, by doing so, you increase visibility across teams.
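To make the first tip concrete, here is a minimal sketch of the short-lived cache described above — an in-process cache that serves stored values until they expire, so an expensive call (say, a leaderboard query) runs at most once per interval. This is an illustrative toy, not anyone’s production code; the `TTLCache` class and the `"top_scores"` key are hypothetical, and at real scale you would more likely reach for a shared system like memcached or Redis so all your servers hit the same cache:

```python
import time

class TTLCache:
    """Tiny in-process cache with a time-to-live (TTL) per entry.

    Serves a stored value until it expires, so the expensive
    computation runs at most once per TTL window.
    """

    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expiry_timestamp, value)

    def get_or_compute(self, key, compute):
        now = time.time()
        entry = self._store.get(key)
        if entry is not None and entry[0] > now:
            return entry[1]  # still fresh: skip the expensive call
        value = compute()    # cache miss or expired: recompute
        self._store[key] = (now + self.ttl, value)
        return value

# Hypothetical example: cache a leaderboard query for 60 seconds,
# so thousands of players per minute trigger only one database hit.
cache = TTLCache(ttl_seconds=60)
leaderboard = cache.get_or_compute("top_scores", lambda: ["alice", "bob"])
```

The design choice here is the one from the bullet above: accept data that is up to a minute stale in exchange for collapsing a flood of identical requests into a single backend call.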

While these guidelines can make your application scalable, Jeune also shared that scalability is only one part of the equation. Developers should always be on the lookout for ways to optimize their application for performance. This can necessitate, among other things, rewriting backend systems — historically an inevitability once application usage starts growing. Twitter has a great backstory on how its engineers rewrote its code base.

In short, do everything you can to make sure you’re prepared for every eventuality when you see terrific growth, so you are never in the position of failing to retain users because your servers crash.

Sean Webster is AppLovin’s senior director of Business Development.

We’re hiring! Apply here.