Uncertainty and performance in mobile marketing

by Katie Jansen on May 19, 2017


This is a guest post by Eric Benjamin Seufert. Eric runs a London-based mobile marketing and analytics consultancy, Heracles, as well as Agamemnon, an analytics platform for mobile marketers, and Mobile Dev Memo, the mobile marketing trade blog. Prior to Heracles, Eric was the VP of User Acquisition at Rovio. In 2014, Eric published Freemium Economics, a book about the freemium business model, through Elsevier.


If you asked a random sample of mobile marketers about the metrics they most militantly track, it’s likely that a majority would mention the measures of how much money their advertising spend generates: Return on Ad Spend (ROAS*) or Return on Investment (ROI).

Companies with reasonably sophisticated analytics infrastructure are generally able to track these metrics on a daily or at least weekly basis: they'll follow the ROAS of acquired cohorts and often evaluate campaign performance using a rough heuristic tied to their LTV timeline. For instance, a mobile marketing team might find that a campaign ROAS of 10% at Day 2 has historically corresponded to an ROI of 110% at the LTV day for which the traffic was purchased (e.g., Day 180), so they'd look for 10% Day 2 ROAS on campaigns and adjust bids up or down relative to that benchmark.
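That heuristic is straightforward to mechanize. The sketch below is illustrative only (the function names and every number are hypothetical, not drawn from any real campaign): it computes a cohort's Day 2 ROAS and nudges a bid toward a 10% benchmark, capping each update so that one noisy day doesn't whipsaw the bid.

```python
def day2_roas(revenue_by_day, spend):
    """Fraction of spend recouped by the end of Day 2 (Days 0-2)."""
    return sum(revenue_by_day[:3]) / spend

def adjust_bid(current_bid, observed_roas, target_roas=0.10, max_step=0.20):
    """Scale the bid by observed/target ROAS, capped at +/-20% per update."""
    ratio = observed_roas / target_roas
    step = max(1 - max_step, min(1 + max_step, ratio))
    return round(current_bid * step, 2)

# A cohort recouping 12% of spend by Day 2 gets its bid raised;
# one at 5% gets cut, but only by the capped 20%, not halved outright.
```

For example, `adjust_bid(2.00, 0.12)` raises a $2.00 bid to $2.40, while `adjust_bid(2.00, 0.05)` cuts it to $1.60 rather than $1.00, because the cap damps overreaction to a single noisy reading.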

A process like this scales pretty easily with the right tools: plug a team of UA managers into a marketing analytics system that ingests data from an attribution partner and the app’s own analytics, provide them with revenue guidance (“ROAS should be 10% at Day 2”), and have them cycle ad creatives and update bids. But this process is also singularly focused on unit economics: LTV exceeding CPI at the level of the individual user. Orienting a marketing function exclusively around unit economics without consideration for any other factors that affect the company’s growth isn’t practical for most developers for two reasons:

  1. It’s rare that a developer has the financial wherewithal to give their marketing team the mandate to spend as much as it wants, so long as that spend is profitable. Although companies like this certainly exist, most developers don’t have the balance sheet heft to simply spend away without worrying about monthly profit and loss. Since user CPIs are paid upfront and user LTVs are generated over some period of time (90 days, 180 days, etc.), some thought has to be put into budget allocation with respect to topline revenue growth.
  2. More important than the purely administrative problem of revenue scheduling, a myopic fixation on unit economics prevents companies from operating outside of a very specific set of user acquisition channels. It's common to hear UA managers say that they "measure everything," which sounds reasonable on the surface, but in practice it means they don't utilize channels that can't be precisely measured. Direct response channels (click-to-install ads run via mobile ad networks and on owned-inventory sources like Facebook) are the most straightforward and transparent sources of new users on mobile, but they're not the only sources, and they're not necessarily even the most lucrative. The "measure everything" mentality is actually intellectually lazy: it usually just means that a team is plugged into a robust marketing analytics system that is entirely dedicated to direct response channels and does the heavy lifting for them.
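The cash-flow constraint in point 1 is easy to see with a toy model. In this sketch (the function name and all numbers are hypothetical), spend is paid upfront and recouped along an assumed payback curve; tracking the cumulative cash position shows how deep the trough gets even when every cohort is ultimately profitable.

```python
def cash_position(monthly_spend, payback_curve):
    """Cumulative net cash by month when spend is paid upfront but each
    cohort's revenue arrives over time. payback_curve[i] is the fraction
    of a cohort's spend recouped i months after acquisition."""
    months = len(monthly_spend) + len(payback_curve)
    net = [0.0] * months
    for m, spend in enumerate(monthly_spend):
        net[m] -= spend                      # CPIs paid upfront
        for i, frac in enumerate(payback_curve):
            net[m + i] += spend * frac       # LTV trickles in later
    position, running = [], 0.0
    for month_net in net:
        running += month_net
        position.append(round(running, 2))
    return position

# Two $100 cohorts that each return 120% of spend over three months
# still leave the company $120 in the hole at the low point.
```

For instance, `cash_position([100, 100], [0.2, 0.4, 0.6])` returns `[-80.0, -120.0, -20.0, 40.0, 40.0]`: both campaigns are profitable at maturity, but the developer needs $120 of working capital to get there, which is exactly the balance-sheet constraint described above.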

Direct response marketing should be an important component of any mobile developer’s growth program, but if it represents the entirety of that program, the developer is leaving money on the table: they’re only operating in the most competitive market for mobile users’ attention and they’re missing out on potential first-mover advantage on new but opaque channels (Pinterest and Twitter when they launched self-serve ads, Snapchat, etc.).

“Performance marketing” isn’t synonymous with “direct response marketing.” What “performance marketing” means is that a team has a model that capably predicts revenue based on marketing spend, no matter where that spend was directed. A performance marketing growth model can start with assumptions and be tuned over time with real campaign data; there’s no reason every click and every install needs to be precisely measured in order for a team to be marketing with an eye to performance.
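One minimal way to "start with assumptions and tune with data" (a sketch under assumed numbers, not a prescribed method) is to treat the assumed ROAS as a prior with a spend-denominated weight and blend in observed ROAS in proportion to the spend behind it:

```python
def update_roas_estimate(prior_roas, prior_weight, observed_roas, observed_spend):
    """Spend-weighted blend of an assumed ROAS and an observed one.
    prior_weight is the amount of spend at which the assumption and
    the observed data would count equally."""
    total_weight = prior_weight + observed_spend
    return (prior_roas * prior_weight + observed_roas * observed_spend) / total_weight
```

Starting from an assumed 10% ROAS weighted like $1,000 of spend, observing 6% on $500 of real spend pulls the estimate down to about 8.7%; as real spend accumulates, the data progressively dominates the assumption, which is the "updating priors" behavior the paragraph above describes.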

Uncertainty is simply impossible to avoid in a dynamic, chaotic field like mobile marketing. Nothing is stable, exogenous factors can have massive impacts on campaign performance, and some participants are so big that they can move market prices in a matter of hours. Comfort with uncertainty and the intellectual flexibility to experiment and update priors is the sine qua non of mobile marketing; part of that comfort with uncertainty is being able to look beyond directly attributable traffic acquisition to a more elegant form of performance.

But assumption-based models are actually hard to build: there are a number of moving parts to any mobile growth model, and once you incorporate non-direct channels that rely on things like organic uplift, short-term viral bursts, and increases to baseline direct response campaign performance in order to produce profit, non-programmatic tools become inadequate. This is the reason I built Agamemnon: to provide mobile marketers with the ability to build big, broad, channel-agnostic growth models that accommodate all types of traffic channels, not just direct response. Agamemnon models focus on profit, not unit economics, and they allow marketers to track revenue and expenses over time as cohorts of users mature.
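Without speaking to how Agamemnon implements any of this, the core idea of a profit-focused, channel-agnostic cohort model can be sketched in a few lines. Here the organic uplift multiplier and the per-install revenue (ARPI) curve are exactly the kind of assumptions a marketer would tune over time:

```python
def cohort_profit(spend, paid_installs, organic_uplift, arpi_curve):
    """Cumulative profit for one cohort as it matures. Organic installs
    attributed to the campaign (paid_installs * organic_uplift) monetize
    along the same assumed per-install daily revenue curve as paid installs."""
    total_installs = paid_installs * (1 + organic_uplift)
    profit, revenue = [], 0.0
    for daily_arpi in arpi_curve:
        revenue += total_installs * daily_arpi
        profit.append(round(revenue - spend, 2))
    return profit

# $1,000 buying 500 installs with an assumed 50% organic uplift:
# the profit curve climbs toward zero as the cohort matures.
```

Note that the model answers "is this spend profitable?" rather than "is each install profitable?": a channel with poor directly attributable unit economics can still clear the bar once its assumed organic uplift is counted.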

*ROAS is the percentage of ad spend that has been recouped through an acquired cohort's revenue contribution, whether through in-app purchases, ad views, or both. Some companies also include the revenue generated by organic installs that they attribute to their paid activity as a result of virality; in that case the numerator is the combined revenue of acquired and organic users and the denominator is the campaign cost.
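Expressed as code (an illustrative function, not any company's standard), the footnote's definition, with the optional organic component, is simply:

```python
def roas(cohort_revenue, campaign_cost, attributed_organic_revenue=0.0):
    """Revenue recouped as a fraction of spend. Pass attributed organic
    revenue only if, like some companies, you credit viral installs
    to the campaign that generated them."""
    return (cohort_revenue + attributed_organic_revenue) / campaign_cost
```

So `roas(50, 500)` is the 10% figure used in the Day 2 example earlier, while `roas(50, 500, 10)` would report 12% under the more generous virality-inclusive definition.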

Katie Jansen is AppLovin’s Chief Marketing Officer. In addition to her work, Katie is an advocate for women in tech and equality in the workplace.