How Cash App Built a Nine-Figure Paid Funnel with First Growth Hire Dyan Khor

Dyan Khor joined Cash App as their first growth marketer, scaling paid acquisition from zero to nine figures while maintaining a 2x LTV to CAC ratio. After building and leading growth at Cash App, she went on to lead global growth at Speak, managing teams across Asia and Europe. Dyan now advises companies on marketing and is launching a network to connect growth talent with her clients. In this interview, she shares insights on building measurement infrastructure, scaling growth programs, and maintaining incrementality at scale.

Early Career and Role at Cash App

What was the initial challenge when you joined Cash App?

When I joined Cash App, they were already growing naturally through peer-to-peer networks. If two of us were splitting lunch and both running Cash App, we’d probably be fighting over who got to invite the other. But that viral growth would only extend as far as our networks could go.

As Cash App expanded beyond peer-to-peer into card, bitcoin, investing, and banking products, they became interested in figuring out whether LTV could justify CAC. With just peer-to-peer, there wasn’t a lot of LTV, so it wasn’t necessarily time to think about paid or affiliate growth. But once the card started becoming valuable, we saw much higher LTVs.

What was your mandate when you first started?

I was asked to come in and build the program and the business case for paid acquisition, which sounds relatively simple but was actually tricky. It was a bit of a chicken-and-egg problem – to build out paid acquisition, we needed attribution and the actual business case first. What is the existing customer LTV? As you bring in customers, what is the LTV of those customers? What’s the CAC you can justify?

But also, what is the CAC of those customers? That’s why you have to build in attribution. They were looking for someone who could do both – build the paid program with all the qualitative factors like positioning and creative, but also build the attribution stack. This was 2020, so iOS 14 was looming in the future. How were we going to build this to be future-proof?

How did your background prepare you for this role?

My background is actually in analytics first – I started in analytics and then moved into marketing. I’ve owned analytics and marketing in a lot of roles. There’s an interesting tension between wanting to say that your ROI is high, but then also wanting your attribution to be as accurate as possible. Coming from analytics, I could look at both sides objectively when examining channels like Google UAC campaigns with brand search and display placements.

Building Measurement Infrastructure

You implemented sophisticated measurement systems early on – what drove that strategy?

We had this massive organic growth engine that made it difficult to understand the true incrementality of our marketing efforts. We knew we needed something more sophisticated than standard attribution. That’s when my former manager, who came from an analytics and Biz-Ops background, suggested looking into causality modeling. We talked to several MMM vendors and realized this could solve multiple challenges at once – measuring incrementality, understanding marginality, quantifying brand impact, and properly attributing organic growth.

What’s interesting is that in 2020, MMM was still primarily the domain of big CPG brands. App companies weren’t really using it yet. But we saw it as a way to future-proof our measurement infrastructure while getting better insights into our true marketing impact.

Your team developed some innovative approaches to making MMM actionable. Can you walk us through that evolution?

The traditional challenge with MMM is that it’s backward-looking and slow. We were getting quarterly readouts with significant delays. Imagine trying to make decisions in December based on data from September – it’s not exactly real-time marketing.

So we developed what we called a multiplier-based model. Instead of waiting for each new MMM readout to make decisions, we would take the incrementality multipliers from our most recent quarter and apply them to our current deterministic data. This meant we could still act on real-time data while using our historical incrementality learnings to inform allocation decisions.
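As a rough illustration of that multiplier approach, the idea is to carry forward the last MMM readout's incrementality multipliers and apply them to live attributed numbers. Every channel name, spend figure, and multiplier below is hypothetical, not Cash App's actual data:

```python
# Illustrative sketch of a multiplier-based incrementality adjustment.
# All channel names and numbers are hypothetical.

# Incrementality multipliers from the most recent MMM readout:
# incremental conversions per deterministically attributed conversion.
mmm_multipliers = {"meta": 0.55, "google_uac": 0.40, "snapchat": 0.70}

# Real-time, deterministically attributed installs from the MMP:
attributed_installs = {"meta": 20_000, "google_uac": 15_000, "snapchat": 5_000}

spend = {"meta": 400_000, "google_uac": 250_000, "snapchat": 80_000}

def incremental_cac(channel):
    """Estimated cost per truly incremental install for a channel."""
    incremental = attributed_installs[channel] * mmm_multipliers[channel]
    return spend[channel] / incremental

for ch in spend:
    print(f"{ch}: incremental CAC = ${incremental_cac(ch):,.2f}")
```

The point is that allocation decisions can be made daily against incrementality-adjusted CAC, rather than waiting a quarter for the next full MMM readout.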

The real innovation came when we started thinking about brand impacts. We built a custom model on top of our MMM that could quantify the impact of media spend on specific brand KPIs – things like awareness and a custom metric we developed around familiarity and favorability. These brand metrics then became inputs into our direct response model, which helped us understand the full picture of our marketing impact.

For example, when we ran TV ads, we could see both the immediate direct response impact and the longer-term awareness effects that would influence acquisition 4-6 weeks later. This was crucial for justifying brand investments and understanding the true ROI of our marketing mix.
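The lagged awareness effect described above resembles a distributed-lag (adstock-style) model. A toy sketch follows; the lag weights are entirely made up, chosen only to peak around the 4-6 week mark:

```python
# Hypothetical sketch: distributing a TV flight's brand-driven acquisition
# effect over later weeks. Weights are invented for illustration.

tv_grps_by_week = [100, 100, 0, 0, 0, 0, 0, 0]  # a two-week TV flight

# Assumed lag weights: most awareness-driven impact lands 4-6 weeks out.
lag_weights = [0.0, 0.05, 0.10, 0.15, 0.25, 0.25, 0.15, 0.05]

def lagged_brand_effect(grps, weights):
    """Convolve weekly GRPs with lag weights to get weekly brand-driven lift."""
    horizon = len(grps) + len(weights) - 1
    effect = [0.0] * horizon
    for t, g in enumerate(grps):
        for k, w in enumerate(weights):
            effect[t + k] += g * w
    return effect

effect = lagged_brand_effect(tv_grps_by_week, lag_weights)
peak_week = max(range(len(effect)), key=effect.__getitem__)
```

Under these assumed weights, the acquisition lift peaks several weeks after the flight ends – which is exactly why a direct-response-only view undervalues brand spend.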

How did you validate and maintain the accuracy of these models?

We learned quickly that model calibration requires constant attention. You can’t just spend the same amount in the same channels month after month and expect useful insights. The model needs variation to understand causality.

While we didn’t always run explicit geo-based holdouts or user-based incrementality tests, we made sure to vary our spend systematically. This meant strategically increasing and decreasing investments across channels and campaigns to help the model understand true cause and effect relationships.

The key was finding the right balance. Traditional incrementality testing is still valuable – we’d recommend doing it periodically to validate your models. But with a well-calibrated MMM, you don’t need to run these tests as frequently. Instead, you can focus on creating natural experiments through your regular spending patterns that keep your models accurate and actionable.

Channel Strategy & Growth

Once you had the measurement infrastructure in place, how did you approach scaling your paid channels?

We started with a test budget of about a million dollars and quickly saw strong positive ROAS. The open question was whether that return would hold as we scaled, given diminishing marginal returns – and we eventually leveled out at around a 2x LTV to CAC ratio with nine figures of spend, sustained for years. The scale equation came together quickly after building out the attribution and product marketing pieces.
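The marginality question comes down to the gap between blended and marginal CAC. A minimal sketch with a hypothetical saturating response curve (none of these parameters are Cash App's) shows why blended CAC can still look fine while the marginal dollar has already crossed your ceiling:

```python
import math

# Hypothetical diminishing-returns curve: installs as a concave
# function of spend. All parameters are illustrative.

def installs(spend):
    """Saturating response: doubling spend yields less than double installs."""
    return 50_000 * math.log1p(spend / 100_000)

def blended_cac(spend):
    """Average cost per install across all spend so far."""
    return spend / installs(spend)

def marginal_cac(spend, step=1_000):
    """Cost of the *next* installs at this spend level."""
    return step / (installs(spend + step) - installs(spend))
```

With any concave curve like this, marginal CAC rises above blended CAC as spend scales – which is why a program can report healthy average ROAS while its last dollars are unprofitable.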

What’s interesting is that our initial hypothesis about what to advertise was completely wrong. I came in thinking we’d mainly advertise peer-to-peer since that’s what people knew Cash App for. But we found much more success advertising the card. It’s something tangible that people could understand from an ad – it’s customizable, looks cool, and has offers. It’s more digestible than trying to explain an app feature through UI screenshots.

How did you determine which products drove true incrementality?

We were fortunate to have a high percentage of Android users because of our lower-income demographics. This allowed us to look at true user-level segmentation and determine actual user-level LTV for paid campaigns. We layered additional logic on top of our MMP attribution to identify truly net-new customers, checking against invites, referrals, affiliates, and other payment mechanisms.

But just because you’re advertising bitcoin doesn’t mean people are going to exclusively use bitcoin. You can’t just take the LTV of a bitcoin customer and use that for campaign CAC targets – the attach rate won’t be 100%. You have to look at the actual attach rates from those campaigns and compare them to organic users. Then, after we lost user-level attribution on iOS, we looked at how those ratios varied between iOS and Android to get the best possible understanding of customer value.
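Translating attach rates into a campaign CAC target might look like the following sketch. The product names come from the interview, but every LTV and attach-rate figure is invented for illustration:

```python
# Hypothetical attach-rate-adjusted LTV for a "bitcoin" ad campaign.
# A bitcoin ad doesn't only bring bitcoin users, so campaign LTV is a blend.

product_ltv = {"p2p": 5, "card": 60, "bitcoin": 90}  # illustrative LTVs ($)

# Observed attach rates among users acquired by the bitcoin campaign
# (illustrative; note bitcoin attach is well under 100%):
campaign_attach = {"p2p": 0.90, "card": 0.35, "bitcoin": 0.40}

def blended_ltv(attach_rates, ltvs):
    """Expected LTV of a campaign's users, weighted by product attach rates."""
    return sum(attach_rates[p] * ltvs[p] for p in attach_rates)

ltv = blended_ltv(campaign_attach, product_ltv)
max_cac_at_2x = ltv / 2  # target CAC for a 2x LTV:CAC ratio
```

In this made-up example, the campaign's blended LTV lands well below a pure bitcoin customer's LTV, so a CAC target based on the latter would overspend.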

Your approach to incrementality testing was quite rigorous. What did you learn about channel effectiveness?

Looking at my Google UAC campaigns, with brand search and display mixed in, I could see exactly how each placement performed. The incrementality varied significantly. Display campaigns are famously non-incremental – with our fairly generous click window, they showed pretty poor incrementality.

The key was carving out granular data. You can’t look at search as an overall channel – you have to separate non-brand versus brand. You can’t look at Google UAC as an entire channel. You can’t even look at Meta’s entire channel the same way – you need to carve out different placements like Audience Network versus owned and operated inventory. Once you get that granular, you start seeing the true story of what’s working.



Current Trends & Lessons

How did you approach working with platforms to improve channel performance?

The first time we ran an incrementality test with Snapchat, it didn’t come back very incremental. But looking deeper, it made sense – they didn’t have incrementality data to optimize toward. When you’re running install campaigns and they’re optimizing for highest propensity installs, of course it’s going to be less incremental for a viral product.

The question became: how do we improve this? The platforms need us to tell them there’s a problem before they can fix it. It’s the same with Google or any platform that has multiple properties – they may be able to provide more data or suggest what could go into the model. But fixing incrementality starts with admitting you have a problem. Your partners might help you solve it, but if they don’t, you need to figure out where else to spend.

How do you see the relationship between brand size and incrementality?

Small brands’ ads tend to be more incremental because they don’t have as much organic volume. Large brands probably run into the most incrementality issues because they have an existing brand. That’s why they need the incrementality resources the most – they’re trying to get more juice for the squeeze.

At Cash App’s scale, you start having existential questions. If people need to use Cash App, are they going to use it anyway? It depends on where you are in the spectrum. But if you’re not a household brand, you definitely can do advertising that’s incremental. If you objectively look at it, the channels that you expect to be less incremental usually are, and you can test your way into finding what works.

What’s your approach to mentorship and building professional relationships in the growth space?

Throughout my career, I’ve found that the most valuable relationships are with people who think differently than I do. These diverse perspectives have been crucial to my growth at every stage. While I’ve never formalized these relationships into paid coaching arrangements, I believe in organic mentorship that comes from genuine connection and mutual interest in helping each other grow.

I particularly enjoy connecting with talented growth marketers and analytics professionals. Often, these relationships start with specific problem-solving discussions or career development conversations, and naturally evolve into ongoing professional relationships. When I work with companies looking for talent, or when I’m hiring, I’m able to leverage these connections to create opportunities for people in my network.

I’m always open to new connections, with just one guideline: come with a specific question or topic to discuss. You can find more information about connecting with me on these topics on my site.

Final Thoughts

What advice would you give to growth teams building measurement infrastructure today?

Get as granular as possible when looking at your data and be really honest about attribution. There’s often tension between wanting to show high ROI and wanting attribution to be as accurate as possible. But if you objectively look at channels and know both sides of it, you can make better decisions.

Look through the data, be precise about campaign structure, and test your way into understanding what works. When we increased spend in different channels, we’d ask – does this hold true on a relative basis when we increase it again? Most things you expect tend to be true – channels you don’t think will be incremental tend not to be incremental.

How should marketers think about incrementality at different stages of growth?

With large brands, you have to wonder at a certain point – if people are going to use your product, will they use it anyway? Do your ads matter? That’s different from smaller brands where paid channels tend to be more incremental because you don’t have as much organic volume.

But regardless of size, there’s always something that can be figured out. Whether it’s campaign structure, targeting, segmentation – if all your channels aren’t incremental, something’s going wrong. The key is being honest about what’s working and what isn’t.

 

Dyan Khor is currently building a platform to help startups find and mentor their first growth hires. Connect with her on LinkedIn to learn more about her work in growth marketing and measurement.
