“I’m 99 percent sure blueberries are safe for dogs” and other CRO secrets with Shiva Manjunath

May 15, 2025
Meet Shiva Manjunath, Senior Web Product Manager of CRO at Motive and host of the "From A to B" podcast. He dives deep into the realities of building robust experimentation programs, sharing insights on navigating organizational hurdles, adapting strategies on the fly, and proving CRO's value far beyond simple A/B tests.

The experimentation engine: more than just testing

You like to draw an interesting parallel between scientific discovery and effective experimentation in CRO. Could you expand on that?

Absolutely. Think about understanding a disease. There’s a scientific process involved: understanding the disease itself, the pathways, how it affects humans, and then exploring substances that might offer a solution.

It maps quite directly to the experimentation and CRO space. You need to conduct qualitative and quantitative research, data analytics – basically, thorough research on your users to understand their problems on your site. Only then can you figure out the core problems, hypothesize how to solve them, and eventually offer potential solutions through tests.

Where do you see companies commonly failing when they approach experimentation?

Many people fail because they think experimentation is just about running A/B tests. That’s a superficial view. They miss that so many things factor into the inputs for experimentation. If you’re not doing the research groundwork – understanding the user, the problem – you’re not only running a poor experimentation program, but you’re fundamentally doing digital marketing wrong. It’s like launching paid search ads based on a whim without any backing. That’s likely going to be a bad paid search program. It’s the same principle. You need that foundation.

You’ve discussed 'spaghetti testing' – testing random ideas. What's your take on that versus using data-backed hypotheses?

That’s something Ryan Lucht and I talked about – throwing things at the wall to see what sticks. I go back and forth on this. For me, if you’re going to spaghetti test, there has to be a solid foundation: your analytics and experimentation platform must be very inexpensive and efficient. Plus, your program’s resourcing needs to be robust enough to actually squeeze in a couple of ad hoc ideas.

Sometimes, things you don’t expect or don’t have data for might work. That’s a fair argument. Ryan’s point is often that if the effort is low for all tests, trying more things makes sense. If you can get more ‘at bats’, it might be worth it.

However, if your experiments are expensive, or if you’re a solo CRO person spending maybe 60% of your time on this and are incredibly busy, then trying random things becomes costly. The probability of a random idea working is likely lower than one backed by data. In that scenario, I have to ruthlessly prioritize ideas with data backing them. It comes down to prioritization frameworks like RICE (Reach, Impact, Confidence, Effort). If your effort score is consistently low, perhaps spaghetti testing works. But for many programs, getting even one test out the door requires high effort and cost.
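
For readers unfamiliar with RICE, here is a minimal sketch of how that scoring plays out; the idea names, numbers, and scales are illustrative assumptions, not figures from the interview:

```python
# Minimal RICE scoring sketch: RICE = (Reach * Impact * Confidence) / Effort.
# Reach: users affected per quarter; Impact: expected effect (0.25-3 scale);
# Confidence: 0-1; Effort: person-weeks to build and launch the test.
from dataclasses import dataclass


@dataclass
class TestIdea:
    name: str
    reach: float
    impact: float
    confidence: float
    effort: float

    @property
    def rice(self) -> float:
        return (self.reach * self.impact * self.confidence) / self.effort


ideas = [
    TestIdea("Data-backed checkout copy fix", reach=40_000, impact=1.0, confidence=0.8, effort=2.0),
    TestIdea("Spaghetti idea: move the hero CTA", reach=40_000, impact=0.5, confidence=0.3, effort=1.0),
]

for idea in sorted(ideas, key=lambda i: i.rice, reverse=True):
    print(f"{idea.name}: RICE = {idea.rice:,.0f}")
```

Under this formula, the only lever that lets a low-confidence "spaghetti" idea climb the list is a very low effort score, which is exactly the trade-off described above.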

The human element: navigating politics and priorities

Beyond cost and effort, how does the internal company environment and politics impact an experimentation program?

Politics has a major impact. I don’t mean global geopolitics; I mean navigating relationships within your company. Even if testing is cheap technically, you encounter people. Sometimes, an influential person just doesn’t like an idea, and that’s that. If they have a ‘C’ (as in, CXO) in their title and you don’t, their opinion often carries the day. That’s part of the game.

Ideally, you could try anything, especially if the effort is low. But you still deal with humans. The more I talk to people, the more I realize that internal politics and whether experimentation is truly valued are huge barriers. If key people with influence don’t value it, that’s a major hurdle. Conversely, if that problem is fixed, people invest more, making testing cheaper, improving strategy, and creating a positive feedback loop. But if that initial buy-in link breaks, everything else struggles.

It seems the human aspect is critical. You discussed the importance of stability in leadership and teams. Why is that so crucial?

I initially explored the impact of stable leadership with Lukas Vermeer, but the conversation broadened to leadership’s impact in general. You need stability not just in leadership, but in any stakeholder team surrounding experimentation, for it to truly grow.

Think of it like a plant: if its roots grow deep and wide, it’s hard to pull out. But if you keep replanting it, the roots never take hold. Lukas mentioned Booking.com had stable leadership and teams for years, allowing the experimentation program to set roots.

In contrast, where I’ve seen frequent leadership changes, or even key team members leaving, you constantly have to start over, reworking processes. Instead of building on a solid foundation, you’re constantly fixing a flooded basement. You can’t build a great home theater if you keep spending resources fixing the basement. That’s the analogy I’m rolling with.

Does that stability also tie into building credibility for the program?

Exactly. We touched on credibility with Lukas. If you’re at a place long enough, people remember you and your contributions. You establish roots. Lukas was well-known at Booking not just for being amazing, but because he delivered high-level work consistently over time.

That’s a challenge for CRO professionals – a Catch-22. If a company doesn’t allow you to grow the program due to politics or resources, you get frustrated and leave before establishing that foundation. Why stay if you feel blocked? But you need time for the program and recognition to mature. Our field can feel fragile; people sometimes expect quick or unrealistic results, like demanding a specific conversion rate lift. CRO often gets blamed even when factors like traffic quality are outside its control. I can’t run enough button color tests to make unqualified traffic convert.

From roadblocks to results: adapting and proving value

Given these challenges, what does it take for a CRO program to be set up right and succeed?

That’s the key question, and there’s no single answer. It depends, because CROs must be malleable and adapt to their situation.

For example, at one company (Company A), we had great teams, but inconsistent resourcing was my biggest roadblock. I audited my own program – optimized the optimization program, essentially. The solution was using third-party agencies for engineering help to build tests. That adaptation addressed my specific issue there.

At another company (Company B), I had a similar resource problem but couldn’t use an agency. My adaptation was working closely with the PMM and engineering team. We found ways I could support their releases with testing, and negotiated for more support in return. It was about finding a workable process within their constraints.

So, you sometimes have to operate without ideal resources or top-level buy-in initially?

Yes. At another place (Company C), neither previous approach worked. I lacked dedicated resources, and top leadership wasn’t bought into testing. My direct bosses hired me for it, but the C-suite resisted, thinking, “Testing loses sometimes, we just want to do things”— which seemed like questionable logic.

It might sound like a mishire, but you can still make an impact. I don’t recommend this universally, but I was hired for A/B testing, so I decided to do it anyway. I learned basic code, worked with the visual editor, and started running tests, informing my supportive boss. When we got winners, we rolled them out.

Just because you don’t have the perfect setup doesn’t mean you can’t make an impact. Will they be the absolute best results? Maybe not. But sometimes showing small impacts builds momentum and changes perceptions. People might say, “We won’t give you resources to test button colors”—I actually heard that. In those cases, I had to do the work, show the results, and then hear, “Oh, you can do that? I didn’t know. Here are more resources.”

Can you share an example of a win that really opened doors?

Yes! At Gartner – a great moment – we ran a test. About a week in, the analytics team reached out, thinking something broke because conversion rates were skyrocketing. I checked my test results and realized, “Oh, that’s us!” It wasn’t even a major visual change.

We did our due diligence to ensure it wasn’t just faulty analytics tracking. Everything checked out. The winner genuinely drove a massive lift, impacting the bottom line so much the analytics team flagged it as a potential bug. Once leadership saw that connection, they directed significant resources to the experimentation program. That single instance built credibility and secured dedicated resourcing for years, allowing us to run more tests and find more wins. Sometimes it just takes that one “red pill” moment for people to get it.
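
As an illustration of that kind of due diligence, here is a minimal sketch of a two-proportion z-test one might run before trusting a suspiciously large lift; the conversion counts are invented for illustration, not Gartner’s numbers:

```python
# Sketch: two-proportion z-test to sanity-check a suspiciously large lift.
# The counts below are hypothetical.
from math import sqrt
from statistics import NormalDist


def two_proportion_ztest(conv_a: int, n_a: int, conv_b: int, n_b: int):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a, p_b, z, p_value


p_a, p_b, z, p = two_proportion_ztest(conv_a=480, n_a=12_000, conv_b=690, n_b=12_100)
print(f"control {p_a:.2%} vs variant {p_b:.2%}, z = {z:.2f}, p = {p:.4f}")
```

A tiny p-value only rules out noise; it cannot rule out broken tracking, which is why verifying the analytics implementation itself is part of the check.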

Learning from losses & looking ahead

That’s a fantastic success story. But do you also find value when tests don't win?

My favorite tests are often the ones that prove me wrong because they teach you something. As Michael Aagaard discussed, you’re not in testing just to prove yourself right; you’re in it out of genuine curiosity. If a test wins, great, it helps the business. If it loses, also great – you avoided a negative impact. Hopefully, you learned something from the loss to inform future winning tests.

Losing tests are interesting because they disprove my hypothesis. I run tests to learn. When I learn something new or surprising, that’s valuable. I don’t just see a loser and think, “It lost.” Okay, maybe briefly. But mostly I think, “Okay, what can I learn? What happened? What’s next?”

Can you give an example of a losing test that taught you something important?

Sure. There’s a common idea that fewer form fields mean more conversions. We tested this by taking a five-step form down to two steps, reducing fields from about ten to five or six. It was dramatically shorter. And, it lost. Big time.

Cutting steps hurt conversions significantly, which is ‘technically’ counter-intuitive. Digging into results and qualitative research revealed people lost trust. They thought, “Why talk to you when you only asked for basic contact info? You claim expertise but haven’t gathered info about my needs.” The brevity conveyed a lack of seriousness, eroding trust.

The removed fields asked about industry or problems. That built trust, signaling a more productive conversation. For complex software, just giving contact info means the sales call starts with basic discovery. Asking qualifying questions in the form, like a salesperson would, actually helped. In fact, we later tested adding a step to a form, and it increased conversions by about 15%. It shows you must test assumptions, as intuition about “best practices” can be wrong.
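
To give a sense of what detecting a lift of that size takes, here is a rough sample-size sketch; the baseline conversion rate, significance level, and power are assumptions for illustration, not figures from the actual test:

```python
# Rough sample size per variant to detect a ~15% relative lift in form conversions.
# Baseline rate, alpha, and power are illustrative assumptions.
from statistics import NormalDist


def sample_size_per_arm(p_base: float, rel_lift: float, alpha: float = 0.05, power: float = 0.8) -> float:
    p_var = p_base * (1 + rel_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_power = NormalDist().inv_cdf(power)
    variance = p_base * (1 - p_base) + p_var * (1 - p_var)
    return (z_alpha + z_power) ** 2 * variance / (p_var - p_base) ** 2


print(round(sample_size_per_arm(p_base=0.05, rel_lift=0.15)))  # roughly 14,000 visitors per arm
```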

That’s a great point about challenging assumptions. How do you view the role of AI in experimentation today?

We could discuss AI at length. My general feeling aligns with folks like Slobodan Manić: just because you can use AI doesn’t mean you should, or that it’s valuable. People tend to see AI output and automatically trust it. Don’t. If AI suggests something, ask why, check the source. Don’t blindly trust it. AI can hallucinate; remember the meme asking for fruits ending in ‘um’? It invents them. It provides plausible answers, not necessarily truth.

Sometimes AI is wrong because its underlying data is flawed or lacks nuance. As Chris Mercer noted, use AI to evaluate its process, not just the answer. Understand how it solved a problem. If the method is flawed, adjust it.

Asking AI, “Analyze my website and give me A/B test ideas,” is, in my opinion, a poor use. The best ideas come from your own research and data. A good use case might be, “Summarize themes and sentiment from these 100 user reviews,” or analyzing a dataset for outliers (a rough sketch follows below). These are valuable tasks where AI assists but still requires supervision. People ask AI to do entire jobs it isn’t equipped for. It provides an output, but that doesn’t mean the output is good or that it followed a sound process.
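
As a concrete example of the “analyze a dataset for outliers” use case, here is a minimal sketch of the kind of check a human can run, or use to verify what an AI hands back; the daily conversion rates are invented:

```python
# Sketch: flag outlier days in a daily conversion-rate series so a human
# (or an AI under supervision) can investigate them. Numbers are hypothetical.
from statistics import mean, stdev

daily_cvr = [0.031, 0.029, 0.033, 0.030, 0.058, 0.032, 0.028]  # one illustrative week

mu, sigma = mean(daily_cvr), stdev(daily_cvr)
outliers = [(day, cvr) for day, cvr in enumerate(daily_cvr, start=1)
            if abs(cvr - mu) > 2 * sigma]
print(outliers)  # [(5, 0.058)] -- a day worth investigating, not blindly trusting
```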

It’s like relying on a tool for a process you don’t understand. Know the work, then use AI to enhance it, not replace strategic thinking. AI averages towards best practices, rarely yielding breakthroughs. You need personalized insights – like getting blood work done for specific deficiencies, not just taking a generic multivitamin. Don’t take the multivitamin approach with AI. Do the blood work on your website.

Speaking of programs doing things right, which companies' experimentation programs would you like to peek inside?

Duolingo is definitely one. They publish interesting work and seem to have a strong culture. I also admire how they’re framing roles around product experience marketing. Close second would be Spotify. They have their own tool and very smart people like Luke Frake. Those are probably my top two. Separately, I’d love to work with an NHL team, particularly the Rangers, on their analytics.

Finally, what's the best piece of career advice you’ve received, and what key resources do you recommend for someone starting in CRO?

The best advice is to be problem-focused and solution-oriented. This applies tactically—identify user problems, test solutions—but also crucially with stakeholders. Instead of telling engineers, “Work faster,” approach them with the problem: “I need to increase test velocity but face challenges with build complexity. Can you help find solutions?” Show the problem, don’t dictate. It’s collaborative and more productive.

For resources starting out: On the paid side, CXL’s courses are amazing, taught by fantastic minds. It’s expensive but valuable (my genuine opinion, not a paid plug). I’m skeptical about very cheap courses. On the cheaper/free side: books! Erin Weigel’s “Design for Impact” is great, Talia Wolf has one on emotional marketing, and Rommil Santiago has “Prove It or Lose It” coming. These offer practical, tactical advice, not just theory. And podcasts: selfishly, mine, “From A to B,” is free and ad-free. Also check out Slobodan Manić’s “No Hacks” and Juliana Jackson’s “Standard Deviations”—both are brilliant.
