How To Split Test Your Email Campaigns and Flows: A No-BS Guide
If you’re not split testing your email campaigns and flows, you’re leaving money on the table. Split testing (or A/B testing) is how you find out what works and what doesn’t, so you can maximize your results.
What Can You Split Test?
Subject lines, content, send times, calls to action (CTAs). Read this page for ideas on what to split test; this guide covers HOW to do it.
1. Start With a Hypothesis
- What Are You Testing? Before you jump in, know exactly what you want to find out. Are you testing subject lines, call-to-action buttons, or email design? Start with a clear hypothesis, like “I think a personalized subject line will increase open rates.”
- Why It Matters: If you’re just guessing, you’re wasting your time. Be specific about what you’re testing and why it matters for your business.
2. Set Up Your Split Test in Campaigns
- Pick Your Variable: Stick to one variable at a time to get clear results. If you test more than one thing at once, you won't know which change actually made the impact.
- Divide Your Audience: Split your audience into two (or more) groups: one group gets Version A, the other gets Version B. Make sure the groups are large enough to give you statistically significant results. A 50/50 split gives each version the most data, so make that your default.
- Run the Test: Send out your emails and let the data roll in. Give it enough time to gather meaningful results; don't call it too early. The bigger the data pool, the better.
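To make the 50/50 split concrete, here's a minimal Python sketch of random audience assignment. Klaviyo and most email platforms handle this for you, but the principle is the same; the function name and email list here are purely illustrative.

```python
import random

def split_audience(subscribers, seed=42):
    """Randomly assign subscribers to two (near-)equal groups.

    Random assignment keeps the groups comparable, so any difference
    in results comes from the change you made, not from how the list
    happened to be ordered (e.g. alphabetically, or by signup date).
    """
    shuffled = list(subscribers)           # copy; the original list stays untouched
    random.Random(seed).shuffle(shuffled)  # fixed seed makes the split reproducible
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]

# Example: 10 subscribers split 50/50
emails = [f"user{i}@example.com" for i in range(10)]
group_a, group_b = split_audience(emails)
print(len(group_a), len(group_b))  # 5 5
```

The point of shuffling first: if you just cut the list down the middle, the two halves may differ in some systematic way (oldest subscribers first, for instance), which would contaminate your test.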
3. Split Testing in Flows
- Identify the Key Points: In email flows, the key points for testing are usually the subject lines, email content/offers, and time delays. Pick one element to test at a time.
- Use Conditional Splits: Klaviyo makes it easy to set up conditional splits in your flows. For example, you can test whether a 1-day delay between emails performs better than a 3-day delay.
- Track the Results: Watch how each version performs in real time. Are people opening and clicking more on one version? Adjust your flow based on what’s working.
4. Analyze the Results
- Look at the Metrics: Open rates, click-through rates, conversion rates—these are your bread and butter. See which version performed better across these metrics.
- Focus on the Outcome: Did the change move the needle? It’s not just about getting more clicks; it’s about getting the right action, whether that’s a purchase, sign-up, or something else.
- Don’t Overreact: One test doesn’t tell the whole story. Look for patterns over time before making big changes.
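"Did the change move the needle?" has a standard answer: a two-proportion z-test tells you whether the gap between Version A and Version B is real or could just be noise. A minimal Python sketch (the function name and numbers are illustrative):

```python
import math

def ab_significance(conversions_a, sent_a, conversions_b, sent_b):
    """Two-proportion z-test: how likely is it that the observed
    difference between Version A and Version B is just random noise?"""
    p_a = conversions_a / sent_a
    p_b = conversions_b / sent_b
    # Pooled rate: the overall conversion rate if A and B were identical
    p_pool = (conversions_a + conversions_b) / (sent_a + sent_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
    z = (p_b - p_a) / se
    return p_a, p_b, z

# Example: A converted 50 of 1,000 recipients; B converted 70 of 1,000
p_a, p_b, z = ab_significance(50, 1000, 70, 1000)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z: {z:.2f}")
```

With these numbers z comes out around 1.88, just under the 1.96 cutoff for 95% confidence. So even though B looks 40% better, this test isn't quite conclusive yet, which is exactly the "don't overreact" point above.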
5. Make Data-Driven Decisions
- Apply What You Learn: If one version clearly outperforms the other, roll with it. Use what you’ve learned to optimize not just this campaign or flow, but future ones too.
- Keep Testing: Just because one thing worked doesn’t mean it’s the best possible option. Keep testing different elements to refine your approach. Again, the more data the better.
- Document Your Wins: Keep a record of what works and what doesn’t. This will save you time and headaches down the road. If you want to download our split testing template, check out the link here.
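Your test log doesn't need to be fancy. As a sketch, here's a minimal CSV logger in Python; the field names, file path, and campaign details are all made-up examples, so adapt them to whatever you actually track.

```python
import csv
import datetime
import os

# One row per finished split test. Field names are illustrative.
FIELDS = ["date", "campaign", "variable", "version_a", "version_b", "winner", "notes"]

def log_test(path, **row):
    """Append one split-test result to a CSV log, writing a header on first use."""
    new_file = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(row)

# Hypothetical example entry
log_test(
    "split_tests.csv",
    date=str(datetime.date.today()),
    campaign="Welcome flow, email 1",
    variable="subject line",
    version_a="Welcome to the club",
    version_b="Your first order, sorted",
    winner="B",
    notes="Open rate 41% vs 35% over 2,400 sends",
)
```

A plain spreadsheet works just as well; what matters is that every test records the variable, both versions, the winner, and the numbers, so you never re-run a test you already have the answer to.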
RECAP: Best Practices for Split Testing
- Test One Thing at a Time: If you test too many variables at once, you won’t know what made the difference. Keep it simple.
- Give It Time: Don’t make decisions based on a few hours of data. Let the test run long enough to gather solid insights.
- Use Significant Sample Sizes: Make sure you’re testing with a big enough audience to get meaningful results. Small samples can give you false positives.
- Repeat and Refine: Just because you’ve found one winner doesn’t mean you’re done. Keep testing, keep refining, and keep getting better.
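"Big enough" can be estimated before you hit send. As a rough sketch, here's the standard two-proportion sample-size formula in Python, fixed at 95% confidence (z = 1.96) and 80% power (z = 0.84); the function name is illustrative.

```python
import math

def sample_size_per_group(base_rate, relative_lift):
    """Rough per-group sample size needed to detect a relative lift
    in a rate (standard two-proportion formula, 95% confidence,
    80% power)."""
    p1 = base_rate
    p2 = base_rate * (1 + relative_lift)  # the rate you hope to detect
    p_bar = (p1 + p2) / 2
    numerator = (1.96 * math.sqrt(2 * p_bar * (1 - p_bar))
                 + 0.84 * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# Example: baseline 20% open rate; how many recipients per group
# to reliably detect a 10% relative lift (20% -> 22%)?
print(sample_size_per_group(0.20, 0.10))
```

With these numbers it works out to roughly 6,500 recipients per group, which is why small lists rarely produce conclusive subject-line tests and why the small-sample "false positives" above are so common.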