A/B (C) Testing for Cold Emails – Increase Open, Click, and Reply Rates

Updated on November 21, 2022

We know that creating the perfect cold outreach campaign on your first try is difficult, and there is no single right path to a perfect cold email template.


Landing on a winning cold email takes a bit of trial and error, and that means a lot of testing…


There is no magic shortcut to knowing which cold email template will perform best and really boost the results of your cold outreach campaign. It takes a lot of time, sweat, tears, and motivation for an SDR to get to the bottom of the “perfect” cold outreach.

So, this is where A/B (C) testing steals the show.


And yes, you read that right!

A/B/C Testing for cold emails is the new logic behind campaign testing: it lets you test the content and performance of each individual email template, rather than testing at the campaign level.


Instead of comparing the results manually, one by one, A/B/C testing lets you measure and compare the effectiveness of your cold email outreach in depth.

Let’s go over what it means and how you can implement it.

What is A/B (C) Testing?

Standard A/B Testing is a well-known method of testing different email versions within one email campaign to check which of them is performing better.


The usual A/B Testing for cold emails compares 2 versions of a single email element, such as:

  • different subject lines for the same email content
  • 2 completely different email templates
  • different opening lines for the same email content
  • the results of entire sequences, compared to one another


The good thing about this method is that you can measure each element, but only one at a time. Ideally, you would test email subject lines in one A/B experiment, different email templates in another, and so on.


The core of A/B/C Testing is quite similar to the original A/B Testing, but in this case, you can test 3 different versions at a much deeper level.
Let me explain how we did this at Sales.Rocks.

The A/B/C Testing on Sales.Rocks

Instead of limiting the testing to the sequence level, we’ve implemented an option that will allow you to do A/B/C Testing on the email level, regardless of where that email sits in the branch of the sequence. As I like to call it – the AB-Ception.


Let’s start with the first email.

When we start a cold outreach sequence, the first action is usually ‘sending an email’ to the prospect. You can split this first action into 3 different email versions whose performance you want to test.


Let’s say I want to test the email subject of the first email action. I can set up the same email template and change only the subject line across all 3 versions, which will be distributed 33%/33%/33% across the prospect list.


So, 33% of the prospects will receive the email with the 1st email subject, the other 33% will receive the email with the 2nd email subject and the last 33% will receive the email with the 3rd email subject.
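The even three-way split described above can be sketched in a few lines of Python. This is my own illustration of the splitting logic, not the Sales.Rocks implementation; the prospect addresses are made up:

```python
import random

def split_into_variants(prospects, n_variants=3, seed=42):
    """Shuffle the prospect list, then deal it round-robin into
    n_variants groups so each gets an (almost) even share."""
    shuffled = prospects[:]
    random.Random(seed).shuffle(shuffled)
    labels = [chr(ord("A") + i) for i in range(n_variants)]
    return {label: shuffled[i::n_variants] for i, label in enumerate(labels)}

prospects = [f"prospect_{i}@example.com" for i in range(90)]
groups = split_into_variants(prospects)
print({label: len(members) for label, members in groups.items()})
# → {'A': 30, 'B': 30, 'C': 30}
```

Shuffling before dealing matters: if your list is sorted by company or sign-up date, a plain slice would bias each variant toward a different kind of prospect.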


I know it’s a lot of numbers, but once you get the logic, you’ll discover the significant benefit you can get out of this A/B/C Testing.


Okay, so far so good.

Now let’s say I want to test a different follow-up template for the people who received the 1st version of the email subject.


The next action I set up in Sales.Rocks is another email action, where I split the email into 2 versions, A and B, distributed 50:50 across that branch of the prospect list (a C version is also possible at this step if you want to go with 3 different follow-up emails).


In this part of the sequence, I can test the email content on the same A/B/C testing sample: the 33% of the prospects I initially contacted with the 1st subject line.


On the 2nd A/B/C testing sample, we can test another email element, such as different opening lines, and on the 3rd sample, we can compare the performance of images, GIFs, videos, links, and so on.


So rather than creating separate A/B Testing cold outreach campaigns, you can test different email and content types on the same A/B sample (the same mailing list you initially chose for the testing).

These steps can be done on any email action you add in Sales.Rocks sequences, and each email action can be split into 3 email versions for testing purposes.
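The branch-level splitting described in the steps above (the “AB-Ception”) can be modeled as splits applied recursively to each branch. Again, this is a hypothetical sketch with invented names, not the product’s internals:

```python
import random

def split(audience, labels, seed=0):
    """Deal a shuffled audience round-robin into len(labels) even groups."""
    shuffled = list(audience)
    random.Random(seed).shuffle(shuffled)
    return {label: shuffled[i::len(labels)] for i, label in enumerate(labels)}

prospects = [f"p{i}" for i in range(120)]

# Step 1: first email action, three subject-line variants, ~33% each.
step1 = split(prospects, ["A", "B", "C"])

# Step 2: only the branch that received subject A gets its follow-up
# split again, 50:50, into two template variants.
step2 = split(step1["A"], ["A1", "A2"])

print(len(step1["A"]), len(step2["A1"]), len(step2["A2"]))
# → 40 20 20
```

Because each follow-up split only ever draws from its own branch, results stay attributable: everyone in group A1 saw the same subject line first and the same follow-up after.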

A/B/C Testing on Sales.Rocks will be released with V3

Checking the results of your A/B/C Testing

By following the A/B/C logic, you should have a pretty good idea of the performance of each email template sent within your sequence.


You should check every metric we mention below and compare it against the change you made to the email element. Some changes will move the numbers more than others, but that is the nice part of being able to compare results at the email level.

Here is why you should do at least an A/B test in your cold emails

Using A/B testing for your outreach campaign will give you a significant advantage. 

It can help you reach goals quicker in the short term, as long as you know how to analyze your cold email data, and it is especially helpful for tackling existing issues with your outreach campaigns.

Let’s say as an SDR you are running a cold outreach campaign with the goal of booking more meetings.


What are the most important stats you should follow?


  • Are your emails landing where they need to land or do you need to check your deliverability?
  • Are your emails getting opened by the recipients or do you need to do some tweaks in the subject lines?
  • Are people clicking on the links you provide in your email content?
  • How many of them interact with the video you’ve sent or the image you’ve prepared for them?
  • How many of them are urged to reply to you?
  • Last, but not least, how many meetings have you booked with your outreach campaign?


All of these can be answered when doing proper A/B Testing.
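Once the raw counts come back, the rates behind those questions are straightforward to compute per variant. A minimal sketch; the counts below are made-up illustration data, not real campaign results:

```python
def variant_metrics(stats):
    """Compute open, click, and reply rates (percent of sent) per variant."""
    rates = {}
    for variant, counts in stats.items():
        sent = counts["sent"]
        rates[variant] = {
            "open_rate": round(100 * counts["opened"] / sent, 1),
            "click_rate": round(100 * counts["clicked"] / sent, 1),
            "reply_rate": round(100 * counts["replied"] / sent, 1),
        }
    return rates

# Illustrative counts for three subject-line variants:
stats = {
    "A": {"sent": 100, "opened": 62, "clicked": 18, "replied": 9},
    "B": {"sent": 100, "opened": 48, "clicked": 15, "replied": 6},
    "C": {"sent": 100, "opened": 55, "clicked": 21, "replied": 7},
}
rates = variant_metrics(stats)
print(rates["A"])  # → {'open_rate': 62.0, 'click_rate': 18.0, 'reply_rate': 9.0}
```

Keeping all variants on a percent-of-sent basis makes them directly comparable even if the splits are not perfectly equal.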

Best Practices For Email A/B/C Testing

When following the A/B/C Testing logic, you need to follow a few soft recommendations in order to conduct a successful test:

  • When doing the split, preferably change only one element of the email. Otherwise, you can’t attribute the performance of that sample, as more than 1 variable will differ between recipients
  • When creating the campaign, make sure you have a large enough A/B/C testing audience. The sample size will depend on the industry and the experiment you want to run
  • Run the test on the same audience and ICP. You might find that your “best-performing” email template doesn’t perform so well with another ICP
  • Make sure you consider the timing. If your emails are scheduled to be sent on different days, this will skew open rates, reply rates, etc.
  • Gather the data and optimize to improve. Take the email templates that performed best and put them to the test again to create one best-performing sequence
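On the sample-size point: a quick way to check whether the gap between two variants is more than noise is a two-proportion z-test. A minimal sketch, with invented counts for illustration:

```python
import math

def two_proportion_z(conv_a, sent_a, conv_b, sent_b):
    """z-score for the difference between two conversion rates
    (e.g. the open rates of variants A and B)."""
    p_a, p_b = conv_a / sent_a, conv_b / sent_b
    p_pool = (conv_a + conv_b) / (sent_a + sent_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
    return (p_a - p_b) / se

# 62/100 opens for variant A vs 48/100 for variant B:
z = two_proportion_z(62, 100, 48, 100)
print(round(z, 2), abs(z) > 1.96)  # |z| > 1.96 ≈ significant at ~95%
```

With around 100 sends per variant, even a 14-point open-rate gap only just clears the significance bar; smaller lists need bigger gaps before you can trust the winner.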


When implementing A/B/C testing, you can optimize the process and track additional performance metrics without changing the A/B sample or the audience; the best tests are done on the same sample.


You can test different sending times, and play with triggers and delays to get the results you are looking for, and ultimately find the ‘perfect email template’ that will boost your cold email outreach.

CMO at Sales.Rocks - Jana believes in an analytical approach to marketing and building a story around it.