One of the most innovative components of SparkPost’s Deliverability Analytics is the Blocklist Impact feature.

Even the healthiest sender occasionally ends up on a blocklist. This can create a lot of confusion, since it’s often not clear which mailbox providers use which blocklists, or what the resulting impact is on deliverability and inbox placement.

SparkPost’s Deliverability Analytics Blocklist Impact feature works to address the confusion by quantifying the impact, tailored to your unique recipients.

We love data here at SparkPost, and we love sharing how we make your data work better for you. So, we wanted to pull back the curtain and show how we developed this great new feature.

We’ll discuss how we used the depth and breadth of data unique to SparkPost to create a simple yet powerful tool. We wrote the first part of this blog post with a general audience in mind. We’ll get into the technical details, for all the math geeks out there, in the second part of the post.

**Part 1: What the heck is Blocklist Impact?**

Many senders don’t think much about blocklists until they learn they’ve landed on one.

Blocklists are the virtual informants of the email world, while mailbox providers are the virtual police – taking action on information from the informants. It’s ultimately up to the mailbox providers to determine which informants to listen to and how they choose to act. Examples of blocklists include Spamhaus, Cloudmark Sender Intelligence (CSI), and Spamcop.

Some typical actions we see mailbox providers take when an IP or sending domain is blocklisted include putting emails from that IP/sending domain in the spam folder, delaying the email, or bouncing it altogether. However, not all blocklists are created equal!

There are thousands of blocklists. Most will have no impact on your sending. A select few are used by enough mailbox providers that the impact is likely to be significant (Spamhaus is the prime example). And of course, there are some blocklists in the “gray area” that may or may not impact your sending, depending on which mailbox providers make up your recipient list.

Without understanding the impact of a blocklist on your specific list, you can spend hours or days trying to resolve a blocklisting that has no material effect on your sending, while simultaneously missing a blocklisting that will have a major impact. We built SparkPost’s Deliverability Analytics Blocklist Impact feature to address this exact problem.

So – the first step is to determine which mailbox providers are using which lists and the impact we should expect. Unfortunately, we can’t just call and ask – so let’s use the next best thing – data!

Most of the major blocklist providers will confirm when an IP address is put on their blocklist and when it is removed. Knowing when an IP address is blocklisted is a start, but that’s not going to tell us the impact.

This is where we leverage our unique volume of data. We can study the deliverability metrics for IP addresses when blocklisted, for each mailbox provider (we track over 200!). We then compare them against the same metrics for IP addresses when *not* blocklisted and determine whether there is a statistically significant, material difference.

The volume of data needed to separate the noise when looking across key metrics, at infrequent events (blocklisting), for over 200 mailbox providers is enormous. We use **billions** of data points to complete this task and refresh every 6 months to keep up with changes in the landscape.

For a technical explanation, please read the second part of this post; in the meantime, here is the tl;dr of the details.

We first start by grouping our data into hourly intervals. For each IP address, we aggregate the volume of bounces, delays, engagement events (opens + clicks), and injections. We use engagement as a proxy since we don’t always know if an email is delivered to the spam folder. Observations with insufficient data are excluded from the model. Below is a mock of how the data appears at this point.
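As a rough sketch of this step (the schema, values, and minimum-volume threshold here are illustrative, not SparkPost’s actual pipeline), the hourly roll-up could look like:

```python
import pandas as pd

# Raw event-level data: one row per event (illustrative schema)
events = pd.DataFrame({
    "ip": ["1.2.3.4"] * 4 + ["5.6.7.8"] * 2,
    "timestamp": pd.to_datetime([
        "2021-01-01 00:10", "2021-01-01 00:40", "2021-01-01 01:05",
        "2021-01-01 01:50", "2021-01-01 00:20", "2021-01-01 00:45",
    ]),
    "injections": [100, 150, 120, 80, 200, 50],
    "bounces": [2, 3, 110, 70, 4, 1],
    "delays": [0, 1, 5, 3, 0, 0],
    "engagement": [20, 30, 1, 0, 60, 10],  # opens + clicks
})

# Aggregate events into hourly observations per IP address
hourly = (
    events
    .groupby(["ip", pd.Grouper(key="timestamp", freq="h")])
    [["injections", "bounces", "delays", "engagement"]]
    .sum()
    .reset_index()
)
hourly["bounce_rate"] = hourly["bounces"] / hourly["injections"]

# Exclude observations with insufficient data (threshold is illustrative)
hourly = hourly[hourly["injections"] >= 100]
```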

The next step in the process is to begin looking at each of the blocklists. For a given blocklist – we look at each mailbox provider and classify each observation as blocklisted or non-blocklisted. Below is a mock of the data at this next point.
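In spirit, the classification step checks each observation’s timestamp against the blocklist’s listing intervals; here is a minimal sketch with hypothetical listing data (the function name and interval format are our own):

```python
from datetime import datetime

# Hypothetical listing intervals for one blocklist: (ip, listed_at, delisted_at)
listing_intervals = [
    ("1.2.3.4", datetime(2021, 1, 1, 1, 0), datetime(2021, 1, 2, 0, 0)),
]

def is_blocklisted(ip: str, ts: datetime) -> bool:
    """True if the IP was actively listed at the observation's timestamp."""
    return any(
        ip == listed_ip and listed_at <= ts < delisted_at
        for listed_ip, listed_at, delisted_at in listing_intervals
    )

# Each hourly observation gets labeled blocklisted / non-blocklisted
print(is_blocklisted("1.2.3.4", datetime(2021, 1, 1, 2, 0)))   # True
print(is_blocklisted("1.2.3.4", datetime(2021, 1, 1, 0, 30)))  # False
```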

For each blocklist-mailbox provider combination, we now have two rich datasets with the aforementioned three key metrics – one dataset aggregating observations when actively on the blocklist and the control dataset – aggregating observations when *not* actively on the blocklist.

In cases where a mailbox provider performs an action, such as block bouncing, to IP addresses actively listed on the given blocklist, we would expect to see a material difference in the bounce rates between the two datasets.

Conversely, we would not expect to see a material difference when the blocklist is not being used. Noise and unrelated factors may still produce differences; hence we apply mathematical techniques, detailed in the second part of this blog, to control for extraneous effects.

In this real example, where we masked the blocklist and mailbox provider, we can see a clear pattern: the bounce rate for the mailbox provider is typically in the single digits when the IP is not listed on the blocklist, but close to 100% roughly 70% of the time when it is listed. This provides strong evidence that this mailbox provider uses the example blocklist.

Conversely, in this example, the bounce rate is in the single digits close to 100% of the time whether or not an IP address is on the given blocklist, providing strong evidence that the mailbox provider does *not* use this blocklist.

For each combination, we distill how different the two distributions are into a single index – the details are in the second part of this blog for those interested. What’s important is that the larger the number, the more different the two distributions are, and hence the more likely it is that the mailbox provider uses the blocklist to take an action (in this example, block bounces). For completely identical distributions, this value is zero.

Working with our team of deliverability experts, we analyzed a spectrum of index values to determine appropriate cutoffs for our three categories: unlikely, possible, and likely. We also have a category for insufficient data. When a sender is placed on a blocklist, we take a weighted average over their unique distribution of recipients’ mailbox providers to communicate the overall impact.
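Conceptually, that sender-level roll-up is just a weighted average over the sender’s recipient mix; here is a toy sketch with made-up scores and shares (the numeric scale is our own illustration, not SparkPost’s actual scoring):

```python
# Hypothetical per-mailbox-provider impact scores for one blocklist
# (e.g. 0 = unlikely, 1 = possible, 2 = likely the provider uses it)
provider_impact = {"provider_a": 2, "provider_b": 0, "provider_c": 1}

# This sender's share of recipients at each provider
recipient_share = {"provider_a": 0.5, "provider_b": 0.3, "provider_c": 0.2}

# Overall impact = weighted average of impact over the recipient mix
overall_impact = sum(
    recipient_share[p] * provider_impact[p] for p in provider_impact
)
print(overall_impact)  # 1.2 on the illustrative 0-2 scale
```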

Had enough math and can’t wait to give SparkPost’s Deliverability Analytics Blocklist Impact feature a try? Get in touch with us for a demo today!

Eat math for breakfast and ready for more? Keep on reading for all the glorious numerical details.

**Part 2: Math Geekery**

In part 2 of this post, we will describe the technical details for how we calculate the blocklist impact index (BII). In this explanation, we will only talk about estimating the effects of block bounces. However, the identical routine is performed for delays and spam foldering (using engagement as a proxy for the latter).

For a given mailbox provider-blocklist combination, we need to estimate both the distribution of bounces for IP addresses when blocklisted (the test distribution) and the distribution when not blocklisted (the control distribution). We write *F*_{0}*(x)* for the latter and *F*_{1}*(x)* for the former. We use the traditional notation where capital *F* represents the cumulative distribution function (cdf) and lowercase *f* represents the probability mass function (pmf).

Recall the table in part 1. We use the columns *bounces*, *injections*, and *bounce rate* to derive the distributions. The right-most column, Blocklist X, facets the data into two sets. We discretize our data by aggregating bounce rate into intervals of length 0.01 over [0, 1], i.e. 0.00-0.01, 0.01-0.02, etc.

Let’s define the following:

- *x*: the lower bound of the interval (e.g. 0.00 for 0.00-0.01)
- *b*_{j}: bounce rate for observation *j*
- *i*_{x}: total injections over the given interval
- *i*_{T}: total injections over the entire distribution

We then estimate *f(x)* as follows:

*f(x)* = *i*_{x} / *i*_{T}

We do include bounce rate = 0 in the first interval. In other words, we estimate the pmf as the sum of injections for all bounce rates in the interval, divided by the total injections.
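As a quick illustration (not SparkPost’s actual code), this estimate is simply an injection-weighted histogram over the 0.01-wide bins:

```python
import numpy as np

def estimate_pmf(bounce_rates, injections, width=0.01):
    """Estimate f(x): injection-weighted histogram of bounce rates."""
    bins = np.arange(0.0, 1.0 + width, width)
    counts, _ = np.histogram(bounce_rates, bins=bins, weights=injections)
    return counts / counts.sum()  # i_x / i_T for each interval

# Illustrative observations: mostly low bounce rates, one near 100%
rates = np.array([0.005, 0.008, 0.95, 0.02])
inj = np.array([1000, 800, 200, 500])
f_hat = estimate_pmf(rates, inj)
print(f_hat[0])  # mass in [0.00, 0.01): (1000 + 800) / 2500 = 0.72
```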

Now that we have an estimate for the two distributions, we need to quantify the disparity. The objective is to derive an index that behaves like a penalty with the following features:

- Zero when the distributions are identical
- Increases when the test distribution (*f*_{1}*(x)*) is greater than the control (*f*_{0}*(x)*) for more adverse values (e.g. higher bounce rates)
- Does not increase when the test distribution (*f*_{1}*(x)*) is greater than the control (*f*_{0}*(x)*) for favorable values (e.g. lower bounce rates)

We begin with the Kullback–Leibler divergence^{1}, generally used in machine learning and information theory:

*D*_{KL}(*f*_{1} ‖ *f*_{0}) = Σ_{x} *f*_{1}*(x)* log(*f*_{1}*(x)* / *f*_{0}*(x)*)

This is the standard Kullback–Leibler divergence (KL div) formula, substituting the notation *f*_{0}*(x)* and *f*_{1}*(x)* we are using for the typical Q and P. The formula is a weighted average of the log-ratio of the pmf’s over each point in the domain. This does give us the properties:

- Zero when the distributions are identical
- Increases when the test distribution (*f*_{1}*(x)*) is greater than the control (*f*_{0}*(x)*)

However, it does not increase more for more adverse values (higher bounce rates), and it penalizes differences just as heavily when the test distribution is more favorable (lower bounce rates). Therefore, we will modify the formula to produce the required properties. For example, in the case shown below, randomness places a large mass of the test distribution at a 0% bounce rate – which would produce a sizable KL div even though the mailbox provider clearly does not use the blocklist to bounce sending IPs.

First let’s introduce two more notations, the median of the control (*F*_{0}) and an indicator function:

*m*: solution to *F*_{0}*(m)* = 0.5

*I*_{A}: indicator of the event *A* = {*x* > *m*}, i.e. 1 when *x* > *m* and 0 otherwise

where *m* can be estimated using interpolation methods. The indicator function allows us to ignore values below the median of the control distribution. For cases where lower values are adverse (e.g. engagement, the proxy for spam foldering), we simply flip the inequality in the indicator function. We can now modify the KL div to produce the *Blocklist Impact Index (BII)*:

*BII* = max(0, Σ_{x} |*x* − *m*| · *I*_{A} · *f*_{1}*(x)* log(*f*_{1}*(x)* / *f*_{0}*(x)*))

The three modifications are:

- *|x − m|*, which applies greater weight for more adverse events and becomes particularly useful in our case, where the control distribution is typically tightly centered around the median
- The indicator function, which allows the BII to ignore favorable values in the test distribution that arise due to randomness
- Flooring the index at 0, which occurs for some edge cases due to the two other modifications
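Putting the modifications together, here is a minimal NumPy sketch of how such an index could be computed (the function name, epsilon smoothing, and example inputs are our own illustrative assumptions, not SparkPost’s production code):

```python
import numpy as np

def bii(f1, f0, x, m, eps=1e-12):
    """Blocklist Impact Index: a modified KL divergence of test vs. control.

    f1, f0: pmf estimates for the test and control distributions
    x:      lower bounds of the bounce-rate intervals
    m:      median of the control distribution
    """
    # KL-style pointwise terms: f1(x) * log(f1(x) / f0(x))
    kl_terms = f1 * np.log((f1 + eps) / (f0 + eps))
    # Modification 1: weight by |x - m|; Modification 2: ignore x <= m
    weighted = np.abs(x - m) * (x > m) * kl_terms
    # Modification 3: floor the index at zero
    return max(0.0, weighted.sum())

# Identical distributions -> index of zero
x = np.arange(0.0, 1.0, 0.01)
f = np.full(100, 0.01)
print(bii(f, f, x, m=0.5))  # 0.0
```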

The BII is the final form that we now use to determine how likely it is that a mailbox provider uses a blocklist for a given action.

NOW have we sated your appetite for equations? We’d love to show you SparkPost’s Deliverability Analytics Blocklist Impact feature in action. Sign up here for a live demo.

Thanks for sticking with us to the end!

~Tim Roy, Data Science Manager and Pragna Sonpal, Data Scientist

^{1}Kullback, S., and R. A. Leibler. “On Information and Sufficiency.” *The Annals of Mathematical Statistics*, vol. 22, no. 1, 1951, pp. 79-86.