I’ve been talking to a friend who is the President of a local credit union with 7,000 members. The biggest credit unions have over 3 million members, so he’s on the smaller side, but his customers love him. We’ve been talking on and off for a while about how he could benefit from data science, but it always comes down to this: most data science projects just won’t be economic for him.
Yes his customers do business with him over the internet. But the cost of implementing a NoSQL DB and building recommenders or otherwise optimizing the site would be at least four months of consulting plus the upkeep.
Predictive models and even geo-spatial analysis of his trade area would no doubt lead to some valuable insights but once again absolute size is the rub.
So after casting about for a bit, here’s what I proposed: Affinity Analysis.
If we were talking about a business where customers bought many things at the same time (e.g. retail) we’d call this Market Basket Analysis. But since banking customers generally buy one thing at a time we’ll need to do this at the customer level, not the transaction level, so we’ll call this Affinity Analysis. Yes Affinity Analysis and Market Basket Analysis are mathematically the same.
Just to review, Affinity Analysis does its work in three main steps:
1. Evaluate the strength of the relationship between each of your products and every other product you offer.
If this credit union has 20 products (checking, savings, check card, ATM, car loan, etc.) this means there are 380 ordered two-product pairs (20 x 19) to evaluate. For example: what percent of customers with checking accounts also have auto loans?
2. Identify those pairings that have very strong affinity.
What we are most interested in are the pairings that are strongly associated. For example, a customer with a credit card might be found to be two or three times as likely to have an auto loan as a customer selected at random.
3. Highlight customers who have one product of a strongly associated pair but not the other so that these specific customers can be targeted for cross sell and up sell opportunities.
This does not necessarily indicate a cause-and-effect relationship between the products, and there may be perfectly good reasons the customer has not taken the other product. But these can be high-probability opportunities.
Also, you should be aware that some strongly related product pairs will seem intuitively obvious while others may provide new insight. It may be completely obvious that ATM usage and Check Cards are strongly associated since Check Cards are typically provided as a means of ATM access. Not all findings may be actionable but chances are there will be several relationships you had not previously recognized.
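Step 1 above can be sketched in a few lines of Python. The customer IDs and product names below are purely hypothetical, just to illustrate the mechanics:

```python
from itertools import permutations

# Hypothetical customer -> product-holdings data (illustrative only)
customers = {
    "c1": {"checking", "check card", "auto loan"},
    "c2": {"checking", "savings"},
    "c3": {"checking", "auto loan"},
    "c4": {"savings", "cd"},
}

products = sorted(set().union(*customers.values()))

# Step 1: for every ordered pair (A, B), what percent of customers
# holding A also hold B?
pair_pct = {}
for a, b in permutations(products, 2):
    have_a = [prods for prods in customers.values() if a in prods]
    both = [prods for prods in have_a if b in prods]
    if have_a:
        pair_pct[(a, b)] = len(both) / len(have_a)

# Share of checking customers who also have an auto loan
print(round(pair_pct[("checking", "auto loan")], 3))  # → 0.667
```

With 20 products this loop evaluates all 380 ordered pairs; steps 2 and 3 are then just filtering this table for strong associations and pulling the matching customer lists.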
Under the Hood: Affinity Analysis
An association rule is a statement of the form:
(item set A) => (item set B)
For example, A might be ‘auto loan’ and B might be ‘HELOC’, which should be read as: of all the customers who have auto loans, how strong is the association with also having a HELOC? Or: does buying item A (auto loan) imply buying item B (HELOC)?
The goal of the analysis is to determine the strength of all the association rules among a set of items. The value of the generated rules is gauged by confidence, support, and lift.
Support: The support for the rule A => B is the probability that the two item sets occur together (or the probability that a customer has both A and B).
(Customers that have both A and B) / (All Customers)
Confidence: The confidence of an association rule A => B is the conditional probability of customers who have item set B given that they also have item set A (the probability that a customer has B given that the customer has A).
(Customers that have both A and B) / (Customers that have A)
Expected Confidence: The expected confidence of A => B is the probability that a customer has B.
(Customers that have B) / (All Customers)
Lift: The lift of the rule A => B is the confidence of the rule divided by the expected confidence. It measures how much more likely a customer is to have B given A than would be the case if the two item sets were independent.
(Confidence of A => B) / (Expected Confidence of A => B)
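These four measures translate directly into code. A minimal sketch in Python, working from raw counts (the function names are my own, not a standard library API):

```python
def support(n_both, n_total):
    """Probability a customer has both A and B."""
    return n_both / n_total

def confidence(n_both, n_a):
    """Probability a customer has B, given that they have A."""
    return n_both / n_a

def expected_confidence(n_b, n_total):
    """Probability a customer has B at all."""
    return n_b / n_total

def lift(n_both, n_a, n_b, n_total):
    """Confidence divided by expected confidence."""
    return confidence(n_both, n_a) / expected_confidence(n_b, n_total)
```

A lift of 1.0 means knowing a customer has A tells you nothing about B; values above 1.0 mean the pairing occurs more often than chance.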
Consider a bank with 10,000 customers. We conduct an Affinity Analysis of their 20 products. Among the 380 product-pair comparisons we see the following for Product A and Product B (A => B): 4,500 customers have A, 4,250 have B, and 4,000 have both.
Support: (4,000 / 10,000) = .400
Confidence: (4,000 / 4,500) = .889
Expected Confidence: (4,250 / 10,000) = .425
Lift: (.889 / .425) = 2.092
This is very interesting because it means that a customer having A is twice as likely to have B as a customer chosen at random.
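Plugging the counts into Python confirms the arithmetic (note that a count of 4,500 customers holding A is what the confidence figure of .889 implies):

```python
n_total = 10_000  # all customers
n_a     = 4_500   # customers with Product A (implied by confidence = .889)
n_b     = 4_250   # customers with Product B
n_both  = 4_000   # customers with both A and B

support             = n_both / n_total
confidence          = n_both / n_a
expected_confidence = n_b / n_total
lift                = confidence / expected_confidence

print(round(support, 3), round(confidence, 3),
      round(expected_confidence, 3), round(lift, 3))
# → 0.4 0.889 0.425 2.092
```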
Interpreting and Acting on Affinity Analysis Results
Lift is our primary focus.
Lift values greater than 1 indicate positive correlation; values equal to 1 indicate zero correlation; and values less than 1 indicate negative correlation. If Lift=2 for the rule A => B, then a customer having A is twice as likely to have B compared to a customer chosen at random.
We will look first at product pairs that have a lift greater than 2 since they are twice as likely to occur together (lift = 3 then 3X as likely). We would not be interested in pairs with a lift of 1 or less since 1 is the same as a random pick and less than 1 means the products are less likely than random to occur together.
In our small-bank example with 20 products it is likely that fewer than 20 pairs will have a lift of 2 or greater, so we might choose to use a lift of 1.5 or greater, but probably not lower.
The marketing team would then identify all of the customers who have A but NOT B and design programs to encourage these specific customers to try B. This is very highly targeted and therefore very cost effective.
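That target list is a simple filter over the holdings data. A sketch, again with hypothetical customer IDs and an assumed strong rule (auto loan => HELOC):

```python
# Hypothetical holdings; the rule and customer IDs are illustrative only
holdings = {
    "c1": {"auto loan", "HELOC"},
    "c2": {"auto loan"},
    "c3": {"checking"},
    "c4": {"auto loan", "checking"},
}

a, b = "auto loan", "HELOC"

# Customers who hold A but not B: the cross-sell target list for B
targets = sorted(cid for cid, prods in holdings.items()
                 if a in prods and b not in prods)
print(targets)  # → ['c2', 'c4']
```

Because the campaign goes only to customers already known to be better-than-random prospects, the marketing spend per conversion is far lower than for a blanket mailing.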
There are many other ways to interpret Affinity Analysis. For example, identifying ‘orphan’ products, those with very weak relationships to all other products, can suggest an opportunity to market to those customers.
Affinity Analysis can also be used to determine the value of any promotional offering or discount, or to evaluate product placement on web pages. The sequence in which products are acquired can also be added to the considerations to create even more insightful pairings.
Not a bad investment for a small customer for the cost of just a few days of data science time.
September 8, 2015
Bill Vorhies, President & Chief Data Scientist – Data-Magnum - © 2015, all rights reserved.
About the author: Bill Vorhies is President & Chief Data Scientist at Data-Magnum and has practiced as a data scientist and commercial predictive modeler since 2001. Bill is also Editorial Director for Data Science Central. He can be reached at: