Interesting takeaways from ‘Data Science For Business’

I have recently been reading Data Science for Business by Foster Provost and Tom Fawcett. I picked it up on Amazon some time back, while trying to find good books on the applications of machine learning in industry. And I hit the bull’s-eye with this one!

Data Science for Business does an awesome job of getting the reader acquainted with all the basic (as well as some niche) applications of ML frameworks in a business context. What this book does not do is give you the rigor behind the algorithms discussed in it. The focus is on the when and why, not the how. For example, you will find detailed descriptions of the kinds of problems Association Mining solves, from Market Basket Analysis to mining Facebook ‘like’ relationships, but you won’t find details of Apriori or any other actual AM technique. Why I love this book is because it does what most other ML texts don’t: give the reader an intuitive understanding of when to apply a certain technique, and why. For a newbie, reading this book side by side with an established text like The Elements of Statistical Learning would perhaps be the best way to get acquainted with Machine Learning and its applications.

There were a lot of ‘Aha!’ moments for me while reading this book, especially along the lines of ‘I never thought it could be used this way!’ or ‘That’s a good rule of thumb to follow while applying this!’. To give you a brief idea, here are three interesting things I learned (or revisited) while reading this book:

1. CRISP Data Mining

Okay, anyone who has done a formal course in Data Mining knows about the CRoss Industry Standard Process for Data Mining, and for good reason. It’s one of the most fundamental ways to go about mining any kind of data. A diagrammatic representation is shown below:

[Figure: the CRISP-DM process diagram]

However, while most courses teach their students about this framework and leave it at that, this book explains many other applications of statistical learning techniques in the context of CRISP data mining. For example, in an exploratory data mining technique such as clustering, it’s the Evaluation part of the process that is crucial, since you need to extract the maximum ‘information’ you can from the results that a standard clustering technique provides. More on this in a bit.

The book uses the Expected Value framework to augment the way in which CRISP is applied. It stresses that you need to formalize your Business Understanding of the problem by expressing it mathematically, as a value to estimate or a function to optimize. At this stage, you would rather think of your solution as a black box, not focusing on the algorithms and implementations underneath. You then need to understand your data in terms of the available attributes, the sparseness of the data, and so on (Data Understanding). Only once you have the mathematical formulation and familiarity with the data are you in a position to combine the two in a synergistic mix of Data Preparation and Modelling: in essence, formulating a solution that performs best with the data you have.
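To make this concrete, here is a minimal sketch of how a targeted-marketing decision might be expressed under the Expected Value framework. This is my own illustration, not code from the book, and every number in it is made up:

```python
# A toy Expected Value formulation for targeted marketing.
# All figures below are invented for illustration.

def expected_value(p_response, value_response, value_no_response):
    """EV of targeting one customer:
    EV = p(R|x) * v_R + (1 - p(R|x)) * v_NR
    """
    return p_response * value_response + (1 - p_response) * value_no_response

# Suppose a responder brings in $100 of profit, a mailing costs $1,
# and our (black-box) model estimates a 2% response probability.
p = 0.02
ev = expected_value(p, value_response=100 - 1, value_no_response=-1)

# Target the customer only if the expected value is positive.
print(f"EV = ${ev:.2f} -> {'target' if ev > 0 else 'skip'}")
```

The point is that the modelling step only has to produce the probability estimate; everything else about the business problem lives in this little formula, which you can write down before choosing any algorithm at all.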

2. Using Decision Trees to Understand Clusters

Why understand the output of clustering?

Since clustering, as I mentioned before, is an exploratory process, you don’t exactly know what you expect to find going in. You may have a rough idea of the number of clusters (or not), but not of what each of them represents, at least in the general case.

Let’s consider a simple example. You have decided to use DBSCAN to cluster all of your customers into an unknown number of groups, where customers in the same cluster have similar properties based on some criteria. You hope to use the output of this clustering to classify a new customer into one of these groups, and deal with them better given the characteristics that set them apart from the rest. You can easily use a supervised learning process to classify a new customer given the clustering output, but what you don’t have is an intuitive understanding of what each cluster represents. Usually, one of the following two methods is used to ‘understand’ a given cluster:

i. Pick out the member of the cluster that has the highest value for a certain property. For example, if you are clustering companies, you could just pick the one with the highest turnover, which would roughly correspond to the most ‘well-known’ company.

ii. Pick out the member that is closest to the centroid.
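Method (ii) is simple enough to state in a few lines of NumPy. Here is a quick sketch of my own, assuming a feature matrix X and an array of cluster labels (both names are mine, not from the book):

```python
import numpy as np

def representative_member(X, labels, k):
    """Method (ii): return the member of cluster k
    that lies closest to the cluster's mean point."""
    members = X[labels == k]
    centroid = members.mean(axis=0)
    distances = np.linalg.norm(members - centroid, axis=1)
    return members[distances.argmin()]
```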

However, neither of these is applicable in all contexts and problems, especially if the cluster members don’t mean anything to you individually, or if you need to focus on the trends within a cluster, which cannot be summarized by a single member anyway.

How do you use a Decision Tree here?

A Decision Tree is usually a pretty under-estimated algorithm when it comes to supervised learning. The biggest reason for this is its innate simplicity, which usually results in people going for more ‘sophisticated’ techniques like SVMs. However, this is exactly the property that makes a decision tree the right choice in this scenario.

To understand a cluster K, what you essentially do is this:

i. Take all members of cluster K on one side, and the members of all other clusters on the other.

ii. ‘Learn’ a decision tree to classify any given data point as a member of K or not a member of K, essentially a one-vs-all classification.

iii. Decompose the decision tree into the corresponding set of rules.

Voila! What you now have is a set of rules that help you distinguish the properties of cluster K from the rest. By virtue of decision-tree learning algorithms such as C4.5, your rules will be based on those attributes that play the biggest role in characterising members of cluster K. Elegant, isn’t it? The book explains this using the famous Whiskies dataset.
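Here is a rough end-to-end sketch of the whole procedure using scikit-learn on synthetic data (my own illustration, not the book’s Whiskies example; note also that scikit-learn’s trees use CART rather than C4.5, but the idea is the same):

```python
from sklearn.datasets import make_blobs
from sklearn.cluster import DBSCAN
from sklearn.tree import DecisionTreeClassifier, export_text

# Step 0: synthetic 'customers' with two features, clustered by DBSCAN.
X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.6, random_state=42)
labels = DBSCAN(eps=0.8, min_samples=5).fit_predict(X)

# Step i: cluster K on one side, everything else on the other.
K = 0
y = (labels == K).astype(int)

# Step ii: learn a one-vs-all decision tree for cluster K.
# A shallow tree keeps the extracted rules human-readable.
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

# Step iii: decompose the tree into its corresponding rules.
print(export_text(tree, feature_names=["feature_1", "feature_2"]))
```

Each path from the root down to a ‘member of K’ leaf reads directly as a rule describing what sets cluster K apart from everything else.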

3. Virtual Items in Association Mining

This isn’t a big technique of any kind, but it’s a nifty thing to remember all the same. Almost always, when talking of Association Mining, we think of conventional ‘items’ as they would appear in any Market Basket Analysis. However, one way to expand the mining process is to include Virtual Items in the ‘baskets’. For example, suppose you run a big chain of supermarkets across the country. Obviously, every time a customer buys something, you include all the purchased items in a ‘transaction’. What you could also include are some other attributes of the transaction, such as the location, the gender of the person buying, the time of day, and so on. So a given transaction might look like (Mumbai, Male, Afternoon, Bread, Butter, Milk). Here, the first three ‘items’ aren’t really items at all! Hence, they are called ‘virtual items’. Nonetheless, if you feed these transactions into an Association Mining framework, you could end up finding relationships between conventional goods and properties such as gender and location!
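As a quick sketch of what this might look like in code, here is a toy illustration of my own using the mlxtend library (my choice of tool, not the book’s; the transactions are made up). The virtual items are simply mixed into each basket before mining:

```python
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

# Each transaction mixes real items with 'virtual items'
# (location, gender, time of day). Data invented for illustration.
transactions = [
    ["Mumbai", "Male", "Afternoon", "Bread", "Butter", "Milk"],
    ["Mumbai", "Female", "Morning", "Bread", "Milk"],
    ["Delhi", "Male", "Afternoon", "Butter", "Milk"],
    ["Mumbai", "Male", "Afternoon", "Bread", "Butter"],
]

# One-hot encode the baskets, then mine frequent itemsets and rules.
te = TransactionEncoder()
baskets = pd.DataFrame(te.fit(transactions).transform(transactions),
                       columns=te.columns_)
itemsets = apriori(baskets, min_support=0.5, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.7)

# Rules can now link virtual items to real goods,
# e.g. {Mumbai, Male} -> {Bread}.
print(rules[["antecedents", "consequents", "confidence"]])
```

Nothing about the mining algorithm changes; the virtual items are treated exactly like any other item, which is what makes this trick so cheap to apply.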


These are just three of the many things to learn from this book. I would definitely recommend it, especially if you want to supplement your knowledge of various ML algorithms with the intuition of how and where to apply them. Cheers!
