# Top 10 data mining algorithms in plain English

Today, I'm going to explain in plain English the top 10 most influential data mining algorithms as voted on by 3 separate panels in this survey paper.

Once you know what they are, how they work, what they do and where you can find them, my hope is you'll have this blog post as a springboard to learn even more about data mining.

What are we waiting for? Let's get started!

### C4.5 data mining algorithm

C4.5 constructs a classifier in the form of a decision tree. In order to do this, C4.5 is given a set of data representing things that are already classified.
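To make that concrete, here's a minimal Python sketch of the split criterion at the heart of C4.5, information gain. The toy weather data and the `entropy`/`information_gain` helpers are made up for illustration, not C4.5's actual implementation:

```python
import math

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    total = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(rows, labels, feature):
    """Entropy reduction from splitting the data on one feature index."""
    base = entropy(labels)
    splits = {}
    for row, y in zip(rows, labels):
        splits.setdefault(row[feature], []).append(y)
    remainder = sum(len(part) / len(labels) * entropy(part)
                    for part in splits.values())
    return base - remainder

# Toy data: one feature (outlook), label = whether to play outside
rows = [["sunny"], ["sunny"], ["overcast"], ["rain"]]
labels = ["no", "no", "yes", "yes"]
print(information_gain(rows, labels, 0))
```

C4.5 picks the feature with the highest gain (actually a normalized variant called gain ratio), splits on it, and recurses — that's how the tree gets built.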

### k-means data mining algorithm

k-means creates $k$ groups from a set of objects so that the members of a group are more similar to each other than to members of other groups. It’s a popular cluster analysis technique for exploring a dataset.
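The assign-then-recompute loop is short enough to sketch in a few lines of Python. The 1-D points and the simple deterministic initialization are illustrative choices, not how a production k-means would do it:

```python
def kmeans_1d(points, k, iters=10):
    """Tiny 1-D k-means: assign each point to its nearest centroid,
    then recompute each centroid as the mean of its cluster."""
    # Naive init: pick evenly spaced points (real k-means uses random/k-means++)
    centroids = [points[i * len(points) // k] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

centroids, clusters = kmeans_1d([1.0, 1.5, 2.0, 9.0, 9.5, 10.0], k=2)
print(centroids)
```

On this toy data the two centroids settle at the means of the low and high groups.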

### SVM data mining algorithm

Support vector machine (SVM) learns a hyperplane to classify data into 2 classes. At a high level, SVM performs a task similar to C4.5’s, except that SVM doesn’t use decision trees at all.
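Training a real SVM involves quadratic optimization, so as a stand-in, here's a perceptron sketch that finds *a* separating hyperplane — not the maximum-margin one an SVM would find — just to show what "learning a hyperplane" means. The data points are invented for illustration:

```python
def perceptron(points, labels, epochs=20, lr=0.1):
    """Find a separating hyperplane w·x + b = 0 for labels in {-1, +1}.
    (An SVM picks the maximum-margin hyperplane; this finds any separator.)"""
    w = [0.0] * len(points[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(points, labels):
            # If x is on the wrong side of the hyperplane, nudge w and b toward it
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Two toy 2-D clusters, one per class
points = [[1.0, 1.0], [1.0, 2.0], [4.0, 4.0], [5.0, 4.0]]
labels = [-1, -1, +1, +1]
w, b = perceptron(points, labels)
```

New points are then classified by which side of the hyperplane they fall on, i.e. the sign of `w·x + b`.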

### Apriori data mining algorithm

The Apriori algorithm learns association rules and is applied to a database containing a large number of transactions.
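Here's a hedged sketch of the itemset-counting core of Apriori: count 1-itemsets, keep only those meeting the minimum support, and build 2-itemset candidates only from the survivors (that pruning is the "Apriori property"). Confidence enters later, during rule generation, which this sketch omits. The shopping-basket data is made up:

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Find frequent 1-itemsets, then frequent 2-itemsets built from them."""
    n = len(transactions)
    # Pass 1: count individual items
    counts = {}
    for t in transactions:
        for item in t:
            counts[item] = counts.get(item, 0) + 1
    L1 = {item for item, c in counts.items() if c / n >= min_support}
    # Pass 2: candidate pairs come only from frequent items (Apriori property)
    L2 = {}
    for pair in combinations(sorted(L1), 2):
        c = sum(1 for t in transactions if set(pair) <= set(t))
        if c / n >= min_support:
            L2[pair] = c / n
    return L1, L2

transactions = [{"milk", "bread"}, {"milk", "bread", "eggs"},
                {"bread"}, {"milk", "eggs"}]
L1, L2 = frequent_itemsets(transactions, min_support=0.5)
```

A full Apriori keeps going to 3-itemsets and beyond the same way, then derives rules like "milk ⇒ bread" from the frequent itemsets.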

### EM data mining algorithm

In data mining, expectation-maximization (EM) is generally used as a clustering algorithm (like k-means) for knowledge discovery.
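Here's a toy 1-D sketch of the two alternating steps: the E-step softly assigns each point to the two Gaussian components, and the M-step re-estimates each mean from its weighted points. Variances and mixing weights are held fixed to keep the sketch short; a real EM for Gaussian mixtures updates those too:

```python
import math

def em_two_gaussians(data, mu, iters=30, sigma=1.0):
    """EM for a 2-component 1-D Gaussian mixture (means only, for brevity)."""
    for _ in range(iters):
        # E-step: responsibility of component 0 for each point
        resp = []
        for x in data:
            d0 = math.exp(-(x - mu[0]) ** 2 / (2 * sigma ** 2))
            d1 = math.exp(-(x - mu[1]) ** 2 / (2 * sigma ** 2))
            resp.append(d0 / (d0 + d1))
        # M-step: weighted means
        mu = [
            sum(r * x for r, x in zip(resp, data)) / sum(resp),
            sum((1 - r) * x for r, x in zip(resp, data))
            / sum(1 - r for r in resp),
        ]
    return mu

data = [0.0, 0.5, 1.0, 9.0, 9.5, 10.0]
mu = em_two_gaussians(data, mu=[2.0, 8.0])
```

Unlike k-means' hard assignments, each point belongs to every cluster with some probability — that's the "soft clustering" EM gives you.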

### PageRank data mining algorithm

PageRank is a link analysis algorithm designed to determine the relative importance of some object linked within a network of objects.
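The classic power-iteration formulation fits in a few lines: each page repeatedly splits its rank among its out-links, with a damping factor modeling a surfer who sometimes jumps to a random page. The three-page graph below is invented for illustration:

```python
def pagerank(links, damping=0.85, iters=50):
    """Power-iteration PageRank over a dict of page -> list of out-links."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for p, outs in links.items():
            for q in outs:
                new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

# A links to B and C; B links to C; C links back to A
rank = pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]})
```

Here C ends up most important: it collects links from both A and B, while B gets only half of A's vote. (A real implementation also handles dangling pages with no out-links, which this sketch skips.)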

### AdaBoost data mining algorithm

AdaBoost is a boosting algorithm which constructs a classifier. As you probably remember, a classifier takes a bunch of data and attempts to predict or classify which class a new data element belongs to.
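The heart of AdaBoost is its reweighting rule. Here's a sketch of a single round, assuming a weak learner has already told us which examples it got right; the surrounding loop that trains a stump per round and combines their weighted votes is omitted to keep the sketch focused:

```python
import math

def adaboost_round(weights, correct):
    """One AdaBoost round: from the weak learner's mistakes, compute its
    vote weight (alpha) and reweight examples so mistakes matter more."""
    error = sum(w for w, ok in zip(weights, correct) if not ok)
    alpha = 0.5 * math.log((1 - error) / error)   # the learner's "say"
    new = [w * math.exp(-alpha if ok else alpha)
           for w, ok in zip(weights, correct)]
    total = sum(new)                              # renormalize to sum to 1
    return alpha, [w / total for w in new]

weights = [0.25, 0.25, 0.25, 0.25]   # start uniform over 4 examples
alpha, weights = adaboost_round(weights, correct=[True, True, True, False])
```

After one round, the single misclassified example carries half of all the weight, so the next weak learner is forced to focus on it.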

### kNN data mining algorithm

kNN, or k-Nearest Neighbors, is a classification algorithm. However, it differs from the classifiers previously described because it’s a lazy learner.
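Because kNN is lazy, there's no training step at all: the "model" is just the stored data, and classification is a nearest-neighbour vote at query time. A minimal sketch with made-up 2-D points:

```python
from collections import Counter

def knn_classify(train, query, k=3):
    """Label a query point by majority vote of its k nearest neighbours.
    train is a list of ((x, y), label) pairs."""
    by_distance = sorted(
        train,
        key=lambda item: sum((a - b) ** 2 for a, b in zip(item[0], query)),
    )
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

train = [((1.0, 1.0), "red"), ((1.5, 1.2), "red"),
         ((4.0, 4.0), "blue"), ((4.2, 4.5), "blue"), ((3.8, 4.1), "blue")]
print(knn_classify(train, (4.0, 4.2)))  # blue
```

The flip side of skipping training is that every query pays the cost of comparing against the whole dataset, which is why lazy learners classify slowly.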

### Naive Bayes data mining algorithm

Naive Bayes is not a single algorithm, but a family of classification algorithms that share one common assumption: Every feature of the data being classified is independent of all other features given the class.
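Using the banana example the commenters mention below, here's a counting-based sketch: training just tallies class priors and per-class feature counts, and classification multiplies them together under the independence assumption. No Laplace smoothing here, so an unseen feature zeroes out a class — a real implementation would smooth:

```python
from collections import defaultdict

def train_nb(examples):
    """Count class priors and per-class feature counts."""
    priors = defaultdict(int)
    likes = defaultdict(lambda: defaultdict(int))
    for features, label in examples:
        priors[label] += 1
        for f in features:
            likes[label][f] += 1
    return priors, likes

def classify_nb(priors, likes, features):
    """Pick the class maximizing P(class) * prod P(feature | class)."""
    total = sum(priors.values())
    best, best_score = None, -1.0
    for label, count in priors.items():
        score = count / total
        for f in features:
            score *= likes[label][f] / count   # naive independence assumption
        if score > best_score:
            best, best_score = label, score
    return best

examples = [({"long", "yellow"}, "banana"), ({"long", "yellow"}, "banana"),
            ({"round", "red"}, "apple"), ({"round", "yellow"}, "apple")]
priors, likes = train_nb(examples)
print(classify_nb(priors, likes, {"long", "yellow"}))  # banana
```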

### CART data mining algorithm

CART stands for classification and regression trees. It is a decision tree learning technique that outputs either classification or regression trees. Like C4.5, CART is a classifier.
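Where C4.5 scores splits with information gain, CART typically uses Gini impurity and always makes binary splits. Here's a toy sketch of choosing the best threshold on one numeric feature; the data and helper names are illustrative:

```python
def gini(labels):
    """Gini impurity: the chance two random picks disagree on class."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_threshold(values, labels):
    """CART-style binary split: try each threshold and keep the one
    giving the lowest weighted Gini impurity of the two sides."""
    best = (None, float("inf"))
    for t in sorted(set(values)):
        left = [y for v, y in zip(values, labels) if v <= t]
        right = [y for v, y in zip(values, labels) if v > t]
        if not left or not right:
            continue
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if score < best[1]:
            best = (t, score)
    return best

values = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]
labels = ["a", "a", "a", "b", "b", "b"]
print(best_threshold(values, labels))  # (3.0, 0.0)
```

A weighted impurity of 0.0 means the split separates the classes perfectly, so the tree would split at 3.0 and recurse on each side.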

Now that I've shared my thoughts and research around these data mining algorithms, I want to turn it over to you.

- Are you going to give data mining a try?
- Which data mining algorithms have you heard of but weren't on the list?
- Or maybe you have a question about an algorithm?

Let me know what you think by leaving a comment below right now.

Thanks to Yuval Merhav and Oliver Keyes for their suggestions which I've incorporated into the post.

Thanks to Dan Steinberg (yes, the CART expert!) for the suggested updates to the CART section which have now been added.

#### Ray Li

Ray is a software engineer and data enthusiast who has been blogging for over a decade. He loves to learn, teach and grow. You’ll usually find him wrangling data, programming and lifehacking.

1. Thanks, Joe. Definitely appreciate it! 🙂

I owe a lot of it to a few threads from Reddit and Yuval (both are linked in the post above).

1. Really snappy and informative view into data mining algorithms. I clicked on a whole ton of links: always a mark of a resource done right! Kudos.

1. Thanks, Roger. I’m happy you found it snappy and click-worthy. 🙂 Sometimes data mining resources can be a bit on the dry side.

2. Thanks for the excellent compile
This is what I was looking for as a starter.

3. Out of all the numerous websites about data mining algorithms I have gone through, this one is by far the best! Explaining everything in such casual terms really helps beginners like me. The examples were definitely apt and helpful.

Thank you so much! You made my work a lot easier. 🙂

1. I’m excited to hear this helped with your work, Meghana! And really appreciate the kind words. 🙂


5. Hey, great introduction! I would love to see more posts like this in our community; great way to grasp the concept of algorithms before diving into the hard math.

Just one thing, though: On Step 2 in Naive Bayes you repeated P(Long | Banana) twice. The third one should be P(Yellow | Banana).

Thanks again!

1. Hi Anonymous,

Nice catch! I fixed it now, but have no one to attribute the fix to. 🙁

I totally agree about understanding the concepts of the algorithm before the hard math. I’ve always felt using concepts and examples as a platform for understanding makes the math part way easier.

Thanks again,
Ray

6. This is a great resource. I’ve bookmarked it. Thanks for your work. I love using height-zip code to illustrate independence. That will be a go-to for me now. The only thing I can offer in return is a heads-up about the API we just released for ML preprocessing. It’s all about correlating themes in unstructured information streams. Hope it’s useful. Let us know what you think. Thanks again.

7. Hello Ray,
Thanks for a great article.
It looks like there is a typo in step 2 of Naive Bayes. One of the probabilities should be P(Yellow|Banana).
Thanks again!

1. My pleasure, Raghav. Thanks also for letting me know about the typo. It should be corrected now.

8. Hello Raymond,

first of all kudos for your sum up of data mining algos!

I’ve been exploring this for a few weeks now (mainly using scikit learn and nltk in python).

In the past few days I came up with the idea to create a classifier that is able to group products by their title to a corresponding product taxonomy.

For that I crawled a German product marketplace for their category landing pages and created a corpus consisting of a taxonomy tree node in column “a” and a set of snowball-stemmed relevant uni- and bigram keywords (approx. 50 per node) that have been extracted from all products on each category page (this is comma separated in column “b”).

Now I would like to build a classifier from that with the idea in mind, that I could throw stemmed product titles at the classifier and let it return the most probable taxonomy node.

Could you advise which would be the most appropriate one for the given task. I can email you the corpus…

Hope to get some direction… to omit any detours / too much trial and error.

Thanks again for your great article.

Cheers from Cologne Germany

Jens

1. Hi Jens,

I don’t know. 🙂 It sounds like there’s a bunch I could learn from you!

For example:
You just taught me about stemming and the Snowball framework. Honestly, I’m amazed there are tools like Snowball that can create stemming algorithms. Very cool!

I found the StackOverflow.com, stats.stackexchange.com and reddit.com forums invaluable when I was learning, researching and simplifying the algorithms to make them easier to describe.

Sorry I couldn’t be more help, but I’m working to catch up… 🙂

Ray

1. Hi Ray,

I found a good solution in the meantime using a naive bayes approach.

By the way your regular contact form does not work. There is an htaccess authentication popping up upon form submit.

Cheers
Jens

1. Awesome!

Also, thanks for the heads up about the contact form. It should be fixed now. There’s a small issue with the confirmation message (some fields are not displayed), but no more auth pop-up and the message successfully sends.

9. This goes in my bookmarks. Excellent, simple explanation. Loved that you covered SVM. It would be great if you could cover neural networks with various kernels.

1. Definitely appreciate the bookmark, Malhar! Thanks for your suggestion about the neural nets. I’ll definitely be diving into that one very soon.

2. Exactly the same concern, Malhar. I was looking for information on Neural Networks as well.

10. Man, I really wish I had this guide a few years ago! I was trying my hand at unsupervised categorization of email messages. I didn’t know what terms to google, so the only thing I used was LSM (latent semantic mapping). The problem is, when you have thousands of words and tens of thousands of emails, the N^2 matrix gets a little hard to handle, computationally. I ended up giving up on it.

What I had never considered was using a different algorithm to pre-create groups, which would have helped a lot. This was a useful read.

11. Great article! Now, as a public service, how about a decision tree or categorization matrix for selecting the right algorithm?

1. Thanks, David.

It’s a good call about selecting the right algorithm. From all the readings so far, I feel picking the right one is the hardest part.

It’s one of the main reasons I was attracted to the original survey paper despite it being a bit outdated. Might as well dive into the ones the panelists thought were important, and then figure out why they use them.

I certainly have a lot more to learn, and I’m already having some ideas on future posts.

Ray

13. Couldn’t ask for a simpler explanation. A very good collection, and hoping for more posts from you.

14. Hello,

It is a good review of things undergraduates learn, but what about starting with just a single example of an application, such as predicting stock returns? Do you have an example of applying, say, naive Bayes to predicting stock returns? That would be more useful than listing a set of methods one can find in most ML books.

1. Thanks, Sylvio. I appreciate the constructive comments.

Depth and real-life applications are certainly something to improve on in this article series (Yep… I think it deserves to be a series!). Stay tuned… 🙂

There’s no way this could’ve happened without you reading, commenting and sharing. My sincerest thank you! 🙂

16. Echoing all the sentiments above Ray. This is a tremendously useful resource that’s gone straight into my bookmarks. Really appreciate the informal writing style as well, which makes it nice and accessible, and easy to share with colleagues!

1. Thank you, Matt. I’m glad you found the writing style accessible and shareable. Please do share… 🙂

17. Excellent blog post! Very accessible and rather complete (apart from multilayer perceptrons, which I hope you’ll touch on in a follow-up post).
I found it useful that you referred to the NFL theorem and listed characteristics of each algorithm which make them more suited to one type of problem than another (e.g. lazy learners are faster in training but slower classifiers, and why). I also liked that you explained which algorithms are for supervised and unsupervised learning. These are all things to take into account when choosing a classifier. Wish I'd read this 5 years ago!
Thanks!

Thank you for your kind words.

I think I came across the standard perceptron while researching SVM. Definitely thinking about tackling MLPs and more recently all the buzz about deep learning at some point.

Ray

18. What an awesome article! I learned more from this than 20 hours of plowing through SciKit. Well done!

19. Thanks a lot Ray for your article !
I did a clustering library sometime ago, your article encourages me to try expanding it with more algorithms.
regards
david

20. This is a fantastic article and just what I needed as I start attempting to learn all this stuff. I’ll be shooting up the Kaggle rankings in no time (well, from 100,000 to 90,000 perhaps!).

1. Appreciate it, Martin. I’m really happy to hear that it helps to get the ball rolling for you. Your increased Kaggle ranking would be nice icing on the cake! 🙂

21. Excellent overview. You have a gift for translating complex topics into down-to-earth terms. Here is my comment: when using the data mining algorithms in this list (classifiers), I am more concerned about accuracy. We can try and use each one of these, but in the end we are interested in validation after training. Accuracy was only addressed with SVM and AdaBoost.

1. Thank you for your kind words, Yolande.

It’s a good point about the accuracy. I’ll definitely keep this in mind to explore accuracy in an upcoming post.

22. I didn’t quite understand the part about C4.5 pruning.
In the link provided, it says that in order to decide whether to prune a tree or not, it calculates the error rate of both the pruned and unpruned tree and decides which one leads to the lower limit of the confidence interval.
It should work okay for already pruned trees, but how does it start? Usually decision tree algorithms build the tree until it reaches entropy = 0, which means a zero error rate and a zero upper limit for the confidence interval. In this case, such a tree can never be pruned, using that logic…

1. This is a great question, Maksim. It got me thinking a bunch, but unfortunately I don’t have an answer that I’m satisfied with.

My investigation so far indicates that the error rate for the training data is distinct from the estimated error rate for the unseen data. As you pointed out, this is what the confidence interval is meant to bound. Based on the formula in the link, given f=0, I’m also at a loss on how a pruned tree could beat the unpruned tree.

If you’re up for it, CrossValidated or StackOverflow might be an awesome place to get your question answered. You or I could even post a link here for reference.

23. Ray, thanks a lot for this really useful review. Some of the algorithms are
already familiar to me, others are new. So it surely helps to have them all in
one place.
As a practical application I’m interested in a data mining algorithm that can
be used in investment portfolio selection based on historical data, that is,
decide which stocks to invest in and make timely buy/sell orders. Can you
recommend a suitable algorithm?

1. My pleasure, Ilan. Same here, I’ve come across a few of these algorithms before writing this article, and I had to teach myself the unfamiliar ones.

I’m planning to go into more practical applications in an upcoming post. Stay tuned for that one… 🙂

On a side note, you might already be aware of them, but the “random walk hypothesis” and “efficient-market hypothesis” might be of interest to you. They don't answer your question, but they offer an alternate perspective on predicting future returns based on historical data.

24. This is an excellent blog. It is helping me digest what I have studied elsewhere. Thanks a lot.

25. Fantastic post, Ray. Nicely explained. Helped me enhance my understanding. Please keep sharing the knowledge 🙂 It helps.

Regards,
Phaneendra

26. Awesome explanation of some of the oft-used data-mining algorithms.

Are you thinking of doing something similar for some of the other algorithms (Discriminant Analysis, Neural Networks, etc.) as well?

Thanks,
Sanjoy

27. Thanks Ray!! Awesome compilation and explanation. This truly helps me get started with learning and applying data science.

1. My pleasure, Suresh. I’m really happy to hear the post helped you start learning and applying.

28. I’m afraid to be rather boring by having nothing to contribute other than more of the well-deserved praise for the quality of your article: thanks, really a great wrap-up and a very good primer for the subject.
I shared the link to your post on the intranet of my company, and rarely has an article received so many “likes” in so little time.
The only thing I was missing was a bit more visual support. You have an excellent video embedded for SVM. But for many of the other concepts, there are also rather straightforward visual representations possible (e.g. clustering, k-nearest-neighbour).
I found the book “Data Science for Business” (http://www.data-science-for-biz.com/) a VERY good start into the subject (…though I would have preferred to have read your article before, as it really wraps it up so well…). This book offers real inspiration as to how the underlying concepts of the algorithms you explain can be visualized and thus made more intuitively understandable.
Enhancing your article with a bit more visual support would be the cherry on the icing on the cake 😉

1. Hi Ulf,

Really appreciate your kind words and you sharing it with your colleagues. 🙂

That’s a good point about visualizations… especially for visual learners. Like in the case of the SVM video, I found seeing it in action made it so much clearer.

I definitely appreciate the book recommendation. From the sound of it, that book might be a fantastic reference not just for this article but for future articles covering this area.

Thanks again,
Ray

29. Thanks for your wonderful post. I like the way you describe SVM, kNN and Bayes, since your language is so user friendly and easy to understand. Could you also write a blog post on some of the ensembles, like random forest, which is one of the most popular machine learning algorithms and has good predictive power compared to other algorithms?

1. Thanks, Praveen. Those are good ones, and I’ll add them to my growing list of potential algorithms to dive into.

30. Fantastic article. Thanks.

One point:
>> What do the balls, table and stick represent? The balls represent data points, and the red and blue color represent 2 classes. The stick represents the simplest hyperplane which is a line.

The simplest (i.e. 1 dimensional) hyperplane is a point, not a line.

1. Thanks, Tom. Good “point” about the simplest hyperplane. I’ve modified the sentence to read “The stick represents the hyperplane which in this case is a line.”

31. Hi Ray,
All the algorithms are explained in a simple and neat manner. It would be extremely useful for beginners as well as pros if you could come up with a “cheat sheet” explaining the best and worst scenarios for each algorithm (I mean how to choose the best algorithm for a given data set).

Thank you

32. Hi Ray,
Thank you for your effort to explain such algorithms with such simplicity.
Good to start on data science !


1. Yes, even within the context of the 10 data mining algorithms, we are searching.

The first 3 that come to mind are K-means, Apriori and PageRank.

K-means groups similar data together. It’s essentially a way to search through the data and group together data that have similar attributes.

Apriori attempts to search for relationships and patterns among a set of transactions.

Finally, PageRank searches through a network in order to unearth the relative importance of an object in the network.

Hope this helps!

2. However, if you’re looking for a search algorithm that finds specific item(s) that match certain attributes, these 10 data mining algorithms may not be a good fit.

I’ve always had trouble understanding the Naive Bayes and SVM algorithms.

Your article has done a really great job of explaining these two algorithms, and now I have a much better understanding of them.

Thanks a lot! 🙂

36. very nice summary article … question – is the current implementation of Orange (still) using C4.5 as the classification tree algorithm … I cannot find any reference to it in the current documentation

37. THANK YOU!!!!!!! As a budding data scientist, this is really helpful. I appreciate it immensely!!!!!

This is by far the best page about the most used data mining algorithms.
As a data mining student, this was very helpful.

39. Great article, Ray, top level, thank you so much!

This question could be a bit OT: which technique do you feel to suggest for the analysis of biological networks? Classical graph theory measures, functional cartography (by Guimera & Amaral), entropy and clustering are already used with good results. PageRank on undirected networks provides similar results to betweenness centrality, I am looking for innovative approaches to be compared with the mentioned ones.

Thanks again!

1. Thank you, Paolo. Really appreciate it!

From the techniques you’ve already mentioned, it sounds like you’re already deep into the area of biological network analysis. Although I don’t have any new approaches to add (and probably not as familiar with this area as you are), perhaps someone reading this thread could point us in the right direction.

40. Wonderful list and even more wonderful explanations. Question though, you don’t think Random Forests merit a place on that list?

Cheers

1. Thanks, Abdul! Random forests is a great one. However, the authors of the original 2007 paper describe how their analysis arrived at these top 10. If a similar analysis were done today, I’m sure random forest would be a strong contender.

41. I did not read the whole article, but the description of the Apriori algorithm is incorrect.

It is said that there are three steps and that the second step is “Those itemsets that satisfy the support and confidence move onto the next round for 2-itemsets.”

This is incorrect, and it is not how the Apriori algorithm works. The Apriori algorithm does NOT consider the confidence when generating itemsets. It only considers the confidence after finding the itemsets, when it is generating the rules.

In other words, the Apriori algorithm first finds the frequent itemsets by applying the three steps. Then it applies another algorithm for generating the rules from these itemsets. The confidence is only considered by the second algorithm. It is not considered during itemset generation.

42. Sir,
This information is very helpful for students like me. I was searching for an algorithm for my final year project in data mining. Now I can easily select an algorithm to start my work on my final year project. Thanks
