Saturday, July 22, 2017

Interpreting cluster analysis results

Interpretation of the clustering structure and of the clusters is an essential step in unsupervised learning. Identifying the characteristics that underlie the differentiation between the groups helps to ensure their credibility.

In this course material, we explore univariate and multivariate techniques. The former have the merit of being easy to compute and to read, but they do not take into account the joint effect of the variables. The latter are a priori more powerful, but they require additional expertise to fully understand the results.
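
As an illustration of the univariate side, here is a minimal Python sketch of the test value criterion, assuming hypothetical x (one variable) and cluster label arrays; it compares the cluster mean to the overall mean, standardized by the standard error of a mean computed under sampling without replacement:

import numpy as np

def test_value(x, clusters, g):
    # test value of variable x for cluster g (hedged sketch)
    n = len(x)
    xg = x[clusters == g]
    ng = len(xg)
    overall_mean = x.mean()
    overall_var = x.var()  # population variance (ddof=0)
    # standard error of the mean of a random sample of size ng
    # drawn without replacement from the n observations
    se = np.sqrt((n - ng) / (n - 1) * overall_var / ng)
    return (xg.mean() - overall_mean) / se

# hypothetical usage: values beyond roughly +/- 2 flag variables
# that characterize the cluster
# x = np.array([...]); clusters = np.array([...])
# print(test_value(x, clusters, g=0))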

Keywords: cluster analysis, clustering, unsupervised learning, percentage of variance explained, V-Test, test value, distance between centroids, correlation ratio
Slides: Characterizing the clusters
References:
Tanagra Tutorial, "Understanding the 'test value' criterion", May 2009.
Tanagra Tutorial, "Hierarchical agglomerative clustering", June 2017.
Tanagra Tutorial, "K-Means clustering", June 2017.

Friday, July 14, 2017

Kohonen map with R

This tutorial complements the course material concerning the Kohonen map or self-organizing map (June 2017). First, we try to highlight two important aspects of the approach: its ability to summarize the available information in a two-dimensional space, and its combination with a cluster analysis method, which associates the topological representation (and the reading one can make of it) with the interpretation of the groups obtained from the clustering algorithm. We use the R software and the "kohonen" package (Wehrens and Buydens, 2007). Then, we carry out a comparative study of the quality of the partitioning against the one obtained with the K-Means algorithm. We use an external evaluation, i.e. we compare the clustering results with pre-established classes. This procedure is often used in research to evaluate the performance of clustering methods. It makes sense when applied to artificial data where the true class memberships are known. We use the K-Means and Kohonen-Som components of Tanagra.

This tutorial is based on Shane Lynn's article on the R-bloggers website (Lynn, 2014). I completed it by introducing the intermediate calculations needed to better understand the meaning of the charts, and by conducting the comparative study.
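
As a minimal sketch of the external evaluation step (in Python rather than R, with hypothetical toy label arrays standing in for the waveform classes), one can cross-tabulate each partition against the pre-established classes and compute a label-invariant agreement index such as the adjusted Rand index:

import numpy as np
import pandas as pd
from sklearn.metrics import adjusted_rand_score

# hypothetical arrays: true_class from the data generator,
# som_cluster and km_cluster produced by the two algorithms
true_class = np.array([0, 0, 1, 1, 2, 2])
som_cluster = np.array([0, 0, 1, 2, 2, 2])
km_cluster = np.array([1, 1, 0, 0, 2, 2])

# contingency table: clusters vs. pre-established classes
print(pd.crosstab(som_cluster, true_class))

# label-invariant agreement measure for each partitioning
print(adjusted_rand_score(true_class, som_cluster))
print(adjusted_rand_score(true_class, km_cluster))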

Keywords: som, self organizing map, kohonen network, data visualization, dimensionality reduction, cluster analysis, clustering, hierarchical agglomerative clustering, hac, two-step clustering, R software, kohonen package, k-means, external evaluation, heatmaps
Components: KOHONEN-SOM
Tutorial: Kohonen map with R
Program and dataset: waveform - som
References:
Tanagra tutorial, "Self-organizing map (slides)", June 2017.
Tanagra Tutorial, "Self-organizing map (with Tanagra)", July 2009.

Saturday, July 8, 2017

Cluster analysis with Python - HAC and K-Means

This tutorial describes a cluster analysis process. We deal with a set of cheeses (29 instances) characterized by their nutritional properties (9 variables). The aim is to determine groups of homogeneous cheeses with respect to their properties. We inspect and test two approaches using two Python procedures: the hierarchical agglomerative clustering algorithm (SciPy package) and the K-Means algorithm (scikit-learn package).

One of the interests of this tutorial is that we previously conducted the same analysis with R, following the same steps. We can thus compare the commands used and the results provided by the available procedures. We observe that these tools behave comparably and are interchangeable in this context.
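
For reference, the core of the Python analysis fits in a few commands; this is a minimal sketch assuming the dataset ships as a tab-separated "fromage.txt" file with the cheese name in the first column:

import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# hypothetical data file: one row per cheese, 9 numeric columns
X = pd.read_table("fromage.txt", index_col=0)
Z = StandardScaler().fit_transform(X)

# HAC with Ward's criterion (SciPy), cut into 4 groups
merge = linkage(Z, method="ward")
groups_hac = fcluster(merge, t=4, criterion="maxclust")

# K-Means (scikit-learn) on the same standardized data
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(Z)

# compare the two partitions
print(pd.crosstab(groups_hac, km.labels_))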

Keywords: python, scipy, scikit-learn, cluster analysis, clustering, hac, hierarchical agglomerative clustering, k-means, principal component analysis, PCA
Tutorial: hac and k-means with Python
Dataset and source code: hac_kmeans_with_python.zip
References:
Marie Chavent, Teaching Page, University of Bordeaux.
Tanagra Tutorial, "Cluster analysis with R - HAC and K-Means", July 2017.

Thursday, July 6, 2017

Cluster analysis with R - HAC and K-Means

This tutorial describes a cluster analysis process. We deal with a set of cheeses (29 instances) characterized by their nutritional properties (9 variables). The aim is to determine groups of homogeneous cheeses with respect to their properties.

We inspect and test two approaches using two procedures of the R software: the hierarchical agglomerative clustering algorithm (hclust) and the K-Means algorithm (kmeans).

The data file "fromage.txt" comes from the teaching page of Marie Chavent from the University of Bordeaux. The excellent course materials and corrected exercises (commented R code) available on its website will complete this tutorial, which is intended firstly as a simple guide for the introduction of the R software in the context of the cluster analysis.

Keywords: R software, cluster analysis, clustering, hac, hierarchical agglomerative clustering, k-means, fpc package, principal component analysis, PCA
Components: hclust, kmeans, kmeansruns
Tutorial: hac and k-means with R
Dataset and source code: hac_kmeans_with_r.zip
References:
Marie Chavent, Teaching Page, University of Bordeaux.

Monday, July 3, 2017

k-medoids clustering (slides)

K-medoids is a partitioning-based clustering algorithm. It is related to k-means but, instead of using the centroid as the reference point for a cluster, it uses the medoid, which is the individual nearest to all the other points within its cluster. One of the main consequences of this approach is that the resulting partition is less sensitive to outliers.

This course material describes the algorithm. Then we focus on the silhouette tool, which can be used to determine the right number of clusters, a recurring open problem in cluster analysis.
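
To make the two notions concrete, here is a minimal Python sketch under stated assumptions (toy random data; scikit-learn has no built-in k-medoids, so the silhouette part uses k-means labels purely for illustration): it locates the medoid of a cluster from pairwise distances and compares candidate numbers of clusters by the average silhouette width.

import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 4))  # toy dataset

def medoid_index(points):
    # the medoid minimizes the sum of distances to all other points
    D = cdist(points, points)
    return int(np.argmin(D.sum(axis=1)))

# average silhouette width for several candidate values of k
for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, silhouette_score(X, labels))

# medoid of one of the clusters from the last partition
print(medoid_index(X[labels == 0]))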

Keywords: cluster analysis, clustering, unsupervised learning, partitioning method, relocation approach, medoid, PAM, partitioning around medoids, CLARA, clustering large applications, silhouette, silhouette plot
Slides: Cluster analysis - k-medoids algorithm
References:
Wikipedia, "k-medoids".

Tuesday, June 20, 2017

k-means clustering (slides)

K-Means clustering is a popular cluster analysis method. It is simple, and its implementation does not require keeping the whole dataset in memory, thus making it possible to process very large databases.

This course material describes the algorithm. We focus on the different extensions such as the processing of qualitative or mixed variables, fuzzy c-means, and clustering of variables (clustering around latent variables). We note that the k-means method is relatively adaptable and can be applied to a wide range of problems.
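
The core of the standard algorithm (Lloyd's iterations) is short enough to sketch from scratch; a minimal numpy version on toy data, ignoring refinements such as smart initialization:

import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    # minimal Lloyd's algorithm: assign points, then relocate centroids
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # assign each point to its nearest centroid
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # relocate each centroid to the mean of its cluster
        # (keep the old centroid if a cluster becomes empty)
        new_centers = np.array([X[labels == j].mean(axis=0)
                                if np.any(labels == j) else centers[j]
                                for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

X = np.random.default_rng(1).normal(size=(100, 2))  # toy data
labels, centers = kmeans(X, k=3)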

Keywords: cluster analysis, clustering, unsupervised learning, partition method, relocation
Slides: K-Means clustering
References:
Wikipedia, "k-means clustering".
Wikipedia, "Fuzzy clustering".

Tuesday, June 13, 2017

Self-Organizing Map (slides)

A self-organizing map (SOM) or Kohonen network or Kohonen map is a type of artificial neural network that is trained using unsupervised learning to produce a low-dimensional (typically two-dimensional), discretized representation of the input space of the training samples, called a map, which preserves the topological properties of the input space (Wikipedia).

SOM is useful for dimensionality reduction, data visualization and cluster analysis. In this course material, we outline the mechanisms underlying the approach. We focus on its practical aspects (e.g. the various visualization possibilities, prediction for a new instance, extension of SOM to the clustering task, etc.).

Illustrative examples in R (kohonen package) and Tanagra are briefly presented.
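
For readers who want to see the mechanism itself, here is a minimal numpy sketch of the online training rule (toy data, square grid, simple linear decay of the learning rate and radius; a real analysis would use the kohonen package or Tanagra):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))  # toy inputs
rows, cols = 6, 6
W = rng.normal(size=(rows, cols, X.shape[1]))  # codebook vectors
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                            indexing="ij"), axis=-1)

n_iter, lr0, sigma0 = 2000, 0.5, 2.0
for t in range(n_iter):
    x = X[rng.integers(len(X))]
    # best matching unit: node whose codebook vector is closest to x
    d = ((W - x) ** 2).sum(axis=2)
    bmu = np.unravel_index(d.argmin(), d.shape)
    # learning rate and neighborhood radius shrink over time
    frac = 1.0 - t / n_iter
    lr, sigma = lr0 * frac, sigma0 * frac + 0.1
    # Gaussian neighborhood on the grid, centered on the BMU
    dist2 = ((grid - np.array(bmu)) ** 2).sum(axis=-1)
    h = np.exp(-dist2 / (2 * sigma ** 2))
    # move every codebook vector toward x, weighted by h
    W += lr * h[:, :, None] * (x - W)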

Keywords: som, self organizing map, kohonen network, data visualization, dimensionality reduction, cluster analysis, clustering, hierarchical agglomerative clustering, hac, two-step clustering, R software, kohonen package
Components: KOHONEN-SOM
Slides: Kohonen SOM
References:
Wikipedia, "Self-organizing map".

Saturday, June 10, 2017

Hierarchical agglomerative clustering (slides)

In data mining, cluster analysis or clustering is the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some sense or another) to each other than to those in other groups (clusters) (Wikipedia).

In this course material, we focus on hierarchical agglomerative clustering (HAC). Starting from the individuals, each of which initially forms its own group, the algorithm merges groups in a bottom-up fashion until all the instances are gathered in a single group. The process is materialized by a dendrogram, which allows us to evaluate the nature of the solution and helps to determine the appropriate number of clusters.

Examples of analysis under R, Python and Tanagra are described.
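
To illustrate the bottom-up merging and the usual way of cutting the dendrogram, here is a small SciPy sketch on toy data; each row of the linkage matrix records one merge, and a large jump between consecutive merge heights suggests where to cut:

import numpy as np
from scipy.cluster.hierarchy import linkage

X = np.random.default_rng(0).normal(size=(10, 2))  # toy data
Z = linkage(X, method="ward")
# each row of Z describes one merge: ids of the two merged groups,
# merge height, and size of the new group
print(Z)
# gaps between consecutive merge heights hint at the number
# of clusters to retain
heights = Z[:, 2]
print(np.diff(heights))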

Keywords: hac, cluster analysis, clustering, unsupervised learning, tandem analysis, two-step clustering, R software, hclust, python, scipy package
Components: HAC, K-MEANS
Slides: cah.pdf
References:
Wikipedia, "Cluster analysis".
Wikipedia, "Hierarchical clustering".

Saturday, May 20, 2017

Support vector machine (slides)

In machine learning, support vector machines (SVM) are supervised learning models with associated learning algorithms that analyze data used for classification and regression analysis (Wikipedia).

These slides present the background of the approach in the classification context. We address the binary classification problem, the soft-margin principle, the construction of nonlinear classifiers by means of kernel functions, the feature selection process, and multiclass SVM.

The presentation is complemented by implementations of the approach with the open source tools Python (scikit-learn), R (e1071) and Tanagra (SVM and C-SVC components).
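
As a quick taste of the Python side, a minimal scikit-learn sketch (toy data) of a soft-margin SVM with an RBF kernel:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# toy binary classification problem
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# C controls the softness of the margin; the RBF kernel
# yields a nonlinear decision boundary
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)
print(clf.score(X_test, y_test))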

Keywords: svm, e1071 package, R software, Python, scikit-learn package, sklearn
Components: SVM, C-SVC
Slides: Support Vector Machine (SVM)
Dataset: svm exemples.xlsx
References:
Abe S., "Support Vector Machines for Pattern Classification", Springer, 2010.

Thursday, January 5, 2017

Tanagra website statistics for 2016

The year 2016 ends, 2017 begins. I wish you all a very happy year 2017.

A small report on the website statistics for 2016. All the sites (Tanagra, course materials, e-books, tutorials) were visited 264,045 times this year, i.e. 721 visits per day.

Since February 1st, 2008, the date on which I installed the Google Analytics counter, there have been 2,111,078 visits (649 visits per day).

Who are you? The majority of visits come from France and the Maghreb. A large share also comes from other French-speaking countries, notably because some pages are available only in French. Among the non-francophone countries, we observe mainly the United States, India, the UK, Brazil, Germany, ...

The pages containing course materials about Data Mining and R Programming are the most popular ones. This is not really surprising.

Happy New Year 2017 to all.

Ricco.
Slideshow: Website statistics for 2016

Saturday, September 17, 2016

Text mining - Document classification

The statistical approach to text mining consists of transforming a collection of text documents into a matrix of numeric values on which we can apply machine learning algorithms.

The "unstructured document" designation is often used when one talks about text documents. This does not mean that he does not have a certain organization (titles, chapters, paragraphs, questions and answers, etc.). It shows first of all that we cannot express directly the collection in the form of a data table that is usually handled in data mining. To obtain this kind of data representation, a preprocessing phase is needed, then we extract relevant features to define the data table. These steps can influence heavily the relevance of the results.

In this tutorial, I take up an exercise that I run with my students in my text mining course at the university. We perform the whole analysis under R with dedicated text mining packages such as "XML" and "tm". The goal here is to perform exactly the same study using other tools, namely Knime 2.9.1 and RapidMiner 5.3 (note: these were the versions available when I wrote the French version of this tutorial in April 2014). We will see that these tools provide specialized libraries which enable us to perform a statistical text mining process efficiently.
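
The pivotal preprocessing step, turning the corpus into a document-term matrix, can also be sketched in a few lines of Python (recent scikit-learn, toy corpus; stemming, which the R "tm" pipeline performs, would require an extra library such as NLTK):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

corpus = ["grain prices rise", "wheat grain exports fall",
          "stocks rally on earnings"]  # toy documents
labels = ["grain", "grain", "finance"]  # toy categories

# document-term matrix: one row per document, one column per term,
# with English stopwords removed
vec = CountVectorizer(stop_words="english")
dtm = vec.fit_transform(corpus)
print(vec.get_feature_names_out())

# a linear SVM is a common choice for text categorization
clf = LinearSVC().fit(dtm, labels)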

Keywords: text mining, document classification, text categorization, decision tree, j48, linear svm, reuters collection, XML format, stemming, stopwords, document-term matrix
Tutorial: en_Tanagra_Text_Mining.pdf
Dataset: text_mining_tutorial.zip
References:
Wikipedia, "Document classification".
S. Weiss, N. Indurkhya, T. Zhang, "Fundamentals of Predictive Text Mining", Springer, 2010.

Saturday, June 25, 2016

Image classification with Knime

The aim of image mining is to extract valuable knowledge from image data. In the context of supervised image classification, we want to automatically assign a label to an image based on its visual content. The whole process is identical to the standard data mining process: we learn a classifier from a set of labeled images, then we apply the classifier to a new image in order to predict its class membership. The particularity is that we must extract a vector of numerical features from each image before launching the machine learning algorithm, and before applying the classifier in the deployment phase.

We deal with an image classification task in this tutorial. The goal is to automatically detect the images which contain a car. The main result is that, even though I have only basic knowledge of image processing, I was able to carry out the analysis with an ease that is symptomatic of the usability of Knime in this context.
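
Outside Knime, the same two-stage pipeline can be sketched in Python (hypothetical image arrays, with a deliberately crude gray-level histogram as the feature vector):

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def features(img, bins=16):
    # crude feature vector: gray-level histogram of the image
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    return hist / hist.sum()

# hypothetical data: 100 grayscale images, label 1 = contains a car
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(100, 40, 100))
y = rng.integers(0, 2, size=100)

X = np.array([features(img) for img in images])
clf = RandomForestClassifier(random_state=0).fit(X, y)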

Keywords: image mining, image classification, image processing, feature extraction, decision tree, random forest, knime
Tutorial: en_Tanagra_Image_Mining_Knime.pdf
Dataset and program (Knime archive): image mining tutorial
References:
Knime Image Processing, https://tech.knime.org/community/image-processing
S. Agarwal, A. Awan, D. Roth, "UIUC Image Database for Car Detection"; https://cogcomp.cs.illinois.edu/Data/Car/

Sunday, June 19, 2016

Gradient boosting (slides)

The "gradient boosting" is an ensemble method that generalizes boosting by providing the opportunity of use other loss functions ("standard" boosting uses implicitly an exponential loss function).

These slides show the ins and outs of the method. Gradient boosting for regression is detailed first; the classification problem is presented thereafter.

The solutions implemented in the packages for R and Python are studied.
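
For squared loss, the mechanism reduces to repeatedly fitting a small tree to the current residuals, which are exactly the negative gradients; a minimal sketch with scikit-learn trees on toy regression data:

import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)

# gradient boosting with squared loss: the negative gradient
# is simply the residual y - F(x)
F = np.full(200, y.mean())  # initial constant model
learning_rate, trees = 0.1, []
for _ in range(100):
    residuals = y - F
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residuals)
    F += learning_rate * tree.predict(X)
    trees.append(tree)

def predict(X_new):
    # sum the shrunken contributions of all the trees
    return y.mean() + learning_rate * sum(t.predict(X_new) for t in trees)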

Keywords: boosting, regression tree, package gbm, package mboost, package xgboost, R, Python, package scikit-learn, sklearn
Slides: Gradient Boosting
References:
R. Rakotomalala, "Bagging, Random Forest, Boosting", December 2015.
Natekin A., Knoll A., "Gradient boosting machines, a tutorial", in Frontiers in Neurorobotics, December 2013. 

Monday, June 13, 2016

Tanagra and Sipina add-ins for Excel 2016

The add-ins "tanagra.xla" and "sipina.xla" have contributed greatly to the popularity of the Tanagra and Sipina software applications. They add menus dedicated to data mining to Excel, implementing a simple bridge between the data in the spreadsheet and Tanagra or Sipina.

I developed and tested the latest versions of the add-ins for Excel 2007 and 2010. I recently had access to Excel 2016 and checked the add-ins. The conclusion is that the tools work without a hitch.

Keywords: data importation, excel data file, add-in, add-on, xls, xlsx
Tutorial: en_Tanagra_Add_In_Excel_2016.pdf
References:
Tanagra, "Tanagra add-in for Excel 2007 and 2010", August 2010.
Tanagra, "Sipina add-in for Excel 2007 and 2010", June 2016.

Sunday, June 12, 2016

Sipina add-in for Excel 2007 and 2010

SIPINA is a data mining software application which implements various supervised learning paradigms. It is an old tool, but it is still used because it is the only free tool which provides fully functional interactive decision tree capabilities.

This tutorial briefly describes the installation and use of the "sipina.xla" add-in in Excel 2007. The approach is easily generalized to Excel 2010. A similar document exists for Tanagra. It nevertheless seemed necessary to clarify the procedure, especially because several users have requested it. Other tutorials exist for earlier versions of Excel (1997-2003) and for Calc (LibreOffice and OpenOffice).

A new tutorial will come soon, showing that the add-in also operates properly under Excel 2016.

Keywords: data importation, excel data file, add-in, add-on, xls, xlsx
Tutorial: en_sipina_excel_addin.pdf
Dataset: heart.xls
References:
Tanagra, "Tanagra add-in for Office 2007 and Office 2010", august 2010.
Tanagra, "Tanagra and Sipina add-ins for Excel 2016", June 2016.