Monday, May 19, 2014

Automated determination of distribution groupings - A StackOverflow collaboration

For those of you not familiar with StackOverflow (SO), it's a programming question-and-answer site in the Stack Exchange network. It's one of the best resources for R-coding tips that I know of, due entirely to the community of users that routinely give expert advice (assuming you show that you have done your homework and provide a clear question and a reproducible example). It's hard to believe that users spend time offering this help for nothing more than virtual reputation points. I think a lot of coders are probably puzzle fanatics at heart and enjoy the challenge of a given problem, but I'm nevertheless amazed by the depth of some of the R-related answers. The following is a short example of the value of this community (via SO), which helped me find a solution to a tricky problem.

I have used figures like the one above (left) in my work at various times. It presents several distributions in the form of boxplots, and uses differing labels (in this case, the lowercase letters) to denote significant differences; i.e. levels sharing a label are not significantly different. This type of presentation is common when showing changes in organism condition indices over time (e.g. Figs 3 & 4, Bullseye puffer fish in Mexico).

In the example above, a Kruskal-Wallis rank sum test is used to test for differences across all levels, followed by pairwise Mann-Whitney rank tests. The result is a matrix of p-values indicating which pairs of distributions differ significantly. So far so good, but it's not always clear how the grouping relationships should be labelled. In this relatively simple example, the tricky part is that level 1 should be grouped with 3 and 5, but 3 and 5 should not be grouped; therefore, two labelling codes should be designated, with level 1 sharing both. I have wondered for some time whether there might be some way to do this in an automated fashion using an algorithm. After many attempts on my own, I finally decided to post a question to SO.
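A minimal sketch of this testing step, using the base stats functions kruskal.test and pairwise.wilcox.test (the simulated data below, including group means and sample sizes, are assumptions for illustration, not the data behind the figure):

# Sketch of the testing step (simulated data; group means are illustrative only)
set.seed(1)
n  <- 30
df <- data.frame(
  level = factor(rep(1:7, each = n)),
  y     = rnorm(7 * n, mean = rep(c(5, 8, 5.5, 10, 5.2, 10.5, 15), each = n), sd = 1.5)
)

# Global test across all levels
kruskal.test(y ~ level, data = df)

# Pairwise Mann-Whitney (Wilcoxon rank sum) tests -> matrix of p-values
pw <- pairwise.wilcox.test(df$y, df$level, p.adjust.method = "none")
pw$p.value

# Non-significant pairs are candidates for a shared group label
pw$p.value > 0.05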

So, my first question "Algorithm for automating pairwise significance grouping labels in R" led me to the concept of the "clique cover problem", and "graph theory" in general, via SO user "David Eisenstat". While I didn't completely understand his recommendation at first, it got me pointed in the right direction - I ultimately found the R package igraph for analyzing and plotting these types of problems.

The next questions were a bit more technical. I figured out that I could return the "cliques" of my grouping-relationship network using the cliques function of the igraph package, but my original attempt was giving me a list of all relationships in my matrix. It was obvious to me that I would need to identify groupings where all levels were fully connected (i.e. each node in the clique connects to all others). So, my next question, "How to identify fully connected node clusters with igraph [in R]", got me a tip from SO user "majom", who showed me that these fully connected cliques could be identified by first reordering the starting nodes in my list of connections (before use in the graph.data.frame function), and then subjecting the resulting igraph object to the function maximal.cliques. So, the first suggestions from David were right on, even though they didn't include code. The result (right plot above) nicely shows all groupings with fully connected cliques [i.e. (1, 3), (1, 5), (2), (4, 6), (7)].
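Here is a minimal sketch of that step; the edge list below is written by hand to match the example groupings and would normally be derived from the p-value matrix:

# Sketch of the clique step (edge list is illustrative, not computed here)
library(igraph)

# Non-significant pairs as an undirected edge list, plus all levels as vertices
edges <- data.frame(from = c(1, 1, 4), to = c(3, 5, 6))
g <- graph.data.frame(edges, directed = FALSE,
                      vertices = data.frame(name = 1:7))

# Maximal cliques are the fully connected groupings
cliq <- maximal.cliques(g)
lapply(cliq, function(v) sort(as.numeric(V(g)$name[v])))
# -> (1, 3), (1, 5), (2), (4, 6), (7)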

The final piece of the puzzle was more cosmetic - "How to order a list of vectors based on the order of values contained within [in R]". A bit vague, I know, but what I was trying to do was to label groups in a progressive way so that earlier levels received their labels first. I think this leads to more legible labelling, especially when levels represent some process of progression. At the time of this posting, I have received a single negative (-1) vote on this question... This may have to do with the clarity of the question - I seem to have confused some of the respondents, based on their follow-up comments asking for clarification - or maybe someone thought I hadn't shown enough effort on my own. There's no way to know without an accompanying comment. In any case, I got a robust approach from SO user "MrFlick", and I can safely say that I would never have come up with such an elegant solution on my own.
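For illustration, a simplified sketch of this ordering step (not MrFlick's answer itself) could order each group by its first and then second element, so that groups containing earlier levels are labelled first:

# Simplified sketch: order the cliques "progressively" and assign letter labels.
# The accepted SO answer generalises the tie-breaking to vectors of any length.
cliq <- list(c(4, 6), c(5, 1), c(7), c(1, 3), c(2))
cliq <- lapply(cliq, sort)                              # sort within each group
ord  <- order(sapply(cliq, `[`, 1), sapply(cliq, `[`, 2))
cliq <- cliq[ord]                                       # groups with earlier levels first
names(cliq) <- letters[seq_along(cliq)]                 # labels "a", "b", "c", ...
cliq
# -> a: (1, 3), b: (1, 5), c: (2), d: (4, 6), e: (7)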

In all, this solution seems to work great. I have tried it out on larger problems involving more levels and it appears to give correct results. Here is an example with 20 levels (a problem that would have been an amazing headache to do manually):
Any comments are welcome. There might be other ways of doing this (clustering?), but searching for similar methods seems to be limited by my ability to articulate the problem. Who would have thought this was an example of a "clique cover problem"? Thanks again to all those that provided help on SO!

Code to reproduce the example:

Saturday, May 3, 2014

Evaluating model performance - A practical example of the effects of overfitting and data size on prediction


Following my last post on decision making trees and machine learning, where I presented some tips gathered from the "Pragmatic Programming Techniques" blog, I have again been impressed by that blog's clear presentation of strategies for evaluating model performance. I have seen some of these topics presented elsewhere - especially graphics showing the link between model complexity and prediction error (i.e. "overfitting") - but this particular presentation made me want to revisit the topic and put together a practical example in R that I could use when teaching.

Effect of overfitting on prediction
The above graph shows polynomial fits of various degrees to an artificial data set - the "real" underlying model is a 3rd-degree polynomial (y ~ b3*x^3 + b2*x^2 + b1*x + a). One gets a good idea that the higher-degree models are incorrect given the single-term removal significance tests provided by the summary function (e.g. 5th-degree polynomial model):

Call:
lm(formula = ye ~ poly(x, degree = 5), data = df)
Residuals:
Min 1Q Median 3Q Max
-4.4916 -2.0382 -0.4417 2.2340 8.1518
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 29.3696 0.4304 68.242 < 2e-16 ***
poly(x, degree = 5)1 74.4980 3.0432 24.480 < 2e-16 ***
poly(x, degree = 5)2 54.0712 3.0432 17.768 < 2e-16 ***
poly(x, degree = 5)3 23.5394 3.0432 7.735 9.72e-10 ***
poly(x, degree = 5)4 -3.0043 3.0432 -0.987 0.329
poly(x, degree = 5)5 1.1392 3.0432 0.374 0.710
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 3.043 on 44 degrees of freedom
Multiple R-squared: 0.9569, Adjusted R-squared: 0.952
F-statistic: 195.2 on 5 and 44 DF, p-value: < 2.2e-16
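For context, here is a minimal sketch of how such an artificial data set and fit could be generated (the polynomial coefficients, noise level, and seed below are assumptions, not the exact values behind the output above):

# Simulate data from a "real" 3rd-degree polynomial plus noise (illustrative values)
set.seed(1)
n  <- 50
x  <- seq(0, 10, length.out = n)
y  <- 0.1 * x^3 - 1.5 * x^2 + 6 * x + 25           # underlying 3rd-degree model
ye <- y + rnorm(n, sd = 3)                          # added measurement error
df <- data.frame(x = x, ye = ye)

# Fit polynomials of increasing degree; inspect single-term significance tests
fits <- lapply(1:7, function(d) lm(ye ~ poly(x, degree = d), data = df))
summary(fits[[5]])                                  # cf. the 5th-degree summary above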


Nevertheless, a more robust analysis of prediction error is through cross-validation - splitting the data into training and validation subsets. The following example does this split at 50% training and 50% validation, with 500 permutations.
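A sketch of that cross-validation loop, continuing from the simulated data above (using mean absolute error as the error measure, which is an assumption here):

# 50% training / 50% validation split, repeated 500 times, over model degrees 1-7
degrees <- 1:7
nperm   <- 500
cv.err  <- matrix(NA, nrow = nperm, ncol = length(degrees))

for (i in seq_len(nperm)) {
  train <- sample(nrow(df), size = floor(nrow(df) / 2))
  for (j in seq_along(degrees)) {
    fit  <- lm(ye ~ poly(x, degree = degrees[j]), data = df[train, ])
    pred <- predict(fit, newdata = df[-train, ])
    cv.err[i, j] <- mean(abs(pred - df$ye[-train]))     # validation MAE
  }
}

plot(degrees, colMeans(cv.err), type = "b",
     xlab = "Polynomial degree", ylab = "CV prediction error (MAE)")
abline(v = 3, lty = 2, col = "grey")                    # "true" model complexity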


So, here we have the typical trend of increasing prediction error with model complexity (via cross-validation, CV) when the model is overfit (i.e. > 3rd-degree polynomial, vertical grey dashed line). For reference, the horizontal grey dashed line shows the original amount of error added, which is where the CV error reaches its minimum.

Effect of data size on prediction
Another interesting aspect presented in the post is the use of CV to estimate the relationship between prediction error and the amount of data used in the model fitting (credit given to Andrew Ng from Stanford). This is a helpful concept when determining what benefit in prediction would follow an investment in more data sampling (see the sketch below):
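A rough sketch of this learning-curve idea, again with simulated data and a fixed 7th-degree polynomial (all settings here are assumptions):

# Training vs. cross-validation error as a function of data size
sizes  <- seq(20, 200, by = 20)
nperm  <- 100
train.mae <- cv.mae <- matrix(NA, nrow = nperm, ncol = length(sizes))

for (j in seq_along(sizes)) {
  for (i in seq_len(nperm)) {
    x  <- runif(sizes[j], 0, 10)
    ye <- 0.1 * x^3 - 1.5 * x^2 + 6 * x + 25 + rnorm(sizes[j], sd = 3)
    d  <- data.frame(x = x, ye = ye)
    train <- sample(nrow(d), size = floor(nrow(d) / 2))
    fit   <- lm(ye ~ poly(x, degree = 7), data = d[train, ])
    train.mae[i, j] <- mean(abs(fitted(fit) - d$ye[train]))
    cv.mae[i, j]    <- mean(abs(predict(fit, d[-train, ]) - d$ye[-train]))
  }
}

matplot(sizes, cbind(colMeans(train.mae), colMeans(cv.mae)), type = "b", pch = 1:2,
        xlab = "Data size", ylab = "Prediction error (MAE)")
legend("bottomright", legend = c("training", "CV"), pch = 1:2, col = 1:2)
abline(h = mean(abs(rnorm(1e5, sd = 3))), lty = 2, col = "grey")  # error from noise alone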


Here we see that, given a fixed model complexity, training error and CV error converge. Again, the horizontal grey dashed line indicates the actual measurement error of the response variable. So, in this example, there is not much improvement in prediction error beyond a data size of ca. 100. Interestingly, the example also demonstrates that even with an overfit model containing a 7th-degree polynomial, the increased prediction error is overcome with a larger data set. For comparison, the same exercise done with the correct 3rd-degree model shows that a relatively low prediction error is achieved even when the data set is small (2.6 MAE for the 3rd-degree poly. vs 3.7 MAE for the 7th-degree poly.):


Code to reproduce example: