(02-10-2014 07:56 PM)Maize Wrote:
(02-10-2014 07:20 PM)lumberpack4 Wrote:
(02-10-2014 05:46 PM)Eagle78 Wrote: In my opinion, this kind of analysis, while interesting, is somewhat problematic as a predictor of future success. IMO, it adopts a linear construct in using the 5-year and 2-year data, never factoring in random elements such as coaching changes. This is clearly evident in the case of BC. During the 2- and 5-year windows from which this data was drawn, BC had a disastrous coaching change that drove its recruiting and performance to levels far below what it had enjoyed for the previous decade. With another coaching change this past year, BC's performance and recruiting improved significantly. However, this study does not take that into account and uses data only from the nadir of BC's performance/recruiting history.
IMO, for this study to have credibility, the data would need to be run through a sensitivity analysis that takes these significant variables into account, since they will influence the results.
Just my opinion and no offense intended to the authors, but this type of analysis is best left to the professionals.
Eagle, you have passed the comprehensive exams and are now cleared to present your defense. :)
I found it pretty interesting, and at least the authors attempted to do it in the most unbiased way possible.
With all due respect, while I have no way of knowing the intent of the authors of this analysis, IMO it is nevertheless biased with respect to what it purports to do - specifically, predict future success.
The data is not biased. Data, by itself, is just that - data. It is the application of this data and the methodology used to form a conclusion that, IMO, are biased. Again, I am not saying that it was the intent of the authors to devise a biased analysis. It is, however, IMO, a biased result based on how the data was used.
If the purpose of this analysis was simply to rank performance over the past 5 years by doing what it did - i.e., regressing several data points in a basic manner to derive a ranking - I would have less of an issue with it. However, as someone who has done this type of analysis numerous times, IMO you cannot reliably use this analysis to predict future results without either: (a) subjecting it to a sensitivity analysis that accounts for the elements I mentioned in my above post, or (b) assigning to the analysis a margin of error that accounts for the issues I have raised.
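For what it's worth, the kind of check Eagle78 is describing is easy to illustrate. The sketch below uses entirely synthetic data and hypothetical variable names (recruiting, coach_change, performance); it is not the authors' actual model. It just shows how a one-time shock like a coaching change, if left out of a simple linear fit, gets mixed into the conclusions, and how adding it as a covariate lets you measure its effect separately.

```python
# Minimal sensitivity-check sketch: fit the same linear model with and
# without a "coaching change" indicator and compare what each fit says.
# All data here is synthetic; the variable names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 200
recruiting = rng.normal(0, 1, n)                     # standardized recruiting rank
coach_change = (rng.random(n) < 0.2).astype(float)   # 1 = program changed coaches
# Assumed "true" relationship: performance tracks recruiting but takes a
# temporary hit (-1.5) in seasons following a coaching change.
performance = 2.0 * recruiting - 1.5 * coach_change + rng.normal(0, 0.5, n)

def ols(X, y):
    """Ordinary least squares fit; returns the coefficient vector."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

ones = np.ones(n)
# Naive fit: ignores the coaching-change shock entirely.
b_naive = ols(np.column_stack([ones, recruiting]), performance)
# Adjusted fit: includes the shock as its own covariate.
b_adj = ols(np.column_stack([ones, recruiting, coach_change]), performance)

print(f"recruiting coef, naive fit:    {b_naive[1]:.3f}")
print(f"recruiting coef, adjusted fit: {b_adj[1]:.3f}")
print(f"estimated coaching-change hit: {b_adj[2]:.3f}")
```

The adjusted fit recovers the coaching-change effect as a separate number instead of letting it contaminate the overall trend, which is the point of a sensitivity analysis: re-run the model under different assumptions and see whether the ranking holds up.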
IMO, there certainly were modifications to the methodology that could have been adopted, so I would disagree with your comment that this was "done in the most unbiased way possible."
Full disclosure, this is a pet peeve of mine. We see this type of thing all too often. IMO, governments and organizations routinely make critical policy and business decisions based on analyses that can carry the same kinds of built-in biases (intended or unintended). That's bad for all of us, IMO.