Skimming through the often spaghetti-like code, the number of programs that subject the data to a mixed bag of transformative and filtering routines is simply staggering. Granted, many of these "alterations" range from benign smoothing algorithms (e.g., omitting rogue outliers) to moderate infilling mechanisms (e.g., estimating missing station data from nearby stations). But many others fall somewhere between highly questionable (removing MXD data that demonstrate poor correlations with local temperature) and downright fraudulent (replacing MXD data entirely with measured data to reverse a disorderly trend line).
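For readers unfamiliar with the "benign" end of that spectrum, here is a minimal sketch of what outlier removal and neighbor-based infilling typically look like. This is purely illustrative: the function names, the three-sigma threshold, and the simple neighbor-mean are my own assumptions, not routines from the CRU code.

```python
import numpy as np

def drop_outliers(series, n_sigma=3.0):
    """Mask values more than n_sigma standard deviations from the mean.

    Illustrative sketch only; the CRU code's actual smoothing differs.
    """
    s = np.asarray(series, dtype=float)
    mean, std = np.nanmean(s), np.nanstd(s)
    out = s.copy()
    out[np.abs(s - mean) > n_sigma * std] = np.nan  # rogue outliers -> missing
    return out

def infill_from_neighbors(station, neighbors):
    """Replace missing (NaN) readings with the mean of surrounding stations."""
    s = np.asarray(station, dtype=float)
    nb = np.asarray(neighbors, dtype=float)  # shape: (n_neighbors, n_times)
    fill = np.nanmean(nb, axis=0)            # per-timestep neighbor average
    out = s.copy()
    mask = np.isnan(out)
    out[mask] = fill[mask]
    return out
```

Both steps are defensible in principle; the debate is over how aggressively, and how transparently, they were applied.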
In fact, workarounds for the post-1960 "divergence problem," as described by both RealClimate and Climate Audit, can be found throughout the source code. So much so that perhaps the most ubiquitous programmer's comment (REM) I ran across warns that the particular module "Uses 'corrected' MXD - but shouldn't usually plot past 1960 because these will be artificially adjusted to look closer to the real temperatures."
What exactly is meant by "corrected" MXD, you ask? Outstanding question -- and the answer appears amorphous from program to program. Indeed, while some employ one or two of the aforementioned "corrections," others throw everything but the kitchen sink at the raw data prior to output.
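To make the quoted comment concrete, the "shouldn't usually plot past 1960" warning amounts to blanking out the adjusted values beyond a cutoff year so they never appear on a chart. The sketch below is my own reconstruction of that idea in Python; the names, signature, and NaN-based masking are assumptions, not the actual CRU routine (which was not written in Python).

```python
import numpy as np

CUTOFF_YEAR = 1960  # per the quoted comment: don't plot past 1960

def truncate_corrected_mxd(years, mxd):
    """Blank out 'corrected' MXD values after the cutoff year.

    Hypothetical illustration: plotting libraries such as matplotlib
    simply skip NaN points, so the post-cutoff data vanish from the chart.
    """
    years = np.asarray(years)
    out = np.asarray(mxd, dtype=float).copy()
    out[years > CUTOFF_YEAR] = np.nan
    return out
```

The effect is that the divergence between the adjusted series and real temperatures after 1960 is simply never shown.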
CRU relied on eight papers linking tree-ring growth to temperature in historical times. However, these papers were full of errors, despite having been approved by the (now known to be compromised) peer-review process.
These papers, and indeed the entire CRU methodology, expressed tree-ring thickness solely in terms of temperature, ignoring or minimizing the effects of variations in water availability, nutrients, and disease patterns, as well as the fact that increased CO2 would trigger more robust plant growth even though the actual temperature might remain constant or even decline.