Briggs is right to complain that “natural variability” is an ambiguous and easily abused term, but what I would be most inclined to use it for differs from either of the usages he identifies.
He notes that some use the “natural variability” of a phenomenon (such as some average of temperature measurements) to refer to the actual values taken by the data at different points in time, while others use it for the values that would be expected in the absence of some “unnatural” factor (such as CO2 emissions from human use of technology). To me, however, it seems much more natural to use it for the unexplained deviations of the data from what a (partially) explanatory model would predict.
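This third sense can be made concrete with a small sketch. The data here are invented for illustration, and fitting a least-squares linear trend is just one example of a (partially) explanatory model; the residuals it leaves behind are the “natural variability” in the sense described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical yearly temperature anomalies: a linear trend plus noise
# (synthetic data, purely illustrative).
years = np.arange(1980, 2020)
temps = 0.02 * (years - 1980) + rng.normal(0.0, 0.1, size=years.size)

# A (partially) explanatory model: a least-squares linear trend in time.
slope, intercept = np.polyfit(years, temps, 1)
predicted = slope * years + intercept

# "Natural variability" in the third sense: the deviations the model
# leaves unexplained.
natural_variability = temps - predicted
```

Any other explanatory model (physical, statistical, or a mixture) could stand in for the linear fit; what counts as “natural variability” then shifts with the model chosen.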
I had misunderstood Briggs’ claim that theoretical and/or statistical modellers claim to “skillfully” predict natural variability in his first sense: I took it to mean that they claim to predict it completely or accurately, whereas he was referring to the technical definition used in meteorology, under which one prediction is relatively skillful compared to another if its mean squared deviation from the observed data is smaller. But skill in this sense depends both on the reference model used for comparison and on the interval over which the comparison is made. A model that is skillful over a long interval may well contain substantial shorter intervals over which it is not, and even though a prediction of an upward trend in global temperature may appear unskillful over the interval from 2008 to 2014, that made by Arrhenius in 1898 does seem to be skillful (and would be even more so had he predicted a faster rate of increase by reducing his estimated doubling time for CO2 to account for the subsequent increase in both population and per capita energy use).
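The comparison described above can be sketched directly. The function names are mine, and the `1 - MSE_model / MSE_reference` ratio is one conventional way of expressing MSE-based skill, not something Briggs specifies; the core of his definition is just the inequality between the two mean squared deviations.

```python
import numpy as np

def mse(predicted, observed):
    """Mean squared deviation of a prediction from the observed data."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return float(np.mean((predicted - observed) ** 2))

def is_skillful(model_pred, reference_pred, observed):
    """A prediction is relatively skillful if its mean squared deviation
    from the observations is smaller than the reference prediction's."""
    return mse(model_pred, observed) < mse(reference_pred, observed)

def skill_score(model_pred, reference_pred, observed):
    """One conventional summary: 1 - MSE_model / MSE_reference.
    Positive values indicate skill relative to the reference."""
    return 1.0 - mse(model_pred, observed) / mse(reference_pred, observed)
```

Note that passing different slices of the same series to these functions can reverse the verdict, which is exactly the interval-dependence pointed out above, and that swapping in a different `reference_pred` changes the answer just as readily.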