I had a question recently in "Hey, Bill", the essence of which was that we hear people say that pitchers today get hurt more than they used to, back in the old days, and is this true? I responded that I didn’t believe that it was true, to the best of my knowledge, but that it had been a few years since I had studied the issue, and I probably should look at it again. I have done that now, and this is my report on that question.
To get to the bottom line, for those of you who only care about the bottom line and need to get on with your day: The durability of starting pitchers today is essentially in line with historic norms, but has trended downward slightly with the end of the steroid era. The durability of starting pitchers right now is essentially the same as it was in 1967, less than it was in the 1970s, and very slightly less than it was in the heart of the steroid era. The durability of starting pitchers now is distinctly greater than it was in the 1940s and 1950s.
Let me explain the method. I have two groups of pitchers. Group A is those major league pitchers pitching the most innings in a season, up to a limit equivalent to three pitchers per major league team, but not including any Mike Marshall-type relievers who might have thrown an unusual number of innings. Sometimes we will refer to these pitchers as rotation anchors. Group B is all major league pitchers making 15 starts in a season. What I am actually measuring here is the persistence of the pitchers from Group A within Group B.
Explaining it better than that... I have a spreadsheet which contains the records of all major league pitchers since 1876. From that spreadsheet, I eliminated all pitchers
a) from seasons before 1900, and
b) who made fewer than 15 starts in the season.
I then chose the 24 pitchers who pitched the most innings in 1900 (since there were eight major league teams in 1900), the 48 pitchers who pitched the most innings in 1901 (since there were sixteen major league teams in 1901), the 60 pitchers who pitched the most innings in 1965 (since there were twenty major league teams in 1965), etc. The "focus groups" or "test groups" ran from 1900 to 2005. Since the study looks forward in time, one can’t study the most recent seasons. There is no way of knowing whether the starting pitchers from 2013 will prove to be a durable lot, over time, until several seasons after 2013.
The 24 pitchers from 1900, then, are one test group; the 48 pitchers from 1901 are another; the 90 pitchers from 2005 are another. There are a total of 6,390 pitchers in 106 test groups. For each pitcher in each test group, I asked: "Did he make 15 or more starts again the next season? Did he make 15 starts again two seasons later? Did he make 15 starts three seasons later? Four seasons later? Five seasons? Six? Seven? Eight?"
In the first test group in the study, those pitchers from 1900, the average number of future seasons with 15 or more starts is 4.88. This is the highest figure in the study; the average went down in 1901, and has never gone back up that high.
Why? Well, two things. The 1900 season is an atypical season. There were 12 major league teams in 1899, only 8 in 1900, then 16 in 1901. When the 12 teams were shoehorned into eight in 1900, only the better pitchers remained in the rotations. When the American League started the next year, doubling the number of major league jobs, those "select" pitchers had a competitive advantage in terms of remaining in the majors. There’s never been anything else like that in major league history, and this likely explains most of the unusual number for 1900.
Second, the test group is very small—just 24 pitchers, whereas there are at least 48 pitchers in every other test group. When you have a small group, you’re much more likely to get an out-of-bounds result.
Anyway, we could describe that as 61%. If every pitcher who was in the test group in 1900 remained a major league starting pitcher for the next eight seasons, that would be an average of 8.00. The average is 4.88—61%. The major league norm, over history, is just short of 50%; it’s between 48 and 49%.
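For clarity, the arithmetic behind the 61% figure (the 4.88 comes from the article; the rest is just division):

```python
# Durability percentage = (group's average future 15-start seasons) / 8.00,
# since eight future seasons is the maximum the study looks at.
avg_future_seasons = 4.88            # the 1900 test group, from the text
durability_pct = avg_future_seasons / 8.00
print(round(durability_pct * 100))   # prints 61
```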
After 1900, the "durability percentage" dropped very rapidly. From 1901 to 1908 the durability percentage dropped almost every year, reaching a low of 33% in 1908. The 33% figure from 1908 is one of the lowest ever. In all honesty, I do not know why the durability of starting pitchers dropped very sharply in that era; I will speculate, but I do not know.
Remember, the data for 1908 actually involves the years 1908 to 1916, since the study is forward-looking from the base year. The years 1900 to 1916 were the years in which the spitball took over baseball. After the modern pitching distance was established in 1893, reliance on the fastball decreased for a few years, as pitchers had more room to experiment with off-speed pitches.
But once the spitball took over the game (beginning about 1903) the spitball largely replaced the changeup and the curve, or, as writers always said at the time, the curves. Pitchers in the 1893-1905 era almost all threw a variety of curves—a "drop curve", an overhand curve, a hard curve which was not too unlike a slider, a slow curve which was not too unlike a changeup.
The spitball largely drove all of that out of the game. The spitball (and later the emery ball) was thrown like a fastball, but dived or sailed as it got near to home plate due to the irregular surface of the ball. It was thrown with the delivery and the energy of a fastball. In essence, the fastball was being used as an off-speed pitch—and, for many pitchers, as a fastball as well; by 1915 a lot of pitchers didn’t throw anything except spitballs. This might very plausibly have exhausted arms more rapidly, and also it may have led to the use of younger pitchers, since younger pitchers had better fastballs, thus better spitballs. This might explain the rapid decline in pitcher durability in that era.
In any case the durability percentages began to go up after 1908. I should point out here: there are few if any "clean" eight-year periods in baseball history. In any eight-year period there is always an expansion, or a new league, or a strike, or a war, or a redefinition of the strike zone, or something that causes the data for that eight-year window to be not normal. All the data is non-normal for one reason or another.
Anyway, after reaching a low of 33% in 1908, the durability percentage recovered to 44% by 1913. The 1913 percentage is influenced by the Federal League, 1914-1915, which kept more pitchers "in the majors" in 1914 and 1915 than would normally be the case. For the same reason, the durability percentage dropped suddenly to 32% in 1915, when the Federal League collapsed after that season, and recovered suddenly to 50% in 1916. The exact percentage for any one year is kind of a red herring; we can get a better guide to what is happening by using a rolling 7-year average.
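The rolling average is computed in the obvious way; a minimal sketch, assuming a dict mapping each base year to that year's single-season durability percentage:

```python
# Rolling 7-year average of durability percentages: for base year Y,
# average the single-year figures for Y-6 through Y (skipping any
# years missing from the data).
def rolling_seven(pcts, year):
    window = [pcts[y] for y in range(year - 6, year + 1) if y in pcts]
    return sum(window) / len(window)
```

This smooths out one-year flukes like the Federal League swings of 1913-1916.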
After a low of 38% in 1912, the rolling seven-year average moved steadily upward for twenty years, reaching a peak of 52% in 1934. 1934 is the last year in which the forward-looking data is not influenced at all by World War II. The 1934 test group is "tested" in the years 1935 to 1942. After that test group, the data is influenced by the war, so the results begin to slip at that point.
By the end of World War II (1945) the rolling average was down to 34%; the data for 1945 alone is actually 24%. These are both the lowest numbers of all time. The pitchers who pitched during World War II, for the most part, disappeared quickly after World War II; this is not news. The World War II data is very strongly atypical.
Just after World War II, however, we find the most interesting thing in the data.
Since baseball statistics are circular, measuring the success of each player relative to other players, it is difficult to measure the quality of play in constant terms—in other words, difficult to say whether the quality of play in 1952 was better or worse than in 1938. One issue related to this is whether the quality of play in 1946 snapped back immediately to pre-war levels, or whether it took time, after World War II, for the game to get fully back on its feet. This matters, for example, in how we evaluate Hal Newhouser’s 1946 season, when Newhouser was 26-9 with a 1.94 ERA and 275 strikeouts. Is that a fully certified Sandy Koufax-type season, or is that, like Newhouser’s MVP campaigns in 1944 and 1945, marked with an asterisk due to World War II?
If the game fully recovered immediately at the end of the war, there is no reason why the pitcher durability percentage for 1946 should not have been the same as it was before the war. In fact, however, the pitcher durability percentages, looking forward from 1946 and 1947, were 42% and 39%, far lower than the norms of the 1920s and early 1930s, and much lower even than the figure for 1937 (47%), which was probably artificially lowered by the war. The pitcher durability percentages went up steadily after 1947, but they did not fully recover to their pre-war norms until 1958. This certainly seems to suggest that the game did not fully recover, after World War II, for several years.
The rolling seven-year average of pitcher durability percentages, dragged down to 34% by the end of World War II, was back to 45% by 1952, to 47% by 1959, to 49% by 1966, and by 1973 was up to an all-time high of 55%. I have written about this many times, of course, but there was a remarkable generation of pitchers there, with six 300-game winners (Carlton, Seaver, Niekro, Ryan, Perry and Sutton) and an even larger group of outstanding pitchers who fell short of 300 wins (Palmer, Jenkins, Hunter, Tiant, Blyleven, John, Kaat, Blue, Koosman and others.) This exceptionally high pitcher durability percentage is yet another effect of that historic cluster.
The strikes in 1981 and 1994 seem to have had only a tiny effect on our data. The strikes prevented pitchers from making 30 starts in a season; they didn’t prevent them from making 15. Anyway, the pitchers of the 1980s were not quite as durable, year to year, as those of the 1970s, despite the shift from four-man to five-man rotations. The pitcher durability percentage, from its peak of 55% in 1973, fell to 51-52% by 1983—and stayed there essentially until 2002.
After 2002, the pitcher durability percentage did begin to fall, dropping to 47% by 2005. For clarity: The 2005 data looks forward to the years 2006 through 2013, and the seven-year average for 2005 is the average of the test groups for the years 1999 through 2005. I believe that the durability percentage has slipped somewhat in recent years because of the banning of steroids. Steroids helped pitchers to recover more quickly from injuries, thus helped them to stay in the rotation. The banning of steroids has probably led to some decrease in the number of pitchers who stay in the rotation year after year.
All of this comes with the caveat that the major leagues are not an immense universe, impervious to random data flukes. We have 90 pitchers in each test group. A group of 90 people is subject to random perturbations in the data. A change in the norms for a group of that size doesn’t necessarily mean anything; it could be just something that simply happens.
OK, there are some other things I should touch on here.
1) Across time, 83% of pitchers who were rotation anchors in one year made at least 15 starts the next year. 70% made at least 15 starts two years later, 59% made at least 15 starts three years later, 50% made at least 15 starts four years later, 42% did so five years later, 34% did so six years later, 27% did so seven years later, and 22% did so eight years later. These numbers are not substantially different now than they were 100 years ago.
2) The pitcher durability percentage does, of course, decline with age—however, it declines only a tiny bit as the pitcher ages. The future expectations for a 32-year-old pitcher are not dramatically different than the future expectations for a 25-year-old pitcher of the same ability. Many previous studies have shown this to be true.
In this study, the pitcher durability percentage is:
64% at age 22
58% at age 24
54% at age 26
48% at age 28
46% at age 30
40% at age 32
37% at age 34
36% at age 36, and
32% at age 38.
In simple terms, there is just really no telling when a pitcher will blow out. He may wear out when he is 25; he may last until he is past 40. Yes, the likelihood of a blowout DOES increase as the pitcher ages; his expectation for future success does decrease—but it decreases very slowly. As long as a pitcher is healthy and pitching well, a 32-year-old is not a lot different than a 25-year-old. A 32-year-old rotation anchor has 71% of the expected future of a 25-year-old rotation anchor.
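One way to recover that 71% figure from the table above; since age 25 is not listed, I am assuming its value is the midpoint of the listed ages 24 and 26:

```python
# Ratio of an age-32 rotation anchor's durability percentage to an
# age-25 anchor's, from the table in the article.
pct_age_32 = 0.40                    # listed for age 32
pct_age_25 = (0.58 + 0.54) / 2       # assumed midpoint of ages 24 and 26
print(round(pct_age_32 / pct_age_25 * 100))   # prints 71
```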
3) While I was doing this study, I looked at the durability of rookie pitchers, specifically, and I did this because I have often noticed that rookie pitchers are prone to sudden blowouts—e.g. Matt Harvey, Stephen Strasburg, Mark Fidrych, Vance Worley, Wally Bunker, Dick Hughes, Wayne Simpson, Neftali Feliz, J.A. Happ, etc. I’m not saying I’m not impressed by what Jose Fernandez did last year; I’m just saying I’ll be a hell of a lot more impressed if he can do it again in 2014. Anibal Sanchez was sensational as a rookie in 2006; then he got hurt, and it took him seven years to get back where he was.
It is hard to overstate how big a step up the major leagues are, for a young pitcher. In Double-A, a starting pitcher is facing one or two really good hitters a game; let’s say six tough at-bats a game, he probably has to make 20, 25 pressure pitches in a game. In the majors, all of a sudden he’s facing six VERY tough hitters in every lineup; he has to make 70, 75 good pitches a game. It’s totally different. It’s not that the young pitcher can’t do it; many young pitchers can do it—but very often it destroys their arms in a year or two. There is a weeding-out process, and the place where you find MOST of the weak links is right near the beginning, just after the pitcher’s first few months of success.
This study shows definitively that the durability percentages for a rookie pitcher are, in fact, substantially lower than for a rotation anchor of the same age who is not a rookie. The durability percentages:
For a 22-year-old rookie pitcher, 59%. For a 22-year-old pitcher who is not a rookie, 66%.
For a 23-year-old rookie pitcher, 48%. For a 23-year-old pitcher who is not a rookie, 63%.
For a 24-year-old rookie pitcher, 50%. For a 24-year-old pitcher who is not a rookie, 60%.
For a 25-year-old rookie pitcher, 42%. For a 25-year-old pitcher who is not a rookie, 56%.
For a 26-year-old rookie pitcher, 52%. For a 26-year-old pitcher who is not a rookie, 54%.
For a 30-year-old rookie pitcher, 21%. For a 30-year-old pitcher who is not a rookie, 46%.
The number of rookie pitchers is, of course, much smaller than the study as a whole. There are 6,390 pitchers in the study, of whom 658 are rookies. Divide those 658 by age, and the groups are small and the data unstable. But even with the unstable data, there is no age group at which the durability percentage for non-rookies is not higher than the durability percentage for rookies. There are rookies and non-rookies in every age group from 19 to 35, and in every case the non-rookies are more durable in subsequent seasons.