On Pitcher Durability

March 31, 2014

                I had a question recently in "Hey, Bill", the essence of which was that we hear people say that pitchers today get hurt more than they used to, back in the old days, and is this true?   I responded that I didn’t believe that it was true, to the best of my knowledge, but that it had been a few years since I had studied the issue, and I probably should look at it again.   I have done that now, and this is my report on that question.

                To get to the bottom line, for those of you who only care about the bottom line and need to get on with your day:  The durability of starting pitchers today is essentially in line with historic norms, but has trended downward slightly with the end of the steroid era.  The durability of starting pitchers right now is essentially the same as it was in 1967, less than it was in the 1970s, and very slightly less than it was in the heart of the steroid era.   The durability of starting pitchers now is distinctly greater than it was in the 1940s and 1950s.

                Let me explain the method.  I have two groups of pitchers.  Group A is those major league pitchers pitching the most innings in a season, up to a limit equivalent to three pitchers per major league team, but not including any Mike Marshall-type relievers who might have thrown an unusual number of innings.   Sometimes we will refer to these pitchers as rotation anchors.   Group B is all major league pitchers making 15 or more starts in a season.   What I am actually measuring here is the persistence of the pitchers from Group A within Group B.  

                Explaining it better than that. . .I have a spreadsheet which contains the records of all major league pitchers since 1876.   From that spreadsheet, I eliminated all pitchers

                a)  from seasons before 1900, and

                b)  who made fewer than 15 starts in the season.

                I then chose the 24 pitchers who pitched the most innings in 1900 (since there were eight major league teams in 1900), the 48 pitchers who pitched the most innings in 1901 (since there were sixteen major league teams in 1901), the 60 pitchers who pitched the most innings in 1965 (since there were twenty major league teams in 1965), etc.    The "focus groups" or "test groups" ran from 1900 to 2005.    Since the study looks forward in time, one can’t study the most recent seasons.    There is no way of knowing whether the starting pitchers from 2013 will prove to be a durable lot, over time, until several seasons after 2013.

                The 24 pitchers from 1900, then, are one test group; the 48 pitchers from 1901 are another; the 90 pitchers from 2005 are another.   There are a total of 6,390 pitchers in 106 test groups.   For each pitcher in each test group, I asked "Did he make 15 or more starts again the next season?   Did he make 15 starts again two seasons later?   Did he make 15 starts three seasons later?   Four seasons later?   Five seasons?  Six?  Seven?   Eight?"

                In the first test group in the study, those pitchers from 1900, the average number of future seasons with 15 or more starts is 4.88.   This is the highest figure in the study; the average went down in 1901, and has never gone back up that high.

                Why?   Well, two things.   The 1900 season is an atypical season.   There were 12 major league teams in 1899, only 8 in 1900, then 16 in 1901.    When the 12 teams were shoehorned into eight in 1900, only the better pitchers remained in the rotations.   When the American League started the next year, doubling the number of major league jobs, those "select" pitchers had a competitive advantage in terms of remaining in the majors.   There’s never been anything else like that in major league history, and this likely explains most of the unusual number for 1900.

                Second, the test group is very small—just 24 pitchers, whereas there are at least 48 pitchers in every other test group.   When you have a small group, you’re much more likely to get an out-of-bounds result.

                Anyway, we could describe that as 61%.    If every pitcher who was in the test group in 1900 remained a major league starting pitcher for the next eight seasons, that would be an average of 8.00.    The average is 4.88—61%.   The major league norm, over history, is just short of 50%; it’s between 48 and 49%.
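                For those who want to see the arithmetic spelled out, here is a minimal sketch of the calculation in Python.  The data layout and names here are mine, not the study's; any season-level table of games started per pitcher would serve.

```python
# Sketch of the durability-percentage calculation described above.
# `seasons` maps (pitcher, year) -> games started; this layout is
# hypothetical -- any season-level table of starts would serve.

def durability_pct(test_group, base_year, seasons, horizon=8):
    """Average number of future seasons (out of the next `horizon`)
    with 15 or more starts, expressed as a share of the maximum
    possible (8.00 would be 100%)."""
    total = 0
    for pitcher in test_group:
        total += sum(
            1
            for offset in range(1, horizon + 1)
            if seasons.get((pitcher, base_year + offset), 0) >= 15
        )
    avg_future_seasons = total / len(test_group)  # e.g. 4.88 for the 1900 group
    return avg_future_seasons / horizon           # e.g. 0.61 (61%)

# Toy example: a pitcher who makes 15+ starts in 4 of the next 8 seasons.
toy = {("Cy", 1901): 30, ("Cy", 1902): 20, ("Cy", 1903): 18,
       ("Cy", 1904): 16, ("Cy", 1905): 10}
print(durability_pct(["Cy"], 1900, toy))  # -> 0.5
```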

                After 1900, the "durability percentage" dropped very rapidly.   From 1901 to 1908 the durability percentage dropped almost every year, reaching a low of 33% in 1908.   The 33% figure from 1908 is one of the lowest ever.    In all honesty, I do not know why the durability of starting pitchers dropped very sharply in that era; I will speculate, but I do not know.

                Remember, the data for 1908 actually involves the years 1908 to 1916, since the study is forward-looking from the base year.   The years 1900 to 1916 were the years in which the spitball took over baseball.    After the modern pitching distance was established in 1893, reliance on the fastball decreased for a few years, as pitchers had more room to experiment with off-speed pitches.

                But once the spitball took over the game (beginning about 1903) the spitball largely replaced the changeup and the curve, or, as writers always said at the time, the curves.    Pitchers in the 1893-1905 era almost all threw a variety of curves—a "drop curve", an overhand curve, a hard curve which was not too unlike a slider, a slow curve which was not too unlike a changeup.

                The spitball largely drove all of that out of the game.  The spitball (and later the emery ball) was thrown like a fastball, but dived or sailed as it got near to home plate due to the irregular surface of the ball.  It was thrown with the delivery and the energy of a fastball.   In essence, the fastball was being used as an off-speed pitch—and, for many pitchers, as a fastball as well; by 1915 a lot of pitchers didn’t throw anything except spitballs.  This might very plausibly have exhausted arms more rapidly, and also it may have led to the use of younger pitchers, since younger pitchers had better fastballs, thus better spitballs.   This might explain the rapid decline in pitcher durability in that era.

                In any case the durability percentages began to go up after 1908.   I should point out here:  there are few if any "clean" eight-year periods in baseball history.  In any eight-year period there is always an expansion, or a new league, or a strike, or a war, or a redefinition of the strike zone, or something that causes the data for that eight-year window to be not normal.   All the data is non-normal for one reason or another.

                Anyway, after reaching a low of 33% in 1908, the durability percentage recovered to 44% by 1913.   The 1913 percentage is influenced by the Federal League, 1914-1915, which kept more pitchers "in the majors" in 1914 and 1915 than would normally be the case.   For the same reason, the durability percentage dropped suddenly to 32% in 1915, due to the collapse of the Federal League after 1915, and recovered suddenly to 50% in 1916.    What the exact percentage is for any one year is kind of a red herring; we can get a better guide to what is happening by using a rolling 7-year average.
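                The rolling average itself is straightforward; here is a minimal sketch, using made-up yearly percentages rather than the study's actual figures.

```python
# A minimal sketch of the rolling seven-year average used to smooth
# the yearly durability percentages.  The values below are made up
# for illustration, not the study's actual numbers.

def rolling_mean(values, window=7):
    """Trailing mean: each entry averages the current value and up to
    window-1 preceding values, so early entries use shorter windows."""
    out = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

pcts = [44, 32, 50, 48, 46, 49, 51, 52]  # hypothetical yearly percentages
print([round(x, 1) for x in rolling_mean(pcts)])
# -> [44.0, 38.0, 42.0, 43.5, 44.0, 44.8, 45.7, 46.9]
```

A trailing window is used here because the article's seven-year average for a given year (e.g. 2005) averages that year's test group with the six preceding ones.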

                After a low of 38% in 1912, the rolling seven-year average moved constantly upward for twenty years, reaching a peak of 52% in 1934.    1934 is the last year in which the forward-looking data is not influenced at all by World War II.    The 1934 test group is "tested" in the years 1935 to 1942.   After that test group, the data is influenced by the war, so the results begin to slip at that point.

                By the end of World War II (1945) the rolling average was down to 34%; the data for 1945 alone is actually 24%.   These are both the lowest numbers of all time.   The pitchers who pitched during World War II, for the most part, disappeared quickly after World War II; this is not news.    The World War II data is very strongly atypical.

                Just after World War II, however, we find the most interesting thing in the data.

                Since baseball statistics are circular, measuring the success of each player relative to other players, it is difficult to measure the quality of play in constant terms—in other words, difficult to say whether the quality of play in 1952 was better or worse than in 1938.     One issue related to this is whether or not the quality of play in 1946 snapped back immediately to pre-war levels, or whether it took time, after World War II, for the game to get fully back on its feet.   This matters, for example, in how we evaluate Hal Newhouser’s 1946 season, when Newhouser was 26-9 with a 1.94 ERA and 275 strikeouts.    Is that a fully certified Sandy Koufax-type season, or is that, like Newhouser’s MVP campaigns in 1944 and 1945, marked with an asterisk due to World War II?

                If the game fully recovered immediately at the end of the war, there is no reason why the pitcher durability percentage for 1946 should not have been the same as it was before the war.   In fact, however, the pitcher durability percentages, looking forward from 1946 and 1947, were 42% and 39%, far lower than the norms of the 1920s and early 1930s, and much lower even than the figure for 1937 (47%), which was probably artificially lowered by the war.   The pitcher durability percentages went up steadily after 1947, but they did not fully recover to their pre-war norms until 1958.    This certainly seems to suggest that the game did not fully recover, after World War II, for several years.

                The rolling seven-year average of pitcher durability percentages, dragged down to 34% by the end of World War II, was back to 45% by 1952, to 47% by 1959, to 49% by 1966, and by 1973 was up to an all-time high of 55%.   I have written about this many times, of course, but there was a remarkable generation of pitchers there, with six 300-game winners (Carlton, Seaver, Niekro, Ryan, Perry and Sutton) and an even larger group of outstanding pitchers who fell short of 300 wins (Palmer, Jenkins, Hunter, Tiant, Blyleven, John, Kaat, Blue, Koosman and others.)  This exceptionally high pitcher durability percentage is yet another effect of that historic cluster.

                The strikes in 1981 and 1994 seem to have had only a tiny effect on our data.   The strikes prevented pitchers from making 30 starts in a season; they didn’t prevent them from making 15.  Anyway, the pitchers of the 1980s were not quite as durable, year to year, as those of the 1970s, despite the shift from four-man to five-man rotations.   The pitcher durability percentage, from its peak of 55% in 1973, fell to 51-52% by 1983—and stayed there essentially until 2002.

                After 2002, the pitcher durability percentage did begin to fall, dropping to 47% by 2005.  For clarity:  The 2005 data looks forward to the years 2006 through 2013, and the seven-year average for 2005 is the average of the test groups for the years 1999 through 2005.   I believe that the durability percentage has slipped somewhat in recent years because of the banning of steroids.   Steroids helped pitchers to recover more quickly from injuries, thus helped them to stay in the rotation.   The banning of steroids has probably led to some decrease in the number of pitchers who stay in the rotation year after year. 

                All of this comes with the caveat that the major leagues are not an immense universe, impervious to random data flukes.   We have 90 pitchers in each test group.   A group of 90 people is subject to random perturbations in the data.   A change in the norms for a group of that size doesn’t necessarily mean anything; it could be just something that simply happens.

                OK, there are some other things I should touch on here.

                1)  Across time, 83% of pitchers who were rotation anchors in one year made at least 15 starts the next year.   70% made at least 15 starts two years later, 59% made at least 15 starts three years later, 50% made at least 15 starts four years later, 42% did so five years later, 34% did so six years later, 27% did so seven years later, and 22% did so eight years later.   These numbers are not substantially different now than they were 100 years ago. 

                2)  The pitcher durability percentage does, of course, decline with age—however, it declines only a tiny bit as the pitcher ages.    The future expectations for a 32-year-old pitcher are not dramatically different than the future expectations for a 25-year-old pitcher of the same ability.    Many previous studies have shown this to be true. 

                In this study, the pitcher durability percentage is:

                64% at age 22

                58% at age 24

                54% at age 26

                48% at age 28

                46% at age 30

                40% at age 32

                37% at age 34

                36% at age 36, and

                32% at age 38.


                In simple terms, there is just really no telling when a pitcher will blow out.   He may wear out when he is 25; he may last until he is past 40.   Yes, the likelihood of a blowout DOES increase as the pitcher ages; his expectation for future success does decrease—but it decreases very slowly.     As long as a pitcher is healthy and pitching well, a 32-year-old is not a lot different than a 25-year-old.    A 32-year-old rotation anchor has 71% of the expected future of a 25-year-old rotation anchor.
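                That 71% figure can be reproduced from the table above, if we read the age-25 value as the midpoint of the age-24 and age-26 values; the table lists only even ages, so the interpolation is my assumption, not something stated in the study.

```python
# Reproducing the 71% comparison from the age table above, assuming
# the age-25 value is the midpoint of ages 24 and 26 (the table lists
# only even ages, so the interpolation is an assumption on my part).
table = {22: 64, 24: 58, 26: 54, 28: 48, 30: 46,
         32: 40, 34: 37, 36: 36, 38: 32}

age_25 = (table[24] + table[26]) / 2  # 56.0
ratio = table[32] / age_25            # 40 / 56
print(round(ratio, 2))                # -> 0.71
```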

                3)  While I was doing this study, I looked at the durability of rookie pitchers, specifically, and I did this because I have often noticed that rookie pitchers are prone to sudden blowouts—e.g. Matt Harvey, Stephen Strasburg, Mark Fidrych, Vance Worley, Wally Bunker, Dick Hughes, Wayne Simpson, Neftali Feliz, J. A. Happ, etc. etc.   I’m not saying I’m not impressed by what Jose Fernandez did last year; I’m just saying I’ll be a hell of a lot more impressed if he can do it again in 2014.   Anibal Sanchez was sensational as a rookie in 2006; then he got hurt, and it took him seven years to get back where he was. 

                It is hard to overstate how big a step up the major leagues are, for a young pitcher.   In Double-A, a starting pitcher is facing one or two really good hitters a game; let’s say six tough at-bats a game, he probably has to make 20, 25 pressure pitches in a game.   In the majors, all of a sudden he’s facing six VERY tough hitters in every lineup; he has to make 70, 75 good pitches a game.   It’s totally different.   It’s not that the young pitcher can’t do it; many young pitchers can do it—but very often it destroys their arms in a year or two.     There is a weeding-out process, and the place where you find MOST of the weak links is right near the beginning, just after the pitcher’s first few months of success.

                This study shows definitively that the durability percentages for a rookie pitcher are, in fact, substantially lower than for a rotation anchor of the same age who is not a rookie.   The durability percentages:

                For a 22-year-old rookie pitcher, 59%.  For a 22-year-old pitcher who is not a rookie, 66%.

                For a 23-year-old rookie pitcher, 48%.    For a 23-year-old pitcher who is not a rookie, 63%.

                For a 24-year-old rookie pitcher, 50%.   For a 24-year-old pitcher who is not a rookie, 60%.

                For a 25-year-old rookie pitcher, 42%.   For a 25-year-old pitcher who is not a rookie, 56%.

                For a 26-year-old rookie pitcher, 52%.   For a 26-year-old pitcher who is not a rookie, 54%. 

                For a 30-year-old rookie pitcher, 21%.   For a 30-year-old pitcher who is not a rookie, 46%.

                The number of rookie pitchers is much smaller than the study as a whole.   There are 6,390 pitchers in the study, of whom 658 are rookies.   Divide those 658 by age, and the groups are small and the data unstable.   But even with the unstable data, there is no age group at which the durability percentage for non-rookies is not higher than the durability percentage for rookies.   There are rookies and non-rookies in every group from ages 19 to 35, and in every case the non-rookies are more durable in subsequent seasons.
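                The rookie/non-rookie comparison amounts to grouping the test-group pitchers by age and rookie status and averaging within each cell; a minimal sketch, with a hypothetical record layout:

```python
# Sketch of the rookie vs. non-rookie comparison: group pitchers by
# age and rookie status, then average durability within each cell.
# The (age, is_rookie, durability_pct) record layout is hypothetical.
from collections import defaultdict

def durability_by_age(pitchers):
    """Returns {age: (rookie_avg, non_rookie_avg)} for every age that
    has at least one rookie and one non-rookie."""
    buckets = defaultdict(lambda: {True: [], False: []})
    for age, is_rookie, pct in pitchers:
        buckets[age][is_rookie].append(pct)
    return {
        age: (sum(g[True]) / len(g[True]), sum(g[False]) / len(g[False]))
        for age, g in buckets.items()
        if g[True] and g[False]  # need both groups to compare
    }

sample = [(24, True, 50), (24, True, 46), (24, False, 62), (24, False, 58)]
print(durability_by_age(sample))  # -> {24: (48.0, 60.0)}
```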


COMMENTS (30 Comments, most recent shown first)

When I saw Pineda pitch in Tacoma in 2011 I noticed something. Late in the game with runners on base his mechanics changed. He overthrew, his velocity increased and he got wild. He was overthrowing. Because he was rushed to the big leagues next spring instead of spending a couple months working in Tacoma I assume this wasn't fixed. I'll bet a lot of pitching coaches don't mess with successful rookie mechanics because they don't want to screw them up.
11:48 AM Apr 4th
You seem to think there's a magical formula for determining which pitchers will flame out. There's not. Some guys do and some guys don't. Pitchers on bad teams with bad mechanics who throw a lot of pitches early with low SO data are much much more likely to flame out. The Dave Fleming types. That's not new it's been known for a long time. Mark Fidrych flaming out was not a big shock.
I knew Pineda's arm was blown in June and if there's something to be learned from it I'm glad to start this argument. What I was hoping was that a Mets fan who watched Harvey saw something, or a Marlins fan who's seen Fernandez pitch has seen something good or bad. That's unlikely now.
9:35 AM Apr 4th
Verlander's ERA was 1.01 in July, 6.83 in August, and 4.82 in September. Sabathia's ERA was 5.23 in August and 5.61 in September.

Mark Fidrych in 1976 had a 1.40 ERA in July, 2.66 in August, and 3.83 in September. Below you said, "Matt Harvey's ERA went up every month but not as drastically. Looked up Fidrych same thing. I'll bet it holds for most of the blown out rookies." Now you're saying July is the key month.

You have no clue what you're talking about, and you've offered no evidence that you can do anything other than claim that pitchers who got hurt were overworked and/or had bad mechanics, and that pitchers who stayed healthy had good mechanics. None of that adds anything to any discussion. By jumping out of the discussion early, tangotiger was much smarter than I was.

8:14 PM Apr 3rd
The Pineda sore arm thing wasn't all that mysterious. I remember getting texts in May, June saying "What's Wedge doing?" "Better get him outta there?" "Oh crap! Another batter!" As an M's fan it was a crisis in waiting, wondering if Wedge knew he was on pace for well over 200 innings. And then Wedge says he has no plans to shut him down, he's a big league pitcher, he needs to be used to pitching in September. That's what real men do. It was ridiculous. When he was getting shelled through July and August even Wedge realized what was happening.
6:54 PM Apr 3rd
You can look up the month by month splits of Blyleven, Buehrle, Verlander and Sabathia. Wasn't watching every game they pitched those years so not sure if the manager left them in to labor late in games. But Verlander's ERA in July was 1.01, Sabathia's was 2.83, Buehrle's was 2.88, so they weren't burning out. They were getting stronger. When the league made adjustments against them they weren't forced to overthrow. All those guys have good mechanics.
6:05 PM Apr 3rd
You answered 1 of the 3 questions. So you have no idea either. But I'm sure when a starter does get hurt, you'll be saying it was evident all along.
2:52 PM Apr 3rd
The only reason Globex isn't running a major league team is that his daddy isn't Buzzie Bavasi.
2:19 PM Apr 3rd
Pitcher with bad mechanics throws a lot of innings in May and June, then has his ERA shoot up in July. Let's watch this year and see if it happens. James Paxton and Taijuan Walker have good mechanics but let's watch and see. Will be interesting.
2:05 PM Apr 3rd
Which was my point.......
2:13 AM Apr 3rd
I think that if any of us could predict who was going to get hurt, we wouldn't be posting on this site.

We would be making a lot of money working for a MLB team.

8:33 PM Apr 2nd
Okay, that was actually 3 questions!
7:48 PM Apr 2nd
Since you know the difference between how good organizations and bad organizations handle their pitchers, and you believe "the Yankees should have been aware (Pineda) was damaged goods," I have 2 questions for you:

1. Which pitchers are damaged goods right now? All of your examples are with 20/20 hindsight. Which pitchers have been handled well and are less likely to get hurt?

2. How do you explain Bert Blyleven (25 starts, 164 innings as a rookie), C.C. Sabathia (33 starts, 180 innings as a rookie, no time in minors), Mark Buehrle (221 major league innings at age 22, minor league high was 119), and Justin Verlander (186 innings as a rookie).
7:44 PM Apr 2nd
Looking at Pineda's month by month splits, he had a 2.01 ERA in April, 2.81 in May, 3.03 in June, 6.75 in July when he was exhausted, 4.7 in August, and 4.0 in 2 starts in Sept.
Matt Harvey's ERA went up every month but not as drastically. Looked up Fidrych same thing. I'll bet it holds for most of the blown out rookies.
1:57 PM Apr 2nd
Good contrast between a good organization and a bush league organization. The good organization can start its stud prospect in AAA, baby him through half a season establishing success there, then call him up in July or August as a 4 or 5 starter.
The bad organization, short on pitching, starts the year with the stud starting 2nd in the rotation, relies on him to get through the tough months of May and June, then panics when they realize he's on pace to throw 225 innings and shuts him down in September.
1:18 PM Apr 2nd
Pineda was also the victim of his own success. He had pitched so well early and the M's bullpen was so bad he was often left in to labor in the 7th and 8th inning. In tough situations with runners on base a normal rookie would've been pulled. Pineda also noticeably labored putting extra stress on his arm after already throwing a lot of pitches. I wonder if the same thing happened with Harvey last year and some of the other rookie burnouts.
12:42 PM Apr 2nd
Why are you ignoring that Felix pitched in the minors?
9:22 AM Apr 2nd
Felix started 12 games as a rookie and pitched 84 innings. Pineda started 28 games, 171 innings. All year manager Wedge said Pineda needed to throw a lot and he'd stay active through September. Pineda's previous high for innings in the minors was 139. Major league pitching is much more stressful. Wedge should not have been allowed to abuse the arm of the M's top pitching prospect. The Yankees should have been aware he was damaged goods.
11:44 PM Apr 1st
Great article, Bill. Thanks.
5:30 PM Apr 1st
Bill, I would think the numbers in the first decade would be skewed by the fact that the start group may have been at the end of their careers...i.e.: 35 year old guys in 1900, in their last one or two full seasons.
2:17 PM Apr 1st
This info that only 50% of rotation anchors make 15 or more starts every year for 4 years is major. It makes me think of the concussion stats of NFLers. If I had a son, I'd think twice before encouraging him to be a pitcher. Yeesh.
11:27 AM Apr 1st
An additional big factor to the anomaly factor in the 1900 group is that pitching effectively suddenly got a lot easier in their near future with the introduction of the modern foul strike rule. (1901 in NL, 1903 in AL). A pitcher is suddenly throwing fewer pitches per game and facing fewer situations that took them out of coasting mode on their way to a complete game. But out of habit pitchers were still being used in a cautious pattern shaped by the blowouts in the first few years of the modern pitching distance. That is, as we entered a new era where pitchers could be worked harder, they initially were not.

And that may also relate to what was happening by 1908 beyond the spitball factor. As folks began to realize that it was a new era and pitchers could be worked harder, we entered an experimentation phase where the envelope was pushed too hard until we got a better feel of the balance between immediate return and long-term return.
8:46 AM Apr 1st
Pineda faced nearly 700 batters in his rookie season. Including the minors, Felix faced.... nearly 700 batters.
6:28 AM Apr 1st
Seems like the M's babied Felix Hernandez along when he was 19, limiting his innings and workload. They took the opposite approach with Michael Pineda and tried to get him to throw all year. Looks like they're trying the same thing with Taijuan Walker and James Paxton this year, giving them major roles in the rotation as rookies. Is there a pattern of heavy rookie workloads succeeding, or does babying guys work out more often?
10:05 PM Mar 31st
Bill, have you noticed any difference between AL and NL pitchers' durability since the DH started being used? A DH represents an additional tough out. Is it possible that the DH helps to wear down pitchers?
8:05 PM Mar 31st
Yes, we cannot prove that it is not a selection bias issue, but it isn't. I anticipated this argument and tried to stave it off by mentioning pitchers, but... look at the pitchers who exit the data after one year. Mark Fidrych, Matt Harvey, Wayne Simpson, Neftali Feliz, Joe Black, Bob Grim, Dave Rozema, Jim Nash in '66, Dick Drott in '57... these guys do not leave the majors after one good year because they're not really good enough. They get hurt.
5:22 PM Mar 31st
Re: Rookie pitchers. This may be a selection bias issue. A non-rookie at age 24 will have played at least two seasons, and so, we are more sure of his talent level. A rookie at age 24 will have played only one season, and so, we aren't as sure about his talent level. If both players at age 24 had the same FIP and same ERA, it's possible that the non-rookie is better, simply because he was talented enough to enter the league at age 23.

So, the pattern being shown is not necessarily about rookie v non-rookie, but simply between talent v lesser-talent.
2:04 PM Mar 31st
On the issue of the pitcher durability score going down "despite" the change to a five man rotation. I would think that perhaps the change to the five man rotation introduces a batch of less-reliable, marginal fifth starters to the mix, who are easily replaced and therefore don't last as long.
12:31 PM Mar 31st
Responding to Jalbright, one could study that with this data by dividing the “A” group of pitchers into A1 and A2, A1 being those pitchers who pitched more innings, A2 being those who still pitched a lot of innings, but not as many. But I don’t think I’m going to do that research, because I’m 90% certain what it would show: it would show that the A1 pitchers had GREATER durability in future years than the A2 pitchers. It would show that, I am confident, because:
a) the A1 pitchers would have stronger track records in previous seasons than the A2 pitchers (which is why they pitched more innings), and
b) the A1 pitchers would be slightly better pitchers than the A2 pitchers... very slightly better ERAs, strikeout to walk ratios, etc.
This doesn’t mean that it might not be true that there is some “workload limit” at which injury rates increase; it just means we’re not going to find it with this approach.

11:41 AM Mar 31st
Is there any correlation between either IP in the season or career and pitchers blowing out? I realize that the intuitive idea, that heavier usage tends to lead to a blowout, may not actually come true in practice, but it would seem worth investigating
9:26 AM Mar 31st
I wonder how much the spike in pitcher durability in the 1970s led to the impression that there was a problem in the 1980s, when in reality things were just returning to the norm. Great article.

7:47 AM Mar 31st