
Good Seasons and Poor Seasons for Pitchers

March 8, 2017


              In a study published two days ago, I took a look at the issue of pitchers having good seasons (by their own standards) or poor seasons.   In the hope that someone else may wish to pick up this line of research later and do more with it, I thought I should explain how I decided whether a pitcher had had a good season (by his own standards) or a poor season.

              My first decision in this pathway was that I would evaluate the pitcher’s performance based on his strikeouts, walks, hit batsmen, home runs allowed and wild pitches—that is, on those portions of his data which are more or less independent of park and defensive context.     To do that, I needed to state this data as a winning percentage.

              Backing off a half step, I developed a way several years ago to state a pitcher’s strikeout/walk ratio as a winning percentage.   It’s actually a very useful system, useful for a lot of different purposes.   What you do is, you multiply the pitcher’s strikeouts by the LEAGUE walks, and then divide this by the sum of (1) the same thing, and (2) the league strikeouts, multiplied by the pitcher’s walks.  Cross-multiplying, I call it; I suppose others would call it the same. It works remarkably well, and very often delivers a winning percentage which is close to the pitcher’s actual winning percentage, or to the winning percentage that he deserves, given his overall performance.  
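
              For anyone who wants to play with this, here is a minimal sketch of that cross-multiplication in Python.   The function and variable names are mine, and the league totals in the example are invented:

```python
def kbb_winning_pct(pitcher_so, pitcher_bb, league_so, league_bb):
    """Strikeout/walk ratio stated as a winning percentage:
    cross-multiply the pitcher's strikeouts against the league's walks."""
    good = pitcher_so * league_bb
    bad = league_so * pitcher_bb
    return good / (good + bad)

# A pitcher with 200 strikeouts and 50 walks, in a league with
# 15,000 strikeouts and 10,000 walks (made-up totals):
print(round(kbb_winning_pct(200, 50, 15000, 10000), 3))  # 0.727
```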

              I am proud of that system, but it ignores home runs, hit batsmen, and wild pitches—and balks; I forgot balks.   We don’t want to ignore these things.  

              My second step, then, was to create a "damage number", which is

              Walks * 2, plus

              Hit Batsmen * 2, plus

              Home Runs * 15, plus

              Wild Pitches, plus

              Balks.

              That’s fairly straightforward, and I’ll trust you to figure it out for yourself.   A Home Run is about 7 to 8 times as damaging as a walk; I pegged it at 15 to 2. 
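
              In code, the damage number is a one-liner.   This is just a direct translation of the list above, with argument names of my own choosing:

```python
def damage_number(bb, hbp, hr, wp, bk):
    """The "damage number": walks and hit batsmen doubled,
    home runs weighted at 15, wild pitches and balks at 1."""
    return 2 * bb + 2 * hbp + 15 * hr + wp + bk
```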

              Those are the "bad" things that a pitcher does, the "loss" things.   For the "good" things a pitcher does, the "win" things, we rely primarily on strikeouts, but I didn’t want to rely entirely on strikeouts.    I hold to the old theory that a pitcher who gets people out has done something, even though he may be dependent on the fielders for the exact numbers.

              So anyway, for the "positive" number for a pitcher, I use simply 4 times strikeouts, plus outs recorded by the pitcher (thirds of an inning pitched).    Now we have a positive and negative number for each pitcher.

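              The positive number is equally simple.   A sketch, again with names of my own choosing, counting outs recorded as thirds of an inning pitched:

```python
def good_number(so, outs):
    """The "positive" number: 4 times strikeouts, plus outs recorded.
    For a pitcher with 210.1 innings pitched, outs would be 631."""
    return 4 * so + outs
```
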
              But a positive and negative number for each pitcher is not enough to move on to a winning percentage.   To do that, the positives and negatives must be balanced against context.  

              My initial assumption was that I would balance the pitcher against the league to create a "pure pitching winning percentage", and then compare the pitcher’s winning percentage in one season to his winning percentage in another season to see whether he had exceeded expectations.    But then I realized. . .the league is just really an intermediary there, isn’t it?   I mean, there is SOME information contained by the "league" totals, because the league totals do change a little bit from one season to another.    But what we really are trying to do is to compare one of the pitcher’s seasons to another one.  

              It then occurred to me that I should simply compare the pitcher to himself.   This can be done by cross-multiplying the pitcher’s "good" numbers from the season with his "damage" numbers from his career, and his "good" numbers from his career with his "damage" numbers from the season.   If the ratio of good to damage is better in the season, this leads to a figure higher than .500.   If the ratio for his career is better, it yields a number less than .500.    I will call this the Damage Winning Percentage.  I mentioned Eric Gagne in 2003 in the first article in this series.   Gagne in 2003 had a winning percentage—for that season compared to his career—of .781.   This is the highest figure ever for a Cy Young Award Winner, and one of the highest percentages ever posted.     That was what I meant by saying that Gagne had a .781 winning percentage when pitching against himself.    It means that he was a great deal better in 2003 than he was for his career as a whole.  
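
              A sketch of the Damage Winning Percentage, then, is the same cross-multiplication as before, with the pitcher’s career standing in for the league.   (Whether the career totals should include or exclude the season itself is left here to whoever calls it.)

```python
def damage_winning_pct(season_good, season_damage, career_good, career_damage):
    """Cross-multiply the season against the career: over .500 means the
    season's ratio of good to damage beat the pitcher's career ratio."""
    wins = season_good * career_damage
    losses = career_good * season_damage
    return wins / (wins + losses)
```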

              I was quite delighted to stumble upon this method, and my first thought was that this would be all that I needed to mark each season of a pitcher’s career as a good season or a bad season.   Since the pitcher is compared to himself, every season in which he has a "damage winning percentage" over .500 is a good season by his own standards, and every season when he is under .500 is a bad season for him.  

              Alas and alack. . .what the hell is "alack", anyway?   What does alack do, when it is not being partnered with alas?

              Alas, this proved not to be the case.    If I simply label all the +.500 seasons (by this method) as "good" seasons and all the sub-.500 seasons as "poor" seasons, we get a certain number of completely absurd results.     To cite two extreme examples: Sandy Koufax in 1961 and Roger Clemens in 1986.   Koufax’ breakthrough season was 1961.   After years of struggling, after finishing 8-13 in 1960, walking 100 batters in 175 innings, Koufax won 18 games in 1961, striking out 269 batters, which at that time was considered a "modern" National League record for strikeouts in a season.   (Some National League pitchers had struck out more in the 19th century, but none since 1900.)

              At the time, Koufax’ season was considered not only successful, but sensational—but later on, Koufax showed us what sensational really was.   His later performances were SO sensational that, by the end of his career, the 1961 season was below his career norms—thus, by this method a "bad" season.  

              Well. . .can’t have that.   An even worse example is Roger Clemens in 1986.   Clemens won 24 games, lost 4, won the Cy Young Award, struck out 20 batters in a game, and had what seemed at the time to be a sensational season.    But Clemens had many wonderful seasons later on, and by the end of his career his 1986 season, in terms of strikeouts, walks, home runs and such, was actually just a hair below his career norms, creating a "damage winning percentage" for the season of .497.

              Well, that’s absurd.   In order to see whether the pitcher has had a good season, you have to sequence his seasons, so that you can see what was expected of the pitcher before the season started.

              To establish expectations for each season, I first convert each season into a won-lost record.    I divide the pitcher’s innings pitched by nine to give me a "games" number, and then multiply the games by the Damage Winning Percentage, so that we have a won-lost record for each season. . .a won-lost record in which the "opposition" is the rest of the pitcher’s own career.     It’s actually a very logical system.
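
              That conversion, sketched in code (names mine):

```python
def season_record(innings, dwpct):
    """Turn a season's Damage Winning Percentage into a won-lost record,
    using innings pitched divided by nine as the "games" number."""
    games = innings / 9
    return games * dwpct, games * (1 - dwpct)
```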

              Anyway, once we have won-lost records for each season, we sequence them, and calculate the pitcher’s expected won-lost record for the next season in this manner.   Let’s assume the pitcher has been in the league for three years, 1950, 1951 and 1952.   Then his expected winning percentage for 1953 would be his winning percentage from 1950 to 1952, but with 1950 weighted at one, 1951 weighted at two, and 1952 weighted at three.

              If the player has only been in the league one year or two, that’s not really a problem; you can figure that out.    But what if he’s a rookie?

              You need to put some "filler" in the formula so that you don’t wind up dividing by zero for a new pitcher, and also so that you don’t wind up with an expected winning percentage of .000 for a pitcher who has no strikeouts and something negative on his ledger.

              OK, so we add a little bit of "ballast" into the process so that a pitcher doesn’t start out at .000 or with an error number.    But where do we start him out?

              I had to do some research here, but I did learn something really interesting.     The question I had to answer is: what is the expected winning percentage of all rookie pitchers, by this method?    I ASSUMED that the answer would be something less than .500, maybe .400, maybe .450.    I assumed that because I assumed that rookie pitchers did not pitch as well as they would in the rest of their careers.

              Which turns out to be not true!   It turns out that rookie pitchers actually have BETTER ratios of strikeouts to bad stuff than they will have in the rest of their careers.    The won-lost record of all rookie pitchers in history (by this method) was 29,959 wins, 28,077 losses—a .516 winning percentage.  

              Rookie pitchers, of course, rarely or never reach the heights of the best pitchers, the Pedro Martinezes and Randy Johnsons and Greg Madduxes of the world.   Ordinarily, it takes a year or two for a pitcher to hit his stride in the major leagues.

              But on the other hand, many, many second-year pitchers go backward, rather than forward.    It turns out that more of them go backward than go forward.   I didn’t know that before.  

              That simplifies one problem.   We can start pitchers out at .500.   We can do this by just adding 3 wins and 3 losses to the process we had before.   Let’s do Robin Roberts, 1953, for purpose of illustration.    Roberts had a damage winning percentage of .505 in 1950, .573 in 1951, and .597 in 1952.   Given the enormous numbers of innings that Roberts pitched, that makes a damage won-lost record of 17-17 in 1950, 20-15 in 1951, and 22-15 in 1952.

              Weighting those by one, two and three, that makes 17-17 for 1950, 40-30 for 1951, and 66-45 for 1952—a total of 123 wins, 92 losses.   We add three wins and three losses; that makes 126 and 95.    That makes an expected damage winning percentage, for Roberts in 1953, of .570; actually it is .573 if you carry some extra decimal points.    So Robin Roberts has an expected damage winning percentage for 1953 of .573, repeating once again that this number does not compare Roberts to an average pitcher, but to the rest of his career.   What we are really saying is that Roberts at the end of the 1952 season was at a relatively high point of his career.   He was pitching some of the best ball of his career—as opposed to the end of the 1961 season, when his damage winning percentage was down to a career-low .406.  
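
              Here is a sketch of the whole expectation step, with the Roberts numbers as a check.   The weights above are defined for a three-year history; the sketch assumes the same pattern (1, 2, 3, and so on, with the most recent season weighted highest) continues for longer careers.

```python
def expected_dwpct(prior_records):
    """Expected Damage Winning Percentage from prior seasons' (wins, losses)
    records, listed oldest first.  The 3-3 "ballast" starts rookies at .500."""
    wins, losses = 3.0, 3.0
    for weight, (w, l) in enumerate(prior_records, start=1):
        wins += weight * w
        losses += weight * l
    return wins / (wins + losses)

# Robin Roberts entering 1953, using the rounded records above:
# (123 + 3) wins against (92 + 3) losses = 126-95 = .570
print(round(expected_dwpct([(17, 17), (20, 15), (22, 15)]), 3))  # 0.57
```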

              Later in my work, I confirmed that rookie pitchers actually outperform the rest of their careers.   It turns out that the overall expected damage winning percentage of all pitchers, over time, is a little bit UNDER .500.   Why does that happen?

              It’s because rookie pitchers, on average, are a little bit BETTER than they will be later in their careers.   If pitchers started out at a low point and worked up, then the average pitcher, comparing expectations to actual performance, would be slightly positive.   In fact, he is slightly negative—which means that pitchers over time go down, rather than up.

              Anyway, having done all of that, I multiplied the difference between the pitcher’s expected and actual damage winning percentages by the outs recorded by the pitcher (thirds of an inning pitched).   If the result in absolute terms was less than 5.0, I marked the pitcher as "neutral".   If he was +5, that’s a "good" year; -5, that’s a bad year.
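
              As a first pass, then, the season label looks like this (a sketch; the special rules that come later are not in it yet):

```python
def first_pass_label(expected, actual, outs):
    """Label a season by the gap between actual and expected Damage
    Winning Percentage, scaled by outs recorded."""
    score = (actual - expected) * outs
    if abs(score) < 5.0:
        return "neutral"
    return "good" if score > 0 else "bad"
```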

              OK, so that takes care of the problems with Sandy Koufax in 1961 and Roger Clemens in 1986; those now show as "good" seasons, which they obviously should.     But now we hit another problem.    You wouldn’t believe who turns up as having a bad season now.

              Lefty Grove in 1931.   Lefty Grove in 1931 was 31-4 with a 2.06 ERA, and won the Most Valuable Player Award.    He turns up in this system as having a disappointing season.   Less than expectations.

              OK, now why in the hell does that happen?

              It happens because the assumption that everything a pitcher does is reflected in his strikeouts, walks, home runs and such is not absolutely true.   Pitchers actually do a lot of other things.   They get ground balls; they prevent stolen bases.   They pitch well at key moments of the game, or they don’t.   They do a whole lot of things other than get strikeouts and issue walks and give up home runs.

              For Lefty Grove in 1931, you can see that by strikeouts, walks and home runs allowed, his 1931 performance was not the equal of his previous seasons.    In 1931 he dropped off by 31 strikeouts, from 206 to 175, while walking two more hitters than he had in 1930.   (He was also great in 1930.)      He gave up 10 home runs in 1931, as opposed to 8 in 1930 and 8 in 1929.

              So by THOSE measures, he actually WAS down in 1931.    He has an actual damage winning percentage for the season of .571, but with an expectation of .582.   It’s just that the proposition that all a pitcher does is strike people out and issue walks is not true; that’s all.   (Some of you will leap to the conclusion that this is a 1930/1931 thing, since 1930 is a famous big-hitting season.   But that’s actually backward.   The fact that there wasn’t quite as much hitting in 1931 as in 1930 should have HELPED Grove meet expectations in the strikeout/walk/home run categories, not hurt him.)

              Well, obviously, the conclusion that Grove had a poor season in 1931 is completely intolerable.   It’s not a COMMON problem in the system, but we can’t have it; we had to put in a special rule to make sure that sort of thing doesn’t happen.   The special rule requires that we pay SOME attention to runs allowed and ERA.   Old-school stuff.

              Well, eventually I made the system work.    I put in a rule that if a pitcher was 20 runs better than the league average, and had a damage winning percentage over .500, that was automatically a "good" season, regardless of expectations.   Oh, neutral seasons. . .I didn’t finish explaining about that.   It’s different than it is for hitters.   If a pitcher pitched less than 30 innings, it was automatically a neutral season, no questions asked.    But if a pitcher pitched 30 or more innings, he is still marked as "neutral" if he meets two conditions:

              1)  the discrepancy between the pitcher’s expected and actual winning percentages, multiplied by his outs recorded, is less than five in absolute terms, and

              2)  his earned runs allowed are within 10 of the league average, given the innings he has pitched. 

              In other words, a pitcher has a neutral season if he is BOTH league-average, based on runs allowed, and does almost exactly what he is expected to do, based on his strikeouts and walks and home runs.     A pitcher CAN pitch 200 innings and still be marked as neutral, although he really has to thread the needle for that to happen.   The last time it happened was Mark Buehrle in 2013.   He was almost exactly league-average, and also did almost exactly what was expected of him; he pitched 204 innings, but it’s a neutral season, neither good nor bad.
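
              Putting the special rules together, the final classification looks roughly like this.   The parameter names are mine, and the one case the rules above don’t spell out is flagged in a comment:

```python
def classify_season(expected, actual, outs, runs_better_than_league, er_vs_league):
    """Final season label, paraphrasing the rules above.
    runs_better_than_league: runs allowed better than the league average;
    er_vs_league: earned runs relative to league average for his innings."""
    if outs / 3 < 30:
        return "neutral"                 # under 30 innings: automatic, no questions asked
    if runs_better_than_league >= 20 and actual > 0.500:
        return "good"                    # the special rule that rescues Grove in 1931
    score = (actual - expected) * outs
    if abs(score) < 5.0 and abs(er_vs_league) <= 10:
        return "neutral"                 # near expectations AND near league average
    # A small score with runs far from league average is not spelled out
    # above; labeling by the sign of the score is my assumption.
    return "good" if score > 0 else "bad"
```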

              OK, well. . .it’s a long explanation, and I appreciate your sticking with it.   Assuming that anybody did.    Thanks for reading.  
