
The Target Score

February 15, 2019
 

 


              In yesterday’s article I led you through a series of steps to arrive at an "Expected Game Score" for each pitcher in each start, based on what team he is pitching against and what park they are playing in.  Let us call the expected Game Score the Target Game Score or Target Score.   If the pitcher does THAT well or .01 better, he has done better than a league-average pitcher would have done in the same circumstances, on average, so he has had a good game.  If he falls short of that target, then it wasn’t a good game. 

              This is a new method for me; I don’t think I have done this one before, although I remember doing something similar one time maybe six or seven years ago.  

              My next thought, then, was that perhaps we could consider any game in which a pitcher reached the Target Score as a Win, and any game in which he failed to reach the Target Score as a Loss.   There is an obvious problem with that, which is that it would assign the starting pitcher a win or loss in every start, which makes won-lost records very different from real-life won-lost records.  A pitcher in 36 starts would go something like 19-17, which doesn’t really happen very often.

              I "fixed" that problem by making a rule that a pitcher was credited with an "Alternative Win" or "Alt-W" if his Game Score exceeded the target by 6.5, and was charged with an Alt-L if his Game Score fell short of the target by 5.5.   The numbers were jiggled so that the total number of wins in the data would match the actual total of starter wins, or come very close to it, and the same with losses.  
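For concreteness, the Alt-W/Alt-L rule described above can be sketched in a few lines of code. This is only my sketch, not code from the original study; the thresholds are the ones given in the text, and the function name is mine:

```python
def alt_decision(game_score, target_score):
    """Classify one start under the (ultimately abandoned) Alternative
    Win/Loss rule: an Alt-W if the Game Score beats the Target Score by
    6.5 or more, an Alt-L if it falls short by 5.5 or more, and no
    decision otherwise."""
    margin = game_score - target_score
    if margin >= 6.5:
        return "W"
    if margin <= -5.5:
        return "L"
    return None

alt_decision(57, 50.0)  # "W": beat the target by 7
alt_decision(52, 50.0)  # None: no decision
alt_decision(40, 50.0)  # "L": 10 short of the target
```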

              But this method didn’t work.   Here’s what happened.  The method in some cases comes close to matching actual records; Denny McLain in 1968 (31-6 in real life) has an Alternative Won-Lost of 31-4, and Ron Guidry ten years later (25-3) has an Alternative Won-Lost of 27-3.   But Sandy Koufax in 1965 and 1966 (26-8 and 27-9) goes to 32-3 and 32-5.  He also would have 30 wins in 1963.   Randy Johnson has records of 29-1, 29-5, 28-3, 27-3 twice, 25-2 and 22-0.   Steve Carlton in 1980 is 31-1.  Nolan Ryan in 1977 (actually 19-16) has an alternative won-lost record of 29-4. Gibson is 29-1 in 1968. 

              Those kinds of guys just don’t have "bad" games very often; maybe one to five times a year they will get hit, but not very often.   We wind up with non-representative won-lost records.

              We wind up with non-representative won-lost records because we made a bad assumption.   We assumed that whenever you beat the Target Score by 6.5 points, you deserve 100% of a win.   But, of course, if your Target Score for a game is 50 and you have a Game Score of 57, you’re not going to have a winning percentage of 1.000.   You’re actually going to have a Winning Percentage of .617, which is a good winning percentage, but it isn’t 1.000.  

              The (failed) alternative-win system assumes that having a Game Score of 57 with a Target Score of 50 is the same as having a Game Score of 95 with a Target Score of 50.   Of course it is not the same.  That’s why the Alt-Win system failed.

 

Deserved Won-Lost Records

              I moved on, then, from Alternative Won-Lost Records to Deserved Won-Lost Records, or D-Wins and D-Losses.  

              The data shows that a pitcher who has a Game Score of 57 in a game in which he has a Target Score of 50 (57/50) has a 74.7% probability of getting a decision, and a .617 expected winning percentage if he does get a decision. 

              If a pitcher goes 57 over 50, then, we credit him with .461 Deserved Wins, and charge him with .286 Deserved Losses.  That gives him a .617 winning percentage, and 74.7% of a decision.  Totaling up each pitcher’s deserved wins and losses for the season, you have his deserved won-lost record.
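The bookkeeping for a single start can be sketched in code, using the 57-over-50 figures from the text (the function name and shape are mine, not part of the original method's presentation):

```python
def deserved_record(p_decision, win_pct):
    """Split one start into Deserved Wins and Deserved Losses, given the
    probability of getting a decision and the expected winning percentage
    if a decision is reached."""
    d_wins = p_decision * win_pct
    d_losses = p_decision * (1.0 - win_pct)
    return d_wins, d_losses

# The 57-over-50 start: a 74.7% chance of a decision, and a .617
# expected winning percentage given a decision.
dw, dl = deserved_record(0.747, 0.617)  # about .461 wins, .286 losses
```

Summing these fractional wins and losses over a season gives the pitcher's deserved won-lost record.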

              These winning percentages never reach 1.000.   There are 870 games in my data in which the starting pitcher exceeded his Target Score by 40 points or more and also received a decision, the top one-quarter of one percent of games.  Those pitchers won 843 of those games and lost 27, a .969 winning percentage.   But no matter how well you pitch, it is always possible that the other guy will pitch a shutout.   The highest expected winning percentage that you can have in a game is .974. 

 

Taking Stock of What We Have

              I hope you guys know me well enough to know that I am not going to claim that we have a perfect system for evaluating starting pitchers here, or even (necessarily) that it is better than any other system for evaluating starting pitchers.   But let me point out, before I share the data with you, that this approach does have multiple and serious analytical advantages over some other methods of evaluation.

              First, this approach puts in Park Effects adjustments game by game, rather than assuming that all teammates have the same park effect.  A pitcher making 34 starts in a season might reasonably often have a 19-15 split in the home/road starts, and occasionally might have a 20-14 split or higher.  If the park effect is meaningful for those pitchers, you have a more accurate evaluation if you put in the park effect game by game, rather than applying one park effect to every pitcher on the team.

              Second, this approach adjusts for the quality of teams faced by the pitcher, so that if one pitcher is matched up against the better teams and one is not, the one who faces the better competition has an adjustment.

              Third, this approach (in the end) looks at the game-by-game impact of the pitcher’s performance, rather than the cumulative impact, thus giving an advantage to a pitcher who is consistently good, over a pitcher who is sometimes brilliant.  I’ll discuss that more two segments from now. 

              Fourth, this approach considers and holds the pitcher accountable for a fairly wide range of categories—hits, walks, strikeouts, earned runs allowed and runs allowed—and places values (weights) on each one, rather than relying on strikeouts and walks, ignoring runs, or relying on runs allowed, ignoring strikeouts and walks.  I will acknowledge later that there is a problem with this, in that the values are not established by proper scientific methods, but, for example . . . ERA, or any system relying strictly on ERA or runs allowed, implicitly places zero value on hits.

              Let’s take these two games for contrast.   Walter Johnson shut out the Chicago White Sox on May 23, 1924, beating them 4 to nothing, and again on May 11, 1925, beating them 9 to nothing.   But in the first game, Johnson struck out 14 hitters, walked only one, and gave up only one hit, making a Game Score of 98.  In the second game he struck out only 4 batters, walked two, and gave up five hits, making a Game Score of 79.  The question is, should these two games be treated the same, or differently?

              If you base your analysis strictly on runs allowed, both games are the same.   But if you look at Run Elements, rather than just Runs, they’re quite different. 
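The Game Score weights behind those two numbers can be sketched in code. This is a minimal sketch of the original Game Score formula, assuming a complete game (exactly 27 outs) so that partial-inning details can be ignored; both Johnson games come out as stated above:

```python
def game_score(outs, k, h, bb, er, unearned):
    """Original Game Score: start at 50; add one point per out and per
    strikeout, plus two for each full inning completed after the fourth;
    subtract two per hit, four per earned run, two per unearned run,
    and one per walk."""
    innings_after_fourth = max(0, outs // 3 - 4)
    return (50 + outs + 2 * innings_after_fourth + k
            - 2 * h - 4 * er - 2 * unearned - bb)

# Walter Johnson's two shutouts of the White Sox:
game_score(outs=27, k=14, h=1, bb=1, er=0, unearned=0)  # May 23, 1924: 98
game_score(outs=27, k=4, h=5, bb=2, er=0, unearned=0)   # May 11, 1925: 79
```

Note that the half-weight for unearned runs discussed below is visible in the weights: minus four per earned run, minus two per unearned run.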

              Old line analysis, of the type that was dominant before sabermetrics, focused only on the Runs the pitcher allowed, ignoring the Run Elements.   Some modern analytical methods still do that, or do that in effect.   But suppose that you did that for hitters.  Would you say, for a hitter, that these four hits don’t count, because they didn’t lead to any runs, or that this walk that he drew doesn’t count, because it didn’t lead to a run?

              If you did that, you would be left evaluating hitters by runs scored and RBI.   My point is that to do so for pitchers is not entirely different from doing so for hitters.   This method, the method I am advocating here, looks not only at the runs in each game but the run elements in each game—not perfectly weighted, I agree, but weighted.  

              Some methods treat un-earned runs the same as earned runs.   Other methods completely ignore un-earned runs, blaming them on the defense.   This method gives weight to the un-earned run, but only one-half the weight of the earned run.   I believe that is a better approach.  

              Now let’s look at the weaknesses of the approach. 

              First, we are missing data for many starts before 1958, so we are unable to use this method to evaluate those pitchers.

              Second, many pitchers over time (and almost all pitchers from 1920 to 1960) were used both as starters and relievers.  Since this method applies only to starts, we are unable to look at the full picture of the season for those pitchers.

              Third, our method (Deserved Wins and Losses) generates output which is not directly comparable to the output of other methods such as Win Shares and WAR, although we could and will use the method to derive a Starting Pitcher WAR based on Deserved Wins and Losses. 

              Fourth, while our method applies weight to every event, these weights are not meticulously created, or scientifically accurate.   They’re just what was necessary for the purpose I was pursuing when I invented Game Scores, 35 years ago or whenever it was. 

              When I developed the system in the early 1980s, it was (at the time) widely criticized because I gave weight to a strikeout, as opposed to another out.  "Why should a pitcher get extra for a strikeout," people asked at the time, "rather than a fly ball or a ground ball?   As long as he gets the batter out, what difference does it make?"

              Since then, history has swung in favor of the decision to give extra credit for a strikeout.   I think almost everybody understands now why it is in fact appropriate to give extra credit for a strikeout.   There remains, however, the issue of whether I may have given TOO MUCH extra weight to the strikeout.  I rather think that, for this purpose, for the purpose of the Target Score approach, the strikeout probably IS over-valued, and this probably does marginally affect some conclusions.  Comparing Nolan Ryan to Tommy John, or Randy Johnson to Greg Maddux, I would be concerned that the over-valuing of strikeouts might bias any conclusion we would reach.   If it isn’t an unusual case, like Ryan or TJ or RJ or Maddux, then I don’t think it’s a real issue, and I wouldn’t worry about it, myself.

 

Back to 1961

 

              So back to the 1961 season, which we talked about with regard to Jack Kralick and Curt Simmons in articles that were actually part of this series, although they were published earlier as separate articles.  (As this series goes on, I will re-publish those articles, so that you will have the opportunity to read them in context.)  But who was the best pitcher in the majors in 1961?

              On the level of "Cumulative Margin", the number one pitcher in the majors in 1961 would be Camilo Pascual of the Minnesota Twins.  I know I have reached THAT conclusion before, using a methodology very similar to this one.   If you rummage through the articles on this site, you can find an article in which I made many of the same adjustments that I made here, and concluded that Pascual was the best pitcher in baseball in 1961. 

              I’m not going to reach that conclusion this time; we’re not at the finish line yet.  But let’s take on this question.   Jack Kralick and Camilo Pascual were teammates in 1961, with similar records.  They made 33 starts each and gave up 97 earned runs each.   Pascual was 15-16 with a 3.46 ERA; Kralick was 13-11 with a 3.61 ERA, although, as we know, Kralick allowed fewer TOTAL runs; a higher percentage of his runs were simply earned rather than un-earned.   So the question is this: if it is not credible to argue that Kralick was the best pitcher in the league, his record merely disguised by park effects and run support, why is it credible to make that argument for Pascual?

              Two reasons.   First, Pascual was probably the best pitcher in the American League in 1959, was a highly effective, top-3 American League pitcher in 1960 although he missed starts with injury, won 20 games in 1962 and was 21-9 with a 2.46 ERA in 1963.  He led the American League in WAR for pitchers in both 1959 and 1963.  In 1959 he led the American League in WAR, period, finishing ahead of the ERA leader (Hoyt Wilhelm), ahead of Mickey Mantle (third) and the MVP, Nellie Fox (fourth).  He was a great pitcher in 1959, 1960, 1962 and 1963; it is not such a stretch to argue that he was the best pitcher in the league in 1961, but merely happened to have a combination of park and run support which made it look like he wasn’t.

              Second, Pascual led the American League in strikeouts in 1961, and gave up 205 hits in 252 innings, whereas Kralick had a nothing strikeout rate and gave up 257 hits in 242 innings.   Most people realize now that strikeouts are not Christmas decorations; they actually matter.   

              I don’t have any doubt that Pascual was a better pitcher than Jack Kralick in 1961, as he was in every other season.  Pascual threw eight shutouts in 1961—eight.   That’s a big number.  Even back when complete games were common, a lot of people would lead the league in shutouts with four.  But it’s a very good question.   If Kralick isn’t a credible candidate for the best pitcher in the league, is Pascual? 

             

Consistency

The impact of pitching well on getting a win, or on your team’s getting a win, is not a straight-line impact.   To begin with, one might assume that if you meet the Target Score for a game—that is, if you pitch as well as an average pitcher might be expected to pitch in the same circumstances—you should have, and your team should have, an expected winning percentage of about .500.   This is not quite true.  If you exactly meet the Target Score for the game, your expected winning percentage for the game is .450.   Your team’s expected winning percentage is higher, up to a little short of .500, but if you just meet expectations and don’t exceed them, there’s a good chance that the bullpen will get the decision. 

To get an expected winning percentage of .500, you actually have to exceed the Target Score by 1.43 points.   If you exceed the Target Score by 5.00 points, you have an expected winning percentage, as a starting pitcher, of .572.    Those five points in your Game Score—basically one clean inning—increase your expected winning percentage by 122 points.

But does EVERY five points of Game Score improvement increase your winning percentage by 122 points?   Obviously it can’t.   That would mean that, if your Game Score was 25 points above the Target Score, your expected winning percentage would be greater than 1.000, which is of course impossible. 
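A quick back-of-the-envelope check, using only the figures given above, shows why the straight-line assumption collapses:

```python
# Meeting the Target exactly is worth a .450 expected winning percentage,
# and the first five points of margin were worth 122 additional points.
# Extending that slope linearly quickly produces impossible values:
base, per_five_points = 0.450, 0.122
for margin in (0, 5, 25):
    linear_projection = base + per_five_points * (margin / 5)
    print(f"+{margin}: {linear_projection:.3f}")
# +25 projects to 1.060, above the 1.000 ceiling, so the real curve
# has to flatten out as the margin grows.
```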

I am making a point here which many, many people have made before in other forms, and which most of you, I would guess, already know to be true:  That pitching well in a game has the greatest impact when the expectation is near .500.   The Target Score for a game is almost always near 50.   If the Target Score is 50, the difference in the pitcher’s expected winning percentage between hitting 55 and 50 (Game Score) is enormous.   The difference between 95 and 90 is tiny, tiny, tiny.   If your game score is 70, 75, 80, you’re probably going to win the game anyway.   At a Game Score of 70, a pitcher’s expected winning percentage is about .825.    There’s not that much room for it to go higher. 

The point is that for a starting pitcher, consistently good performance is more efficient than occasionally brilliant performance of the same overall quality.   This is not a universal truth in regard to brilliance versus consistency.  For a Hall of Fame candidate, for example, the opposite is true:  brilliant seasons count more—and should count more—than good seasons adding up to the same total.   The reason this is true is that BIG seasons, in a career, have more impact on your team’s pennant chances.   But for a starting pitcher, consistency is better than inconsistent dominance. 

A few articles back in this series, when I was listing the positives and negatives about this line of analysis, I listed "consistency" as one of the benefits of this line of analysis.   This article is trying to explain why consistency is an actual benefit which should be measured.  If you evaluate a pitcher’s season by his season totals, you miss the issue of consistency.  A pitcher has 200 innings with an ERA of 3.60; you don’t know whether he has been consistent or not—but it actually does make a difference.   That’s why this is a benefit of this approach, one missing from many other approaches.

This also has to do with Camilo Pascual in 1961.   Pascual was the best pitcher in baseball in 1961, if you just compare his Game Scores to his Target Scores, and total up the margin.   But remember what I said:  Pascual threw eight shutouts in 1961.  

Pascual in 1961 pitched a lot of brilliant games.   Pascual in 1961 had 12 games in which his Game Score was +25 versus the Target Score, and 7 games in which he was +33 or more.    But he also had 12 starts in 1961 in which his Game Score was below the Target.   That’s a little high, for a pitcher of that quality.  He had three starts in which he was 23 points or more below the Target Score.

Pascual was not the best pitcher in baseball in 1961, because he was inconsistent.   This is not a general observation about Pascual.   He was not inconsistent in 1959 or in 1963; in those seasons he had normal numbers of dominant games and off games.  But in 1961, he had a measurable and meaningful inconsistency in his game-to-game performance. 

 

 

Back to Deserved Wins

I made a little twist in the direction there that I wanted to be sure that you followed.   There is a Target Score for each start, and the pitcher’s "Deserved Wins" and "Deserved Losses" are based on how he does vis a vis the Target Score.   But I made it sound like it could have been "expected wins", and it isn’t exactly expected wins.   It is Deserved Wins, based on information derived from expected Wins. 

If you post a Game Score of 40 against the 1936 New York Yankees in a Bandbox park, that’s a good game, and you deserve credit for pitching a relatively good game under the circumstances.  But that’s not the same as saying you expect to win.   The Yankees had some good pitchers, too, a couple of Hall of Famers.   You pitch a good game; they pitch a good game.   It’s not expected wins; it’s actually deserved credit, expressed as a number of wins.   It’s a new concept for me.   Over time, deserved credit and expected wins should balance out. 

This pathway of analysis enables me to calculate, better than I have ever calculated before, how many games each pitcher deserved to win, based on a careful, start-by-start evaluation.   Based on this, we can identify the pitcher most deserving of the Cy Young Award, for example, in each league in each season, and those evaluations are as good as fWAR or rWAR or any other system.  But here’s the problem.  We have done this too many times.  I have done it too many times; other people have done it too many times.  One year in four, one year in five, this list will be different from the last list . . . who cares?   Who even knows?  That is the importance of a case like 1961, where the dominant system (rWAR) gets an eccentric answer in both leagues.  That puts us in a position to straighten it out, so to speak.   That gives us the opportunity to look again, to look for contrast.

Well, here’s what we’re going to do.  I’m going to start with the Deserved Wins and Deserved Losses for each pitcher each season.  From that, we can make a reasonable estimate of his Wins Above Replacement—and anyone who understands this stuff will tell you that that’s all that WAR is; it’s a reasonable estimate.   It’s not a fact; it’s not a precise calculation.  It’s a reasonable estimate.   We’ll form OUR reasonable estimate, this reasonable estimate, in this way:

DWin – ((DWin + DLoss) * .35)

Implicitly assuming a .350 replacement level.  There is a reason for using a little higher replacement level here than the consensus .294 or whatever it is, but let’s not get into it.   Then we’ll use this new WAR, which I will call D-WAR, to identify the pitcher most deserving of the Cy Young Award.  Then I’ll go to Baseball Reference, and I’ll check the leaders in pitching WAR.   If they’re the same, then we’ll just move on.   When they’re different, we’ll look at them, try to understand why they went in different directions, and I will debate with myself whether we have a more credible answer or a less credible one. 
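In code, the D-WAR estimate is a one-liner. The .35 replacement level is the one chosen above; the sample deserved record is hypothetical, for illustration only:

```python
def d_war(d_wins, d_losses, replacement_level=0.35):
    """D-WAR: Deserved Wins minus the wins a replacement-level (.350)
    pitcher would collect in the same number of deserved decisions."""
    return d_wins - (d_wins + d_losses) * replacement_level

d_war(18.0, 10.0)  # a hypothetical 18-10 deserved record: 8.2 D-WAR
```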

As this series of articles went on, I began to realize that My WAR, D-WAR, is calibrated a little bit higher than Baseball Reference WAR or FanGraphs WAR.   This wasn’t my intention; it was just kind of how it worked out. 

1921 AL—Red Faber  (both systems)

1921 NL—Burleigh Grimes (both)

1922 AL—Red Faber (both)

1922 NL—Wilbur Cooper (both)

1923 AL—George Uhle (my system) Urban Shocker (R-WAR)

Uhle went 26-16 with a 3.57 ERA in 358 innings.  Shocker was 20-12 with a 3.41 ERA in 277 innings.   In the American League in 1923 there were several pitchers of comparable dominance, no one clearly ahead.   R-WAR (Baseball Reference WAR) has the leader as Shocker (6.2), but with Uhle only a half-game behind, in fifth place at 5.6.   My data is missing six starts for Shocker (have 29 out of 35), plus my system doesn’t deal with relief appearances, and Shocker also made 8 relief appearances as was common at that time.   I accept that their evaluation is better than mine.

1923 NL—Dolf Luque (both)

Luque, a 5-foot-7 inch Cuban, was 27-8 with a 1.93 ERA, one of the greatest pitching seasons of the 1920s.   He leads easily by both systems.  R-WAR has him at 10.3.   I have him at 11.0, although I am missing data for two of his starts and his four relief appearances.

1924 AL—Walter Johnson (my system) or Howard Ehmke (R-WAR)

Ehmke was 19-17, 3.46 ERA, leading the league in losses, for a bad Red Sox team, also leading the league in innings pitched (315).   In his career he finished exactly .500, 166-166, although Baseball Reference has him at 11.8 Wins Above Average.  My data is missing 9 of Ehmke’s starts, plus he made 9 relief appearances that this system does not deal with, and I still have him third in the league in D-WAR, behind Johnson and Herb Pennock.   Walter Johnson was 23-7, 2.72 ERA, and led the league in wins (23), Earned Run Average (2.72), Starts (38), strikeouts (158), Shutouts (6), ERA+ (149), FIP (3.31), WHIP (1.116), fewest hits per 9 innings (7.6), strikeouts per nine innings (5.1), and strikeout to walk ratio (2.05 to 1), and was named the Most Valuable Player.  

Obviously Johnson would have won the Cy Young Award had there been such a thing.  Also obviously, I can’t argue the selection either way when I have missing data for both pitchers.  

              1924 NL—Dazzy Vance (both systems)

              Vance, the National League’s Most Valuable Player, had THE most dominant pitching season of the 1920s, finishing 28-6 with a 2.16 ERA.   I am missing data for two of his starts, but have him with a Deserved Won-Lost record of 23-4, whereas I have Luque in 1923 with a deserved won-lost record of 21-7—still easily the best in the league—and Walter Johnson, the American League MVP, with a deserved won-lost record of just 15-7, granting that I am missing more data for him, but you can’t get from 15-7 to 23-4 by adding more data. 

Vance, who led the National League in strikeouts every year from 1922 to 1928 and led in strikeout to walk ratio every year from 1924 to 1931, had probably the most dominant strikeout season of all time in 1924, relative to context.   Vance had 262 strikeouts.  The #2 pitcher in the league had barely over half that number (135), and he was the only other pitcher in the league who had one-third as many strikeouts as Vance.   Vance had 7.65 strikeouts per nine innings, almost three times the league average of 2.77 strikeouts per nine innings.

 

1925 AL—Herb Pennock (both systems)

The 1925 Yankees had a bad year.  Pennock was just 16-17, but both approaches agree that he was the best pitcher in the league—the only time in his career that he was.

 

Pennock Against Grove

On July 4, 1925, two Hall of Fame pitchers met in one of the most remarkable pitchers’ duels in major league history.  

The Yankees were having an off season.  Babe Ruth’s eating habits had caught up with him over the winter, and he was unable to play the first two months.   Through July 3 he was hitting just .263, and through June he had hit only 3 home runs, although he added two more on July 1 and one on July 2.   The Yankees, troubled by Ruth’s absence and other problems, had dropped 16 and a half games behind, with a won-lost record of 31 and 39.   For the rest of the season Ruth was the regular Babe Ruth, but the Yankees were not able to climb back into the pennant race. 

Philadelphia, on the other hand, was having their best season in a decade, with a record of 45 and 24, just two games out of first.  They hadn’t finished over .500 in ten years, and had been in last place almost all of those years.   45-24 was borderline incredible.

Pennock had come up with the A’s in 1912, when they were the powerhouse of the American League.  He had gone 11-4 for the A’s at the age of 20 (1914), but was sold that winter to Boston, where he teamed up with Babe Ruth and many others.  He was a minor part of the powerhouse in Boston, but by the time he rejoined Ruth with the Yankees in 1923 he had become one of the Yankees’ best pitchers, finishing 19-6 in 1923 and 21-9 in 1924.

Grove, on the other hand, was still trying to get established in the major leagues.   He had starred for four years in the International League, working for Jack Dunn, the same man who had discovered and nurtured Babe Ruth just a few years earlier.   Dunn had been forced to sell Ruth to the Red Sox in order to meet his payroll, but had regretted doing so.   He was determined not to repeat that mistake with Lefty Grove.  He had no intention of selling Lefty Grove to the major leagues.  He wanted to keep Lefty Grove for himself, for HIS team, the International League team in Baltimore. 

That was a controversial choice, to say the least.  The fight over whether a team like Baltimore should be allowed to keep a player in a minor league, or whether they should be forced to sell him to the majors, was one of the central stories in baseball in the early 1920s.   Eventually it became clear that Dunn would lose this fight, and would be forced to sell Grove to the majors.   Getting ahead of the curve at the last minute, Dunn sold Grove to the A’s for $106,000, the highest amount ever paid for a minor league player at that time.  It was only $19,000 less than the Yankees had paid for Babe Ruth five years earlier.

Grove, however, had not paid off in the early season.   Going into the game his ERA was 5.67.   The first game of the Fourth of July double-header would be the first outstanding game of his career. 

Through six innings Herb Pennock faced only 18 batters; Al Simmons had hit a single, but was caught stealing.   In the 7th inning he gave up a leadoff single; the runner made it to third on two outs, but became the first runner left on base by Pennock.   He gave up a two-out single in the eighth, retired the side in order in the 9th. 

Grove was having more trouble with the Yankees.   Babe Ruth singled in the first.   He walked Pee Wee Wanniger, a weak-hitting shortstop, in the third inning, and put him in scoring position with a Wild Pitch.  He walked Ruth in the fourth, and gave up two singles in the fifth inning.   He loaded the bases in the sixth, on two singles and a walk, and gave up a single in the 8th.   He gave up a single and a walk in the 9th, putting two runners in scoring position.

He did everything except allow a run to score, but it was 0-0 after nine.    Each team went 1-2-3 in the tenth inning, and again in the eleventh.   The A’s went in order again in the 12th; Pennock by that time had retired 10 in a row.  The Yankees got an infield single in the 12th, didn’t score. 

Pennock retired the Athletics in order once again in the top of the 13th; 13 in a row.  In the bottom of the 13th Ruth led off with a single.  Bob Meusel doubled, putting runners on second and third with none out.  An intentional walk to Gehrig loaded the bases. 

At this point an odd sequence occurred.  Whitey Witt pinch ran for Ruth at third base—and then Bobby Veach pinch ran for Witt.   It appears that Witt was injured at that point, and apparently very seriously injured, as he did not play again the rest of the year.  How this happened I do not know. Anyway, Grove worked his way out of it, what I call a Houdini.  Bases loaded, nobody out, and the Yankees didn’t score.  A 13th-inning Houdini by a starting pitcher working on a shutout; I would bet that has never happened again in the major leagues.

Both sides went in order in the 14th, the game still tied at 0-0, Pennock and Grove still on the mound.  In the 15th inning Jimmy Dykes hit a triple to right field.   Bobby Veach, who had pinch run for the pinch runner for Babe Ruth, was an old player who rarely played the field anymore, but now he had to.   He was in right field.  Veach threw to third, too late to get Dykes, who headed home, perhaps intending to make it an inside-the-park home run, or perhaps the ball had momentarily gotten away from the third baseman, Joe Dugan.   In either case Dugan threw home in time to nail Dykes at the plate. The game headed into the bottom of the 15th, still tied 0 to 0.

Bobby Veach singled leading off the bottom of the 15th, and Meusel bunted him to second.  Grove struck out Gehrig, his tenth strikeout of the game, the first time in his career he had struck out ten men.   Steve O’Neill singled to center, Veach scored, and the Yankees won the game, 1-0 in 15 innings. 

In 15 innings, Pennock faced only two batters over the minimum.  He gave up four hits; one man was caught stealing, and one man thrown out at home plate after hitting a triple.  He walked no one, struck out four.   His Game Score, 114, is the fifth-highest among the 329,988 games in my data, but in view of the quality of the opposing team, his performance scores as the second-best game in the data compared to the Target Score for the game. 

 

 

 

 
 
 

COMMENTS (2 Comments, most recent shown first)

mjhnyc
Another tidbit from the game story: the only extra-base hit Pennock allowed, Dykes's triple in the 15th, was "a longish fly to right. Veach, who had replaced Ruth, misjudged the hit and then fell going after the pill."
12:22 AM Mar 8th
 
mjhnyc
The explanation for the Whitey Witt situation, from the Yankees game story in the July 5, 1925, New York Times: "Huggins sprang a new one for the book by releasing Whitey Witt one day and using him as a pinch runner the next. Maybe Hug forgot about having given Whitey the pink slip. At any rate, a minute later Miller withdrew Witt and sent Veach in to do the necessary running." Leaving open the question of why Witt was in the dugout in uniform.
2:15 AM Mar 3rd
 
 