In college, around 1973, I wrote a short story called "The Pitch," the earliest piece of my writing I can honestly say I’m still sort of pleased with. It took place on the mound, narrated by a big-league pitcher in between pitches to a dangerous batter. The whole thing, in other words, transpired entirely between his last pitch and his next one. The story’s final line was "I’m going to take my time. I’m the thinking man’s pitcher."
There are a lot of places I could go with this (later, around 1978, I made an oil painting on canvas, also called "The Pitch," using Tom Seaver in a Cincinnati Reds uniform as my model. In 1987, I took this canvas, framed, to meet Tom Seaver at a talk he was giving to journalists in the State Museum in Albany, New York, where I almost accomplished two goals: getting his autograph on the ball in my painting and getting to bat against Seaver. I ended up accomplishing neither, but it’s a cool story, Bro, of how close I came to doing both that day.) But I’ll go this way instead: as of 1973, I hadn’t yet read Bill James, of course, but some of his revolutionary sabermetric ideas must have been in the air, or maybe his early work, which started coming out in print around that time, derived from the concept of baseball players getting edges over their opponents that were mostly mental rather than purely physical.
No one, certainly not Yogi Berra, could accurately quantify how much of baseball was mental, though Yogi’s estimate ("90% of the game is half-mental" translates into 45% being purely mental) is a starting point. (Assuming Yogi was saying "mental" and not "Mantle." Almost one-third to one-half of Yogi’s star teammate’s name is 100% "Mantle," depending on whether we count his full name as "Mickey Mantle" or "Mickey Charles Mantle." I suppose the opposite postulate, "90% of the game is half-physical," would be voiced by a teammate of a player named "Freddie Fezzicle.") But the idea of baseball having a strong cogitative function was, I think, just beginning to form in the 1970s, when Bill first invented sabermetrics.
Long before my fictional pitcher or that night watchman in Kansas came along, people had, of course, dwelt upon the crucial role that thinking plays in baseball, but I don’t believe that thinking had any special application to the way the game was played on the field, at least not as compared to how other games are played. If you were a "thinking man’s pitcher" in the decades before the 1970s, that just designated you as someone capable of strategizing during your pre-game warmup, or during down-moments in the game while it was being played at its normal speed. There have always been brainy baseball players, educated, articulate, thoughtful players, but they had to apply their in-game thoughts to the game as it was being played. What was happening at the time, and what sabermetrics went on to add to immeasurably, was that thinking during the game, players taking time to consider their full range of options carefully, began to be seen as extremely useful to winning the game, and eventually as essential.
I have proposed before my simplistic (yet I think effective) cure for overly lengthy baseball games: requiring batters to remain in the batter’s box for the duration of each at-bat, encouraging pitchers to throw pitches while the batter is still thinking about whatever-it-is that batters mull over between pitches. Forced to stand in the box continuously, a batter who really needs to adjust his batting gloves, or his jock, or his attitude, will need to balance that need against the risk that he might still be in mid-adjustment when the ball is thrown. This requirement would be virtuous not only because it would speed up the game beyond measure, but because it would restore the balance of physical and mental processes to the game’s origins as an athletic contest. I simply do not want to pay money to watch athletes thinking, and that is one thing that sabermetrics has done to ruin the game.
I probably don't need to add, at this point, that I believe sabermetrics to be the greatest enhancement of baseball (all sports, really) since the invention of the hot dog bun, and that it has added to my enjoyment of baseball, and my appreciation, in more ways than I can hope to name. But here’s my objection to this source of pleasure:
Something that makes winning more likely is not necessarily making the game more enjoyable to watch.
In fact, I will argue, most things that improve a team’s chances of winning take time. Often, lots of time.
Lots and lots of time.
Think about this a moment. (Ironic, right?) I would confidently guess that if you compared a random boxscore from, say, the 1940s to one from the 2010s, there would be many more players in the more recent boxscore. Many, if not most, of these will be relief pitchers, and many, if not most, of those will be relief pitchers inserted while an inning is in progress. I’m sure someone must have done a study measuring the precise increase in players, pitchers, and mid-inning pitching changes in every season ever played—in fact, let me see:
Yes. A rigorous examination of the available literature (academese for "a quick Google search") shows nothing immediately relevant. In 2010, a guy named Andy studied a sorta-kinda related subject in an article posted on Baseball-reference.com’s blog, available here: https://www.baseball-reference.com/blog/archives/6927.html I can’t access the full article (BBREF.com seems to have it archived somewhere-- only God knows where) but the summary is sufficient: "In 2009, teams averaged 22 pitchers and 38 1/2 hitters per season," Andy maintains, while "In the 1880s, teams averaged about 4 pitchers and 15 hitters per season." Since you couldn’t care less about the 1880s, Andy goes on to show that in "the 1950s and 1960s, numbers were steady at lower values: 15 pitchers and 35 hitters per year." Presumably, larger rosters result in more players available to substitute in a game, but roster size tells us only how many players are on teams over the course of a season, not how many are in the lineup over the course of one game. Hard to believe there are studies correlating ounces of beer sold to number of pitches thrown but not one study of how many players appear in a typical game, but that may just be my bad searching skills. So I did my own research.
Would it surprise you very much to learn that the number of pitchers’ appearances per game has more than doubled over the past 70 seasons? I tried looking at this in a number of ways, starting with getting a general sense that I was on to something by picking boxscores at random from 1947 and 2017 and comparing the number of players used in each game, especially pitchers. Unsurprisingly, that total was way up, from between 10 and 11 players in the boxscores I looked at from 1947 to over 14 per game in 2017, roughly 30-40% more, depending on how truly random the 20 or so boxscores I looked at were. (Some of that increase is due to an added player, the DH in half of the 2017 games, so we should be looking at the lower end of those percentages.) But, deciding that something more systematic was called for (I didn’t want to spend my life poring over boxscores to confirm what we all understand to be true), I looked at MLB’s total of pitcher appearances, which bbref.com conveniently provides per 180 innings of play, and found the 2017 figure to be 88. Per nine innings, then, each team used 4.4 pitchers. In 1947, by sheer coincidence, there were precisely 2.2 pitchers per 9 innings. Again, totally unsurprising, but nonetheless interesting to have it quantified so neatly: exactly twice the number of pitchers per game as there were 70 seasons ago.
This metric is much more extreme than it seems at first blush, because I’m interested in looking not at the total number of pitchers per game but at the number of relief pitchers per game. Each team’s starting pitcher doesn’t, of course, add to the delay for a new pitcher, because he’s there when the game begins, so subtracting one pitcher from each side of the ledger, we actually learn that in 1947 there were 1.2 pitching changes per game while in 2017 that number was 3.4, nearly triple the number of relief pitchers per game.
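The arithmetic above is simple enough to check in a few lines. Here is a minimal sketch, using only the figures quoted in this piece (88 appearances per 180 innings in 2017, and 2.2 pitchers per team per nine innings in 1947); everything else is plain division:

```python
# Sanity-check of the pitcher-usage arithmetic, using only the figures
# quoted in the text: 88 appearances per 180 innings in 2017, and
# 2.2 pitchers per team per nine innings in 1947.

def pitchers_per_team_game(appearances_per_180_innings: float) -> float:
    """Convert a per-180-innings figure into pitchers per 9-inning team-game."""
    return appearances_per_180_innings / (180 / 9)  # 180 innings = 20 team-games

per_game_2017 = pitchers_per_team_game(88)  # 4.4
per_game_1947 = 2.2                         # quoted directly in the text

# The starter causes no mid-game delay, so subtract him from each side
# to get the number of in-game pitching changes:
changes_2017 = per_game_2017 - 1  # 3.4
changes_1947 = per_game_1947 - 1  # 1.2

print(per_game_2017 / per_game_1947)  # exactly double
print(changes_2017 / changes_1947)    # roughly 2.83, "nearly triple"
```

The point of separating the two ratios is the one made in the text: total pitchers per game merely doubled, but the mid-game disruptions those pitchers cause nearly tripled.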
As each pitching change is preceded, often enough, by a consultation on the mound resulting in no pitching change, there is no precise upper limit on the number of game stoppages (plus strolls in from the bullpen, warmup pitches, etc.) these pitching changes engender—suffice it to say that there are well more than three times the number of such delays in 2017 than there were 70 seasons earlier.
These delays are due, directly or indirectly, to sabermetric thinking.
Of course, you may well argue that this number (and others) was creeping upwards by the time that Bill was sharpening his first pencil, and of course you’d be correct. But let us not forget that sabermetrics is not just the study of stats for the sake of studying stats, riveting as they may be. Sabermetrics is about the search for truth, and specifically for truths that lead to winning baseball games. Put more vulgarly, it is about the rejection of bullshit—the stuff that people like to say that may or not have any effect on winning or losing baseball games, but that they like to talk about because it’s such fun to keep our jaws flapping.
Winning baseball games is (I would argue—not sure about Bill) the ultimate goal of sabermetrics. If we were expending this kind of energy and verbiage on things that were incapable of having a positive effect on winning games, sabermetricians would face far more hostility from the general public than we in fact do. It’s only because so much of sabermetrics leads arguably or directly to winning games on the field that it has become as widely accepted as it has. And what I’m arguing here is that most innovations loosely or tightly affiliated with sabermetrics tend to add time and complexity to the game, towards that goal of winning.
Now, this isn’t necessarily the case: it might have been, for example, that years of sabermetric study showed that swinging at the first pitch is remarkably conducive to winning. The first pitch (let’s posit) is far and away the pitch that most reliably passes through the center of the strike zone, and so it is the pitch that batters are advised to whack away at. Free swingers, the Vlads and Yogis and Robertos of the world, might have formed the sabermetric model of perfection. Instead, as we understand, they are the oddballs, the exceptions to the rule, the mad geniuses, but in general, whacking away wildly at anything that approaches within a yard of home plate is now severely discouraged, and it's no longer a matter of personal taste or style. Swinging at lots of bad balls is now, simply, wrong. It leads away from winning.
But where did this idea come from? You might say "Tony La Russa" or "Earl Weaver" or whoever, but I’d say that ultimately this idea derives from the person who discovered the small but significant advantage gained by substituting a fresh pitcher, typically throwing from the proper orientation (i.e., lefty or righty) for the current batter, especially when that pitcher knows that his job is to come into the game at around that juncture, and that he will often not be asked to do more than to get that one batter out. I would say that person is the guy who invented sabermetrics, broadly speaking.
Now, like many a crackpot idea, it took a Tony La Russa to put it into practice, and to confirm that the crackpot idea was actually conducive to winning, but the postulation that the crackpot idea might be conducive to winning belongs to Bill James. Or to someone very like Bill James—Craig Wright, perhaps, or Rob Neyer or—you get the idea. Sabermetricians have come up with this idea, and many others, because they thought that maybe, just maybe, it would give an edge to teams willing to test it out. When such ideas were found, in fact, to correlate with winning, they became universally accepted.
Another example would be the lengthy at-bats that run up pitch counts and add to the length of games. When I started watching and playing baseball, it was considered neither good nor bad for a batter to take a lot of pitches. (We used to call out "Walk’s as good as a hit!" to a teammate who drew a walk, but I don’t think we actually felt that was true, just consoling to the poor doofus who failed to connect with the baseball. Now we understand that it’s pretty close to being absolutely true—a walk is some very high percentage as good as a hit.) What counted (we thought) was the result: if an at-bat ended with a batter returning to the dugout, that was a bad thing, but now if the batter wends his way back to the bench after fouling off six pitches, adding a dozen pitches to the opposing hurler’s pitch-count, he’s greeted as a hero who has accomplished a great feat. And I don’t mean to be mocking: for all I know, it IS an accomplishment to tack a dozen pitches onto the chart kept in the opposing dugout, perhaps a greater accomplishment than getting a hit on the first pitch would be. It may well be that driving the pitch-count up constitutes a genuine, tangible contribution to winning the game.
But with this truism accepted throughout baseball, long at-bats and long innings have become increasingly common. The strategy, from the perspective of winning games, may well be right. It may well be brilliant. But it sure does add to the length and the slow pace of games.
What I’m speculating about here has little to do with these two examples specifically, but rather with the correlation of sabermetric methods to the length of games in general. I can think of other examples that add to the length of games, but very few that reduce it. (I’ll name one, as a counter-example, in a few more paragraphs.) The upshot is that the innovations that make the pace of games slower, more contemplative, deliberately abusive of the fan’s desire to see action may well also lead directly to the teams practicing those innovations winning more games.
Imagine that someone quantifies that the single most important thing a pitcher can do to upset the batters’ timing (and thus improve his efficiency in getting those batters out) is to dawdle. Even if the home plate umpire scolds him and demands he pick up his pace, absent some massive punishment for dawdling, the scoldings will just add to the delay and the tension in the air, making the batters even more nervous as they wait for the pitch to be thrown, so the more dawdling the pitcher does, the more effective he will be. Even if the pitcher ultimately gets ejected from the game, that just means (in a future age when six or seven pitchers per game has become the norm) that one extra reliever gets used perhaps a batter or two earlier than he would be otherwise. Now imagine that other pitchers, seeing this pitcher’s successful technique, emulate his style, and imagine that MLB opts to allow this style of pitching to prevail. (It probably wouldn’t—MLB would no doubt enforce the 20-second clock at some point.) We would now have the tools to show how each additional second of dawdling shaves a few points of OPS+ off each plate appearance—sabermetrics, in other words, can now assure teams that there is a positive correlation between dawdling and winning, and whatever baseball does to discourage dawdling, that correlation never becomes negative.
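The dawdling hypothetical can be sketched as a toy model. To be clear, the coefficient below is invented purely for illustration; nothing in it is a real measurement, only the shape of the claim:

```python
# Toy version of the dawdling hypothetical. The shave-per-second
# coefficient is invented for illustration, not a real measurement.

OPS_PLUS_BASELINE = 100.0   # league-average opposing production
SHAVE_PER_SECOND  = 0.8     # hypothetical: OPS+ points lost per extra second of dawdling

def opposing_ops_plus(extra_seconds: float) -> float:
    """Projected opposing OPS+ after a given amount of extra dawdling per pitch."""
    return OPS_PLUS_BASELINE - SHAVE_PER_SECOND * extra_seconds

# If the model held, every added second would help the dawdler:
for secs in (0, 5, 10, 20):
    print(f"{secs:2d} extra seconds -> opposing OPS+ {opposing_ops_plus(secs):.0f}")
```

The uncomfortable feature of any model shaped like this is the one the paragraph describes: as long as the slope stays negative for the batter, the pitcher’s incentive to dawdle never goes away.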
In this imaginary construct, an outcome that is desirable to no fan, a game whose pace is even more torturously delayed than it has now become, is extremely desirable to teams seeking above all to win.
My point here, if I have one, is not so much that sabermetrics has already ruined baseball (that’s the clickbait part) but that in theory it could ruin baseball, in that its goal is to create a type of game that optimizes winning, while fans want to see a type of game that is entertaining to watch. These two desired ends have literally nothing to do with each other. It’s only a happy coincidence (coupled with some ignorance that sabermetric study has wiped out) that winning games and enjoying games have mostly meshed over the years, but less and less so as we have come to understand how to optimize winning.
When I speak of "ignorance that sabermetric study has wiped out," what I mean is that baseball tactics and strategy have always been based on intelligent evaluation and analysis, going back a century before Bill James was born. But that intelligence was never any more than hunches and guesswork—sometimes very keen hunches and guesswork, which occasionally led so directly to winning that they became incorporated into the game over the years. Such maxims as "Never put the winning run on base," for example, must have originated in somebody’s strong opinion that proved wise rather than in a solid, well-reasoned mathematical proof. But since the advent of sabermetrics (or maybe, to a limited degree, since the primitive, unsystematic methods of Allan Roth or Earnshaw Cook or someone like that), these hunches have been systematized into increasingly reliable laws. In fact, I would say that "Never put the winning run on base" has been reinforced by sabermetrics to the point of solidity—studies have discouraged the Intentional Base on Balls in general, underlining the validity of pitching to the potential winning run rather than choosing to walk him, no matter how feared a batter he might be. Now we KNOW (rather than just think or believe or fear) what the probable result of walking hitters intentionally will be.
By "law," of course, I mean something like "general rule"—obviously there are circumstances, such as Babe Ruth batting with a weak hitter on deck and no available pinch-hitters, where walking the winning run might make sense, so sound judgment would dictate varying from the rule in spots like that. "Not walking the winning run" is, by the way, the counter-example I referred to previously in describing the rare coincidence of winning baseball and enjoyable baseball: in this one instance, sabermetric thinking encourages managers to pitch to dangerous batters rather than walking them, which is also the time-saving, action-seeking outcome that most fans would want to see. But in the main, I’m arguing that this is a rarity: most sabermetrically dictated strategies discourage moves that seem to be high-risk, mainly because they ARE high-risk.
Have you ever rooted for a team that played your own preferred style of baseball, or for another team that played your own personally detested brand of ball? I grew up in the 1960s, rooting for Seaver’s Mets and Koufax’s Dodgers, so I came to a keen appreciation for pitchers’ duels: low-scoring, defensively oriented games in which winning depended on a few key hits, which were the type of game that the early-60s Dodgers teams and the late-60s Mets teams tended to win. Where runs are rare, every at-bat becomes vital to watch closely, whereas in a run-rich environment, you can’t get too excited about a three-run homer in the top of the second inning because it’s likely to get wiped out by a three-run homer in the bottom of the second. Anyway, this is just a matter of personal taste, formed by my own rooting interests growing up, but when in later years I found myself rooting for teams that scored (and gave up) runs wholesale, where 10-6 victories and 13-10 losses and other football scores became more common, it became hard for me to root for my team to run up the score, even though I knew that was how my team won games. I just didn’t like that type of game, though I still rooted for my team to win. Well, that’s similar to the effect that sabermetrics has had on my viewing pleasure: I understand perfectly how the strategies being employed make sense, I realize how running up pitch counts and running in LOOGYs and batters stepping out of the box between pitches to adjust every article of clothing they own increases by some small increment my team’s chances of winning, but I also recognize the possibility that winning baseball games may work against my real desire—to see a fast-paced game with a lot of daring, extremely athletic talent on display.
A better example of the enjoyment-vs.-victory principle may lie in the notorious tale of Carlton Fisk berating Neon Deion for not running out a fly pop, back in the day. (For those who forget the details, here’s an account of it: http://www.chicagonow.com/soxnet/2015/09/when-pudge-met-neon-deion/.) I instinctively sided with Fisk: I believe in "playing the game right" and "showing respect to your teammates and opponents by giving 100% effort," and all that, but what if NOT running out soft popups to the infield can be demonstrated, via careful studies, to be smarter baseball than busting your ass down the line? In that case, if study after study proves that only .001% of popups result in a baserunner, provided he gives 100% effort running to first base, but .1% of such futile efforts result in a pulled hamstring, sprained ankle, blown ACL, etc., wouldn’t it be smarter baseball to follow Neon Deion’s lead, offended as you might be by that practice? I remember Bill’s observation years ago that Amos Otis chose to give up doubles rather than to pursue hard-hit balls into outfield walls, reasoning (correctly, IMO) that it’s far more prudent to give up the occasional two-bagger than it is to risk life and limb colliding with concrete walls. What if sabermetrics can demonstrate, beyond all doubt, that emphasizing effort, in general, is a bad idea? That maximum effort results in injuries, and that smart baseball, winning baseball, derives from giving only 99% or 97% of one’s effort? I have no idea what this lowered effort would look like—maybe it would be a lack of intensity or hustle: never crashing into outfield walls, never running out groundballs fielded by the first baseman or popups to infielders. What if it could be shown that winning baseball violates your own principles of what you like to see when you watch a baseball game?
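The Deion hypothetical reduces to an expected-value comparison. Here is a sketch using the two probabilities from the paragraph above; the run values attached to each outcome are mine, invented purely for illustration:

```python
# Toy expected-value comparison for the hustle question. The two
# probabilities come from the hypothetical in the text; the run values
# attached to each outcome are invented purely for illustration.

P_REACH  = 0.00001  # .001%: the popup drops and the hustling batter reaches base
P_INJURY = 0.001    # .1%: the all-out sprint produces a pulled hamstring, etc.

REACH_VALUE = 0.5   # hypothetical: an extra baserunner is worth ~half a run
INJURY_COST = 10.0  # hypothetical: runs forfeited while a regular sits out injured

ev_hustle = P_REACH * REACH_VALUE - P_INJURY * INJURY_COST
ev_jog    = 0.0     # jogging risks nothing and gains nothing

print(f"expected runs from busting it down the line: {ev_hustle:+.5f}")
# Under these assumed numbers, jogging comes out ahead, which is exactly
# the uncomfortable conclusion the paragraph is worried about.
```

Change the assumed run values however you like; the structure of the argument is that once the injury term dominates, "giving 100%" becomes, sabermetrically speaking, the wrong play.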
What if it could be shown, beyond all doubt, that teams losing by more than five runs recover from such deficits to a remarkable degree by instigating brawls? (You might enjoy brawls—bear with me here.) I find most brawls to be pointless, irritating distractions, and I wish they’d result in month-long suspensions, just to discourage them, but even if you don’t mind the occasional player flipping his lid and attacking an opponent once in a while, what if there were a positive correlation between brawling and winning? Would you enjoy seeing two or three free-for-alls break out every game, purely as a strategic device that boosts the losing team’s chances of winning by a few percentage points?
What I’m saying, with all these absurd examples, is that not all knowledge that leads to winning is good knowledge. It is entirely possible to have too much knowledge, at least where aesthetics and enjoyment are concerned. Thanks to sabermetrics, we now know for certain a large number of things that in previous decades we could, at best, only believe to be true. Some of this new knowledge has led to longer games (or, as John Thorn would have it, slower-paced games), and it is at least possible that this trend could lead to even longer, even more slowly paced games, all in the pursuit of winning. That end, which all teams seek, could easily lead to the ruination of the game itself.