
The Big Item on Our Agenda

April 1, 2020

            Perhaps now would be a good time to mention that this entire effort has been an extended April Fool’s Joke? 

            The Big Item on our Agenda is DER, Defensive Efficiency Record.  The next three categories to be dealt with are Balk Avoidance, Defensive Efficiency, and Error Rate/Fielding Percentage.  

            Balks are trivial and annoying and we have already talked about them two or three times, so let’s just skip them in the current discussion.   They’re in there, but they don’t really make much impact. 

            For many years, Defensive Efficiency was the biggest item in Run Prevention.   "Defense" was essentially how runs were prevented, and balls in play were essentially how runs were scored.  In the 1920s teams struck out 2 to 3 times a game.   There were many fewer home runs, somewhat fewer walks, and  many more balls in play.  Balls in Play resulted in outs; Balls in Play resulted in runs.  The essence of the game was Balls in Play. 

            Our analytical pathway here is parallel to the other categories.  I figured the DER for every team, and reported on that earlier; the highest ever was the 1906 Cubs, with Tinker and Evers and Chance.  Then I figured the norm for each decade, and the Standard Deviation for each decade.  Then I figured what 3 Standard Deviations below the norm would be, and 4 Standard Deviations, and 5 Standard Deviations.  I don’t actually know what the right zero-value point is, but I’m guessing 4 Standard Deviations as a starting point for the analysis, so that is the figure I am using in these test runs.

            Then I figured how many Balls in Play would have become hits against a given team at that DER, and subtracted the number of hits they gave up (Excluding Home Runs) from the number that they would have been expected to give up with a zero-value defense.   The difference is the number of hits the team prevented, compared to a totally incompetent defense. 

            But then we have to place a run value on each hit, and what is that value?   The problem is that a ball in play that is not caught and becomes a hit might be a single, a double or a triple, and I don’t have the data about how many doubles and triples are allowed by each team in  my data set.   That data DOES exist now; it did not exist 20 years ago, but it does now.  But it would probably take me two or three weeks to type all of that stuff into my data, for very little gain, so I don’t think I’ll do that; we have to leave something for the next generation of researchers.  Maybe it wouldn’t take me two weeks; maybe it would only take me three days, I don’t know, but in any case I don’t want to do it.

            Let’s assume that 70% of the balls in play which become hits are singles, 24% are doubles, and 6% are triples.  Let’s assume that the value of a single is .70 runs, the value of a double is 1.00 runs, and the value of a triple is 1.27 runs.  With those assumptions, the run value of a ball in play which becomes a hit would be .8062 runs; I know that I haven’t lost all of my marbles yet because I was still able to calculate that in my head.  Very easily; I wonder if, after I lose my other mental faculties, I’ll still be able to do that?  I visualize myself on my deathbed; I no longer recognize my children and cannot remember the words for "salt" and "pepper", but somebody asks me what 19 divided by 26 is, and I’m like, "Oh, everybody knows that, it’s .731." 
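            Spelling out that arithmetic: (.70 × .70) + (.24 × 1.00) + (.06 × 1.27) = .49 + .24 + .0762 = .8062 runs per ball in play that becomes a hit.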

            Anyway, let’s assume that the value of a ball in play on which no play is made would be .81 runs.  The 2019 Oakland A’s saved 199 runs on balls in play, contrasted with a completely incompetent defense, which is figured as follows:

 

1.      The A’s had a team DER of .722 (.7223844). 

2.     The Decade norm is .705 (.705433).

3.     The Standard Deviation is .010696. 

4.     Four Standard Deviations below the norm would be .662648; I can’t do that in my head, or at least didn’t.

5.     The A’s had 4,110 balls in play against them (6,153 batters faced, minus strikeouts (1,299), walks (477), Hit Batsmen (66), and Home Runs (201)).

6.     If they had had a DER of .663 (.662648) they would have allowed 1,387 hits on balls in play. 

7.     They actually allowed only 1,141 hits on balls in play,

8.     A difference of 246 hits (245.517).

9.     Multiply that by .81, and you have 199 (198.86). 
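For readers who want to reproduce those nine steps, here is a minimal sketch in Python.  The function and parameter names are just labels of convenience, not part of any existing code; the inputs are the figures listed above.

```python
def runs_saved_on_bip(batters_faced, so, bb, hbp, hr,
                      team_der, decade_norm, decade_sd,
                      sd_floor=4.0, run_value_per_hit=0.81):
    """Runs saved on balls in play versus a 'zero-value' defense,
    defined here as a DER four standard deviations below the decade norm."""
    balls_in_play = batters_faced - so - bb - hbp - hr       # step 5
    zero_value_der = decade_norm - sd_floor * decade_sd      # step 4
    expected_hits = (1 - zero_value_der) * balls_in_play     # step 6
    actual_hits = (1 - team_der) * balls_in_play             # step 7
    hits_prevented = expected_hits - actual_hits             # step 8
    return hits_prevented * run_value_per_hit                # step 9

# 2019 Oakland A's, using the figures above
print(round(runs_saved_on_bip(6153, 1299, 477, 66, 201,
                              team_der=0.7223844,
                              decade_norm=0.705433,
                              decade_sd=0.010696), 2))   # ~198.86
```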

 

The 199 runs is a very good total; out of the 2,550 teams in our study, that is the 785th best.  The four BEST totals are actually all from the 1915 Federal League; it’s an atypical league, apparently.   I’ll probably have to fix something at some time to get a different reading out of that league, but you know. . . I’m just outlining the system now, not doing finishing touches, and anyway, how much does anybody care about the Federal League?   The highest total NOT from the Federal League is the 1908 Pittsburgh Pirates, whose defense saved them 408 runs.  That’s a good choice.  The DERs were very high in that era.  The 1908 Pirates were a good team (98-56) and posted a 2.12 team ERA, second-best in the league, despite a well below average strikeout total and a slightly worse than average number of walks.  I’m very comfortable with them as the team that saved the most runs by running down balls in play.  They prevented a lot of runs, and they weren’t preventing runs by strikeouts or walks, so DER is a good explanation for their success. 

On the other end of the scale is the 1930 Philadelphia Phillies, who are credited with only 4 runs saved by DER.  Their DER basically IS the zero competence line. In this there is a story, but let’s move along for now. 

Fielding Percentage.  The team which saved the most runs by making the plays they are supposed to make, by my calculations, was the 1905 Chicago White Sox.   They were managed by, appropriately enough, Fielder Jones.   The White Sox were a good team; they finished 92-60 and led the league in ERA, at 1.99.  They had a Hall of Fame shortstop, George Davis, and nobody else in the lineup was very notable.  They won the World Series the next year, 1906, with a team that is remembered as the Hitless Wonders. 

Anyway, they had a team Fielding Percentage of .968, which would be very low by today’s standards, but was very high by the standards of the time.  The league average for Fielding Percentage was .957; the period norm was .952, and nobody else in the American League was higher than .962—six points behind the White Sox.  Had the White Sox been 4 standard deviations below the norm, their fielding percentage would have been .919, and they would have committed 542 errors, as opposed to the mere 217 that they actually committed.  So they get credit for not committing 325 errors.

But how many runs do you save, by not committing 325 errors?  Tough one.   Not so many people seem to have written about the Run Cost of an error, in part because errors are not parts of either the hitter’s record or the pitcher’s record, so who the hell cares about them, and in part because it is difficult to know what the cost of an error actually is.  An error can be any of a number of things.  It can be a ball that should have been fielded cleanly, but wasn’t, putting a runner on first base, or second base, or third base.   Many errors, however, "merely" advance runners; a batter rolls the ball slowly to shortstop, but the shortstop, who has no real chance to make a play, makes an ill-advised throw to first, and throws the ball away.   It is a hit AND an error.  That is common; an error on a stolen base attempt, sending the thief to third base, is not uncommon.   Some errors don’t really do anything; you pop up the ball in foul territory, the third baseman drops it, but the batter strikes out on the next pitch.

Errors are more diverse than other plays, more unlike one another than are singles, doubles, triples or walks.  That makes them harder to evaluate.   But if we assume that a single is worth .70 runs, and an error can be either more costly than a single or less costly, but is less costly than a single more often than it is more costly. . .well, does .60 runs for an error seem reasonable to you?  It seems reasonable to me.  If you know of a better way to pick a number, let me know. 

So we concluded that the 1905 White Sox get credit for not making 325 errors that they could have made with an infield of Dave Kingman, Jerry Browne, Jose Oquendo and Dean Palmer, and that each error not made has a run prevention value of .60 runs.  That’s 195 runs.   On the other end of the scale is the 1981 New York Mets, who beat the zero-value standard for fielding percentage by less than 1 run.  
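The same arithmetic, applied to Fielding Percentage, looks like the sketch below.  One caution: I haven’t listed the White Sox’s total defensive chances or the period Standard Deviation here, so the chance total in the example call is a placeholder chosen only to show the shape of the calculation, and the SD is the rough value implied by the .919 zero-value line.

```python
def runs_saved_by_error_avoidance(total_chances, errors,
                                  period_norm, period_sd,
                                  sd_floor=4.0, run_value_per_error=0.60):
    """Errors avoided versus a 'zero-value' fielding percentage (four standard
    deviations below the period norm), converted to runs at .60 per error."""
    zero_value_fpct = period_norm - sd_floor * period_sd
    expected_errors = (1 - zero_value_fpct) * total_chances
    errors_prevented = expected_errors - errors
    return errors_prevented * run_value_per_error

# 1905 White Sox illustration: 217 errors, period norm .952, implied SD ~.00825.
# The 6,700 chances figure is a placeholder, not the team's actual total.
print(round(runs_saved_by_error_avoidance(6700, 217, 0.952, 0.00825)))   # ~195
```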

 

Through the study, the sum total of Runs Prevented by DER (Range) is estimated at 458,845 runs.  The total of Runs Prevented by Fielding Percentage is estimated at 136,142 runs.   Along with a few runs prevented by not committing Balks (5,705), this brings the total of Estimated Runs Prevented to 1,433,040.   Despite my earlier misgivings, this is 80% of the expected total of Runs Prevented. 

In a certain perhaps twisted sense, I think this validates my concept.   My idea was that if I could (1) establish the level of zero competence in each performance area leading to run prevention, and (2) estimate how many runs were prevented by each, that the total of those should more or less balance with the total of runs prevented, assumed equal to runs scored.   I established the level of zero competence by looking for the level at which there are no teams.   There are basically no teams 4 standard deviations below the average, so that’s where I have put the limit.   This all seems to have worked in a certain sense.  I expected them to balance, more or less, and they do. 

 

But man, have I got some problems with this thing.  At this point, the origin of the disparities that I commented on in the last article is comically obvious.  Many of the teams which I have represented as saving very few runs are calculating, category by category, as saving many, many more runs than I have assumed, while many of the teams which I have represented as saving quite a lot of runs are calculating as having saved many, many fewer than I have assumed.  

Does that make sense, or do you need specific cases to explain?   For example, the 1909 Washington Senators were projected to save only 368 runs.  So far, accounting for their runs saved category by category, they have saved 695.  They’re already 88% over budget, and the work isn’t finished.   The 1908 St. Louis Cardinals are 81% over budget (710/397), the 1908 Yankees 77% (636/359), the 1906 Dodgers 70% (641/378), the 1908 White Sox 66% (882/531), the 1915 Giants 62% (675/416), etc.    Altogether, 403 teams—all of them in low-run environments—have already exceeded their allocation of Runs Prevented. 

On the other end of the seesaw, the 1929 Philadelphia Phillies, budgeted for Run Prevention at 902, have had only 286 runs identified so far (32%).  The 1930 Phillies are also at 32%, the 1925 Phillies are at 40%, the 1999 and 2000 Colorado Rockies are at 40%, the 1950 St. Louis Browns are at 42%, etc.   43 teams are under 50%, and 499 teams are under two-thirds.   We’re not going to get there.   All of them are teams that played in high-run environments, and mostly, they are BAD teams that played in high-run environments. 

So I understand what the problem is now, on one level.   I have to use the term "environmental effects" here; no, I haven’t gone green on you.   Environmental effects means the run environment.   If the run environment is very high (a high-scoring park in a high-scoring league), then the value of each item of run prevention is inherently inflated.   This is not a statistical quirk; each event genuinely is worth more runs in that context.   A strikeout is worth more in a high-run environment than it is in a run-scarce environment, because the hitter is likely to do more damage when you don’t strike him out in a high-run environment.   A walk is more damaging in a high-run environment than in a low-run environment, because in the high-run environment it is more likely that the player who draws the walk will come around to score. 

It isn’t more damaging in the WIN column, but it is more damaging in the RUN column.   We’re dealing now with runs. 

My system has no way of adjusting the value of each event for the run environment.  So we’re evaluating run-prevention elements as if they existed in a neutral universe, but comparing the results to the results for teams that played in a specific run environment—sometimes high, and sometimes low.   So what do I do about that?

Well, I COULD invent a way to adjust the value of each Run-Prevention event for the specific offensive context.  I could do that, but I won’t be happy about it. 
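Just to be concrete about what that might involve: the crudest version would scale each event’s neutral run value in proportion to the local run environment, something like the sketch below.  Take the scaling rule and the sample run values as placeholders for discussion, not as the method I would actually adopt.

```python
def environment_adjusted_value(neutral_run_value, env_runs_per_game,
                               reference_runs_per_game=4.5):
    """Scale an event's run value by the run environment.
    Straight proportional scaling is an assumption made for illustration."""
    return neutral_run_value * (env_runs_per_game / reference_runs_per_game)

# Hypothetical strikeout valued at 0.28 runs in a neutral context:
print(environment_adjusted_value(0.28, env_runs_per_game=5.7))   # larger in a 1930-style environment
print(environment_adjusted_value(0.28, env_runs_per_game=3.4))   # smaller in a 1968-style environment
```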

 

As I have noted previously, my Runs Prevented system is counter-intuitive, anyway.   It proposes answers that seem, on the surface of it, totally wrong.   It argues that the 1930 Phillies—the most notoriously inept pitching-and-defense combination of all time—prevented 845 runs, while the 1967 Chicago White Sox, who had a staff ERA of 2.45, prevented only 613 runs.  On the surface of it, this seems nuts.  

The system works that way, as I explained before, because the differences between run environments are actually much larger than the differences between good and bad teams.   There is no doubt that that is true; you can verify that in 100 different ways.   The differences between run environments are OBVIOUSLY larger than the differences in runs allowed by good teams vs. bad teams. 

The problem is, there is an intuitive logic which says that the 1960s teams were preventing more runs than the big-hitting teams of the steroid era, and there is a mathematical logic, founded on the types of mathematical logic that we use all the time in the rest of Sabermetrics, which says the opposite.  

If you ask me, do I really believe that the 1930 Philadelphia Phillies "prevented" more runs than the 1967 White Sox, well, no, I don’t really believe it.  It strains credulity.   I would be much happier with a system that accommodated our intuitive logic.  But the problem is that, to work with the NUMBERS, I absolutely have to have a mathematical logic that holds together.  It has to be coherent, it has to be comprehensive, and I have to be able to explain it and defend it.   At the moment, I just do not have any such structure.  And I don’t know how I would create one. 

There is a second problem with my system at this time, which is that, as I have it structured now, pitchers are only going to be accounting for about 50% of Runs Prevented, with Fielders accounting for the other 50%.   That doesn’t seem right, either.   But one problem at a time.

I’ll keep plugging away.  In the next installment I’ll estimate the Runs Prevented by Double Plays, Passed Ball Avoidance and Stolen Base Defense, which will finish this stage of the process.  Then I’ll study what I have, and I’ll listen to what you have to say, and I’ll try to see if I can figure out a way forward.   Thanks for reading.

April Fools!

 
 

COMMENTS (25 Comments, most recent shown first)

Guy123
I just realized that the xBA metric includes strikeouts as well as expected outcomes on BIP. That obviously increases the SD. If we subtract Ks for each team, the SD for expected hits on BIP is .0084, or about 27 runs per team. So the pitchers contribute about 2x as much as the fielders, not 3x. Apologies for the bad data.

So updating the key numbers for 2019:
Team DER: 41 runs
Outs Above Average (fielders): 14 runs
xBA sans Ks (pitchers): 27 runs

7:06 AM Apr 5th
 
tangotiger
I can't really do an xBABIP, as that would require I remove HR first, as if they were 100% hit probability. So, balls that just cleared the fence, that might have been on the warning track in other parks, would count as 100% hit prob against the pitcher.

That's why I said that handling that is going to be tricky. I'll do some research, post it on my blog, then we can take it from there.​
6:12 PM Apr 4th
 
Guy123
Tango: Yes, I should have mentioned that HR are included. I didn't think that would change the story much. But if you want to create xBABIP for us, that would be excellent!
11:54 AM Apr 4th
 
tangotiger
Guy: fascinating! Not sure why I haven't yet considered this, but obviously I should.

Note that you are including HR in the expected batting average (which is a weird thing to handle in any case). Thanks for making me think about this now.
11:09 AM Apr 4th
 
Guy123
To illustrate how surprisingly small the impact of fielders is on team DER, consider 2019. The SD for DER in 2019 was 0.01315, or about 41 runs per team. In comparison, the SD for team Outs Above Average (Statcast) was 18 outs, or about 14 runs. Whether or not one is a big fan of OAA methodology, I think that's a reasonable estimate of how much teams' fielding varied last season in converting BIP into outs, given the difficulty of the chances they faced.

Statcast also has a metric, "expected batting average (xBA)," that estimates the likely batting average allowed for each pitching staff based only on the characteristics of the ball off the bat (exit velocity and ball angle). The SD for this metric last year was 0.0124, or about 40 runs. That would imply that in 2019 pitchers had about three times as large an impact on DER as the fielders.
Maybe the SD for fielders was unusually low last year, but it seems clear that pitchers have more than half the responsibility for DER (and way more than 5%). This is not surprising when you consider the distribution of balls in play: the large majority are either an easy out or an obvious hit. Fielders have little impact on these plays (and when they do, it's mostly in the form of errors and captured by fielding %). This is one reason that Voros's DIPS discovery was so surprising: the outcome for most balls in play is in fact clear right off the bat, regardless of what fielders do.

Finally, as a check on whether these numbers make sense, they would predict a total team DER SD of 42 runs (sqrt(40^2 + 14^2) = 42), which is very close to the actual SD of 44. (NOTE: all of these metrics include errors, which Bill's version of DER excludes. But they should be valid for understanding the *relative* impact of fielders and pitchers.)
8:26 AM Apr 4th
 
MarisFan61
Stray but significant note:
Hey everybody -- don't miss that about Pbspelly !!

(The Billy Sullivans were his grandfather and great-grandfather.)

PB, I'd love for you to tell more about them some time, whether in Reader Posts or wherever might be appropriate.
Maybe BJOL would even be interested to have you write a guest article...?

9:34 PM Apr 3rd
 
tangotiger
Bill:

With regards to the split, it really does become a philosophical issue. Almost half of what you see with regards to DER is Random Variation. You can prove that easily enough. One standard deviation in DER is .011 or 11 points. Random Variation is 7 points. That leaves root of 11^2 - 7^2 for pitching, fielding, park, or 8.5 points.

Say that we agree that the split is 6 for pitching, 5 for fielding, 3 for park, just to throw numbers out there for discussion.

If you decide that you handle fielding first, as baseball reference does, and then decide "whatever is left goes to pitching", the pitching will absorb all that random variation.

Fangraphs instead handles fielding, then ignores DER altogether, treating the pitching portion of DER as 0. And so, they have an "unexplained" variation that is not assigned to any player. The Random Variation (plus the true pitching contribution) disappears.

Given that you, Bill, seem to want everything to add up, and you want it to add up to the player level, you have to distribute the Random Variation within DER proportionally to whatever their true contribution to DER is.

Therefore, you would come up with your best estimate for each pitcher's direct contribution to DER, and each fielder's direct contribution to DER. And find that proportion. Then whatever is left over, the "indirect" (read Random Variation) contribution gets assigned proportionally.

***

Those interested in more reading can check out this discussion from about 15-20 years ago (pdf). It's very long, but informative. www.tangotiger.net/solvingdips.pdf
8:47 AM Apr 3rd
 
pbspelly
Not that this is all that relevant, but my great grandfather, Billy Sullivan Sr. was one of those not-very-notable players on the 1905 White Sox (and 1906 Hitless Wonders). Tons of news articles and quotes from players (like Ty Cobb) from that era say he was a fantastic defensive catcher and field general, but this never seems to show up in any discussions and analyses of good fielding catchers on this site, making me figure that either his defense was overrated at the time or there is something not showing up in the stats. Terrible hitter by any measure, though, one of the worst of all time.
8:19 AM Apr 3rd
 
Guy123
But the hardest thing is that some percentage of the credit for DER has to go to pitchers, probably just 5% of something, but I really don't know how to approach that issue or how to find the right percentage.

Tango has thought hard about this for years, so maybe he will weigh in. It's in part an analytical question, and in part a philosophical one. We know that two things are simultaneously true:
1) in terms of being a repeatable skill, pitchers vary only somewhat in terms of their ability to prevent hits on BIP, but...
2) in any given season, pitchers vary tremendously in terms of how likely their BIP are to become hits.
That is, if every MLB pitcher threw in front of equally talented fielders, we would still see almost as much variation in team DER as we see today. Pedro posted a .325 BABIP/DER in 1999, and then .237 in 2000 -- little of that change can be explained by Boston fielders.

So, if you want to hold pitchers responsible for the quality (hit likelihood) of the BIP they allow, then they should get far more than 5% of the credit/penalty for DER. The easy solution is to divide credit 50-50 between pitchers and fielders, though in fact pitchers likely have more responsibility than the fielders. In reality, a lot of DER reflects luck, good or bad, beyond the control of pitchers *or* fielders. (This would be better understood if Voros had called his metric "LIPS," luck-independent pitching statistics.) Another option would be to assign most of the responsibility to the team as a whole.

Statcast data will probably provide a more accurate assessment at some point. We will be able to compare the variation in pitchers' expected hits allowed (based on how hard they allowed balls to be hit, launch angle, etc.), to the variation in fielding performance (outs above or below average). My guess is we will see that most of the variation occurs right off the bat, before we even consider the fielders' reactions.
6:52 PM Apr 2nd
 
voxpoptart
"Well, I COULD invent a way to adjust the value of each Run-Prevention event for the specific offensive context. I could do that, but I won’t be happy about it."

Understood. Gosh yes. But I feel like, if you did it, it might save the framework of the entire project.

Like you, I think "runs prevented equals runs scored" is a good basic premise -- since, again, they have equal effect on winning. And as has been pointed out, bad offenses from 1930 or 2000 scored/ "created" more runs than good offenses from 1968, etc., so the goals you've set for each team make sense in exactly that way.

Right now, you have teams in high-scoring environments credited with saving too few runs, and teams in low-scoring environments credited with saving too many ... and that seems to be THE thing making your results screwy.

Admittedly, there's the deeply depressing possibility that you could find a systematic way to adjust run values, and apply them all over the place, and your results still might be too screwy to ever use anyway. That would be really intensive work for a dead end. But right now, signs point to "That's where the solutions lie".
4:17 PM Apr 2nd
 
bhalbleib
Ok question about the 30s Phillies. They played in the Baker Bowl, with a 60 foot RF Wall. Obviously, lots of balls were hit off that wall, many of which landed higher than the approximately 10 foot mark that a good athlete could jump up and catch the ball. Thus any ball from approximately 10 foot to 60 foot that hits off the wall isn't really in "play", like all other fair balls that are not HRs. Doesn't that mess with any defensive stat we are trying to measure as it throws in plays that no defense could possibly make with all other plays that they could conceivably make? (this has always bothered me with the Green Monster too, and it is only 37 foot tall, but still I think there could be a small but noticeable effect on defensive stats)
3:38 PM Apr 2nd
 
bjames

The data Bill is talking about is available at baseball-reference.com for most of the history of MLB, it seems. Here is the American League page for 1922:
https://www.baseball-reference.com/leagues/AL/1922-batting-pitching.shtml

I appreciate the thought, but the problem is integrating THAT information with the large data sets I have already created for this project. Not saying that would be the wrong way to do it; this is just the way I work.
11:47 AM Apr 2nd
 
bjames
Bill: Have you tried running these comparisons for doubles and triples allowed? We do have them for teams going back to 1904.


Even if I did that, I'm not sure it is an appropriate adjustment. I mostly don't think it IS an appropriate adjustment. I mean. . .we don't make parallel adjustments in other categories here. We're treating a strikeout as a strikeout, not as one thing one time and one thing another time. We're treating a home run as a home run. A grand slam home run is very different from a solo home run. We COULD get back into the data and figure out. . .well, how many RBI resulted from each team's home runs, for example; I suspect that would be no harder than finding the doubles/triples splits, and actually would be a lot more significant. But that's another level of detail, another level of research. We're outlining the system here, roughing it in. If we start chasing details we'll get lost in the weeds.
11:46 AM Apr 2nd
 
bjames

I always thought Oquendo was a pretty good fielder--have I crossed some other synapses there?


He is very much like Lonny Frey, 1930s/early 1940s. Frey (and Oquendo) were absolutely, just SO erratic that you couldn't really play them there. They moved to second base after failing at shortstop; Frey became a brilliant defensive second baseman, Oquendo a pretty decent second baseman. Oquendo then moved to first base for a year or two, and he was very, very good there.
11:40 AM Apr 2nd
 
bjames
Guy123

So, the DIPS variables account for only 40% of team defense in the 1900s, then 60% in the 1960s, and 70% today. (Of course, pitchers also have *some* influence on DER/BABIP, but that's a discussion for another time.)


Well, no actually, that might be a good discussion to have right now, before I get to the point of the process where I have to make some decision. This is a helpful comment; I appreciate it. Several categories will have to be split between pitchers and fielders; for example, I'll probably charge Wild Pitches 80% to the pitcher and 20% to the catcher, and passed balls 70% to the catcher, 30% to the pitcher, or something like that. Pitchers will receive some credit for stolen base prevention, and I'll probably credit 1% of strikeouts to catchers, just as a kind of tip of the cap. But the hardest thing is that some percentage of the credit for DER has to go to pitchers, probably just 5% of something, but I really don't know how to approach that issue or how to find the right percentage. Any ideas you have might be helpful.
11:36 AM Apr 2nd
 
CharlesSaeger
@Tango: To extend that, a bad-hitting team in the 1930s could and usually did score more runs than a good-hitting team in the 1960s.

@Guy123: Good chart.

@Bill: Have you tried running these comparisons for doubles and triples allowed? We do have them for teams going back to 1904.
10:55 AM Apr 2nd
 
ksclacktc
Bill,

Have you checked out dWAR over at B-R recently? Before you roll your eyes, it has gotten quite good IMO. I was opposed to the fielding ratings for years, but the ratings they have come up with really pass the smell test as far as reputation and removal of statistical anomalies.

Good luck


PS don't look at fangraphs, they've added framing ratings to catchers and it looks like all the greatest defensive catchers of all time have played in the last 20 years.

PSS/ Kaiser Yes
9:21 AM Apr 2nd
 
KaiserD2
Bill wrote:

"The problem is that a ball in play that is not caught and becomes a hit might be a single, a double or a triple, and I don’t have the data about how many doubles and triples are allowed by each team in my data set. That data DOES exist now; it did not exist 20 years ago, but it does now. But it would probably take me two or three weeks to type all of that stuff into my data, for very little gain, so I don’t think I’ll do that; we have to leave something for the next generation of researchers. Maybe it wouldn’t take me two weeks; maybe it would only take me three days, I don’t know, but in any case I don’t want to do it."

I may be mistaken here, but it sounds as if Bill is speaking literally about "typing all that stuff into my data." I would just like to point out to him and anyone else who wants to do this kind of research that that isn't necessary.

The data Bill is talking about is available at baseball-reference.com for most of the history of MLB, it seems. Here is the American League page for 1922:
https://www.baseball-reference.com/leagues/AL/1922-batting-pitching.shtml
There is no such page for 1912, I found, but there is for relatively recent years and evidently for most of history. This table shows opposition hits, doubles, triples, and home runs against each team. But that isn't all. There's a link over the table, "share and more." Click it, and you get the option to download it as an Excel spreadsheet. (I usually use Chrome as my browser, but Chrome has problems doing this download, for some reason, so I use Firefox for this.) Once you have it as a spreadsheet it's pretty easy to incorporate the data you want into your own database. This is how I managed to write Baseball Greatness, by doing this thousands of times.

David Kaiser

8:50 AM Apr 2nd
 
Guy123
pitchers are only going to be accounting for about 50% of Runs Prevented, with Fielders accounting for the other 50%. That doesn’t seem right

Fortunately, we don't have to guess. You have already measured exactly how big a role the DIPS factors (K, BB, HR) and fielding played in run prevention each decade. Using the run values you suggest here, it looks like this (contribution of a 1 SD improvement in each factor):

1900s
Runs / Proportion
DER 67 0.41
K 35 0.21
BB 18 0.11
HR 13 0.08
Fld% 32 0.19

1960s
Runs / Proportion
DER 43 0.30
K 34 0.24
BB 21 0.15
HR 32 0.23
Fld% 12 0.08

2010s
Runs / Proportion
DER 37 0.25
K 44 0.29
BB 15 0.10
HR 45 0.30
Fld% 9 0.06

So, the DIPS variables account for only 40% of team defense in the 1900s, then 60% in the 1960s, and 70% today. (Of course, pitchers also have *some* influence on DER/BABIP, but that's a discussion for another time.)

8:36 PM Apr 1st
 
tangotiger
Bill:

I'll set aside my opinion on the concept of "Runs Prevented", as it's not germane to the helpful point I will make.

You can have a bad hitting team in 1930 "creating" more runs than a great hitting team in 1968. It doesn't mean anything in terms of which team was "better".

So, the same would apply on the flip side, that a great fielding team in 1968 would "prevent" fewer runs than a poor fielding team in 1930.

You can even extend that to say that Mets fielders prevented very few runs when Gooden was pitching in 1985. And some terrible fielding team prevented a lot of runs when some horrible pitcher was pitching.

6:40 PM Apr 1st
 
willibphx
The value of individual events is definitely dependent on the run environment. Using your 70/24/6 split of hits and Tom Tango's table, the value of a non-HR hit is .98 runs in a 3-run-per-game environment, 1.08 in a four-run environment, and 1.17 in a five-run-per-game environment. www.tangotiger.net/customlwts.html

As you have noted, all events become more valuable as the run environment increases: positive events become more positive and negative events more negative. From your comments it seems like you have three problems: 1) the sum of the parts does not tie to the whole for each team, 2) the team runs prevented numbers (White Sox vs Phillies) do not align logically, and 3) the pitching vs defense ratio appeared low (though I would be interested in what you expected it to be; I think WS ended up at 62% or so).

Bill, for the sake of my own sanity could you walk through the runs prevented calculation for the Phillies (845) and White Sox (613) teams?

Thanks as always


4:26 PM Apr 1st
 
bjames
It seems silly to count the Federal League as Major League Baseball and not count the 1890s Big League, so meh to the Feds.


Oh, I think the Federal League was far, far better quality baseball than the 1890s National League. Miles better.
3:52 PM Apr 1st
 
shthar
Is that with Oquendo pitching?


2:50 PM Apr 1st
 
CharlesSaeger
It seems silly to count the Federal League as Major League Baseball and not count the 1890s Big League, so meh to the Feds.
2:46 PM Apr 1st
 
SteveN
The joke's on you. I'm a fool almost every day.
2:26 PM Apr 1st
 
 