
On the Relative Quality of Leagues

June 3, 2022
I appreciate the response about why one shouldn't adjust for league quality using Win Shares.  


I know you mentioned in the "Win Shares" book that one might be able to use the value to evaluate trades.  


Somehow adjusting for league quality would also be extremely useful in Hall of Fame selection debates.  


A NL slugging first baseman from the 1950s (oh, say, Gil Hodges) might look better compared to his AL counterparts if his values (whether WS, WAR, etc.) correctly accounted for league quality.  


As for WAR, that value DOES adjust for league quality. However, the exact way BBref does it (and whether it's the correct way) is a bit of a mystery.  


Some of us have been discussing this on the reader boards.  


In the 2nd Historical Abstract, you listed some 16 possible methods for addressing league quality. Batting by pitchers compared to their league was among them.  


If you had the time, how might you proceed? (I've read the conversation you had with Tom in 2018 on cross-era comps.)

Asked by: DefenseHawk



The first thing I should tell you is that tackling this problem is probably beyond your skill set.   I don’t know you and I don’t know what your skill set is, of course, but what I am saying is that this is a REALLY complicated problem.   If your skill set is at the level of John Dewan, Tom Tango, Ben Jedlovec, etc. then maybe you can deal with this, but if it isn’t, then I doubt that you’ll be able to produce anything.  I did have an employee once, Matthew Namee, who was capable of doing this, so you never know; some people just have skills. 

            The second thing I should tell you is that anything you get out of your research is probably going to provide very limited insight into the Hall of Fame qualifications of anyone.  The difference between the quality of the American League vs. the National League in the same years is just not large enough to be meaningful in evaluating the performance of individual players, with a few exceptions.   There have been exceptional eras when one league got to be significantly stronger than the other for a few years, but as a generalization, there’s just nothing meaningful there.   Gil Hodges hit .273 with 370 homers.  If you move that to the other league, same park, it might be 374 homers and .276 or something, but . . . that’s it.  That’s all you are going to find.  If you find numbers larger than that, it is probably because you’re doing something wrong.  

            It’s an entirely different scale.  Differences between leagues don’t operate on the same scale as differences between players.  I spent 17 years trying to explain this to Red Sox scouts and completely failed, but . . . Which is larger, the difference between two players or the difference between two teams?   Obviously, the difference in skill level between two players in the same league is MUCH larger than the difference between two teams. 

            OK, which is larger:  the difference between two teams, or the difference between two leagues?   For the exact same reasons and to essentially the same degree, the difference between two teams is MUCH larger than the difference between two leagues at the same competitive level.  You can walk it back one more level:  which is larger, the difference between players’ individual SKILLS, or the difference between their overall skill levels?   For the same reason, the differences in individual skills are MUCH larger than the differences in overall skill levels.  The more you aggregate, the smaller the relative differences become.

            I spent 18 years trying without success to explain this to Red Sox scouts.  There are major league players who never get drafted because they play in the wrong league in college. You’d see some guy who hit .410 with 22 homers and 70 stolen bases in 65 games—and don’t misunderstand me, the Red Sox never drafted anybody based on their stats—but you’d ask for reports on this guy, and somebody would say, "Yeah, but we don’t know anything about the quality of the pitching in that league."  Well, obviously, but here’s the thing.  People think about differences between leagues as being on the scale of differences between players—but they’re not.  They’re not anywhere NEAR that level. They’re 6 to 10% of that level.  The aggregate level scale has been condensed 10 to 15 times. So you see a guy who has Babe Ruth level stats in college, it’s not all that relevant whether he’s facing Roger Clemens level pitching or Mike Gardiner level pitching, because the difference between a Roger Clemens level LEAGUE and a Mike Gardiner level league is only 6 to 10% of the difference between a Roger Clemens level player and a Mike Gardiner level player.   It’s not really that big a deal.


            Getting now to the issue of why the related questions here are difficult to analyze.   There are many, many measurements which are indicators of the quality of play within a league, I would say about 40 different things; let’s say there are 35 small and weak indicators.  In any given year 15-25 of them are going to point in the right direction, and 10 to 20 are going to point in the wrong direction.  

            You CAN draw reliable conclusions based on data like that; you can.   But it’s really, really hard.  There isn’t a large or dominant measure of league quality, American League to National, until inter-league play begins in 1997.  After 1997, it’s fairly easy; up until then, it’s like herding statistical kitty cats.  And before you can start herding these statistical kitty cats, you have to create the data.   To completely create all of the data that I would like to have to study this issue would take me, I would guess, two to three years. 

And you know in advance that you’re not really headed toward any big payoff.  I mean, probably you narrow the year in which the National League moved ahead of the American League from “sometime between 1948 and 1957” to “1953 or 1954”, but that’s about it.   You’re never going to get paid for the three years it will take you to do that research—and, since very few people will ever read the research, very few people will ever believe that you actually do know what you actually do know. 

            Nonetheless, let me try to outline as best I can from memory what those 35 or 40 indicators of relative league quality might be, in case somebody wants, against my advice, to take on the research.  It is a really interesting question; not a very important question, but an extremely interesting one.


            Interleague Play is the largest and dominant element of the research, since 1997. 

            Prior to 1997, the strongest indicator that we’ve got of the quality of play is players moving from league to league.

            In the years 1901 to 1903, the American League did not attract SOME of the National league’s stars; the American League lured away MOST of the National League’s stars, certainly over 50%.  In the seven years after that, the American League teams did a far better job of finding and developing their own talent, coming up with Walter Johnson, Ty Cobb, Tris Speaker, Eddie Collins, Home Run Baker, Shoeless Joe Jackson, Ed Walsh, Smoky Joe Wood and others.   Combining them with the stars who came over from the National League 1901-1903 (Nap Lajoie, Cy Young, Ed Delahanty, Sam Crawford and many others), there were more superstars in the American League in that era than in any other league, ever.  The National League in the same era had three very good teams and five non-competitive, fumbling around wasting time teams. 

            What I am trying to get to is, the number of long-term stars in a league is an important indicator of the quality of play in the league.  Well, they’re all weak indicators; this one is just a little stronger than some others.  If you divide the players in the league into major and minor stars versus background level players (average and replacement level players) the background level talent is fairly stable.  The quality of the stars is much more variable, thus a better indicator.  Hall of Fame selections are useful info here, but of course you have to steer around the Frankie Frisch/selection bias problem.

            Back to the issue of inter-league movements.  When the Federal League operated for two years (1914-1915), it provided a conduit for a few players to move between leagues. But the 1903 peace treaty between the American and National Leagues did not create a process to make trades between the leagues, and so until the late 1950s, there was no such process.  The only players who moved between leagues were those who were released and signed in the other league, often with a visit to the minor leagues between.  These player movements, although the information is of some limited value, do not provide a reliable index of the relative strength of the leagues.

            (Occasionally things would happen.  In August, 1949, Johnny Mize was suddenly "waived" by all the National League teams, and signed by the New York Yankees.   But such events were not common.)

            Following the 1959 season, an interleague trading period was established for the period of the winter meetings; then a second inter-league period was added and the time frame was expanded, etc., until interleague trades became common.  As that happened (1960-1975) there was much more interleague movement, thus strengthening the value of the indicator.  When free agency started (1976) movement between leagues became yet more common.  In that era, movements between leagues are the best thing we have to compare the relative strength of the leagues.  

            Before then, we have:

            The World Series, and

            The All-Star Games


            But that’s just a few games a year so it doesn’t mean a lot.   Weighted over a period of years, it can be taken to be an indicator.

            There are all kinds of things which are "internal indicators" of the strength of a league.  The record in interleague play and the movement of players between leagues can be looked at as external indicators of league quality.  There is a much wider field of internal indicators.  An internal indicator is one which works without a direct comparison to any other league.  For example, the league age spectrum.

            A high quality league will have very, very few players in the league who are 18, 19, 20 years old, or who are 36 or older.  The more players in the league who are at ages well off prime, the weaker the league.  You can make this into an indicator of the quality of play in this way:

            Subtract 27 from each player’s age,

            If the player is OLDER than 27, divide the result by 2,

            Square that number,

            Subtract the result from 100, and

            Divide by 100.

            If a player is 17, this will give a result of 0% (.00); if he is 18, you get 19%; at 19, you get 36%; at 20, you get 51%; at 21, you get 64%; at 22, 75%; at 23, 84%; at 24, 91%; at 25, 96%; at 26, 99%; and at 27, 100%.

            Going down the slope post-27, at 28 you get 99.75%; at 29, 99%; at 30, 97.75%; at 31, 96%; at 32, 93.75%; at 33, 91%; at 34, 87.75%, etc.   You can figure it out from there.   At age 42, you’re back down to 43.75%. 

            Then, for every non-pitcher in the league, you multiply his plate appearances by that percentage.  Call that his "age weighted plate appearances."   In the same way, for a pitcher, you can create age weighted innings.  

            Total up the age weighted plate appearances for the league, and divide by the total plate appearances for the league.  The result is the. . . .what do we call it?  We’ll call it LASE—the league age spectrum evaluation.
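The recipe above can be sketched in a few lines of Python. The function names are mine, not an established stat, and the weights follow the steps exactly as stated (so ages below 17 would go negative, as the formula implies):

```python
def lase_weight(age: int) -> float:
    """Age-spectrum weight: subtract 27 from the age, halve the
    difference if the player is OLDER than 27, square it, subtract
    the result from 100, and divide by 100 (a 27-year-old scores 1.00)."""
    diff = age - 27
    if diff > 0:
        diff /= 2
    return (100 - diff ** 2) / 100

def lase(players) -> float:
    """players: iterable of (age, plate_appearances) pairs.
    Returns the league age spectrum evaluation:
    age-weighted PA divided by total PA for the league."""
    weighted = sum(lase_weight(age) * pa for age, pa in players)
    total = sum(pa for _, pa in players)
    return weighted / total
```

For pitchers, substitute innings for plate appearances. The weights reproduce the figures in the text: 19% at age 18, 99.75% at 28, 43.75% at 42.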


            THIS IS A UNIVERSAL, ACROSS-THE-BOARD indicator of the quality of play in a league.  It will always work, not with regard to every league but with regard to every class of leagues.   It will work in comparing a AAA league to a Single-A league, a Single-A league to a Rookie League.  Go back to 1954, and it will work in comparing an A league to a B league, a B league to a C league.  It will work in comparing the SEC to an NAIA league, or in comparing a league of low-level Universities to a league of Junior College teams.  It will work in Japan.  It will work in comparing slow-pitch softball leagues. 

            In baseball, the two most obvious backward steps in the quality of play are World War II baseball and the first expansions, which expanded the major leagues by 50% in a period of nine years (1961-1969). Both of those backward steps WERE accompanied by drops in the league’s age spectrum evaluation, but not immediate or dramatic ones.  I should add this:  that those backward steps in reality were not nearly as large as contemporary reporters suggested that they were.  World War II is often described as baseball played by teenagers and old men, but the LASE did not actually drop meaningfully until 1945.  In 1945 it DID drop to very low numbers in both leagues, but in 1942-1944 there is hardly any impact on the major league age spectrum.   At the time of the first expansions there was no immediate change in the age spectrum, but then a couple of years later, when the Ed Kranepools and Rusty Staubs started to pile up, there was a dip. 



            But this is what I am trying to get to: that there are things you can look at which are universal and really kind of obvious, but which do provide useful information about the quality of play within a league.   If you combine ENOUGH indicators of that nature, then you start to get a pretty clear picture of the quality of play within a league. 

            If a league is segregated, all black or all white, then that obviously is an indication of weakness for that league.

            The number and variation of international players is an indication of strength in the league.  If the league pulls in players from Thailand and Australia and Venezuela, they’re working at it. 

            The attendance in the league is an indicator of the strength of the league.   If one league draws three million fans per team and the other league two million, the one that draws three million fans per team is probably the stronger league.  Again, that’s universal. I don’t know the facts, but I’d bet you anything that the SEC draws more fans than the Mountain West League.   In 1954, the International League drew larger attendance than the Three-I League.  

            The record-keeping within the league is an indicator of the strength of the league.  If they don’t keep track of RBI, walks and strikeouts by hitters, it is probably not a strong league.  If they’re recording Exit Velocities, they’re probably a strong league. 

            The number of coaches employed in the league is an indicator of league quality. 

            The experience level of those coaches/managers is an indicator.   If a head coach has been employed in baseball for 25 years, he probably has some skills.  If he has been working in baseball for three years, we have less confidence of that. 

            Competitive balance within a league is an indicator of quality.   If one team plays .700 baseball and another plays .250 baseball, that’s probably not a quality league. 


            In sabermetrics, we normally treat statistics as RELATIVE records which have no absolute meaning.   This is what defined sabermetrics in the early days:  that we would say that hitting .300 in Dodger Stadium in the 1960s might be more impressive than hitting .370 in Sportsman’s Park in 1922.   People thought we were nuts, but we eventually won the argument; the whole world eventually came around to see it our way. 

            But once in a while, maybe 5% of the time, there IS some absolute meaning in a stat.   There are a few stats which are systematically higher in a higher quality league than in a lower quality league.   For example, fielding percentage.   If you go through a 1954 Baseball Guide (pick your year), you will find that fielding percentages were higher in the major leagues than in the A leagues, higher in the A leagues than in the B leagues, higher in the B leagues than in the C leagues, higher in the C leagues than in the D Leagues. 

            For some of you, your first response will be that fielding percentages were higher in the better leagues because of better field conditions.  Well, yes, of course.   But that doesn’t invalidate the generalization. If you have a high-level league and a low-level league and one has a well-maintained field and the other does not, which do you think is the stronger league?

            Wild Pitches and Passed Balls are more common in lower level leagues than in higher level leagues, particularly (as I recall) passed balls. 

            I also seem to recall that Hit Batsmen are more common in lower-level leagues, but I would want to check that out before I did any research based on that belief.

            Lop-sided games (16-1, 22-3, etc.) are more common in lower leagues. 

            Triples and Inside the Park Home Runs are more common in lower leagues. 

            Double Plays are slightly more common in better leagues.


            Triples, Inside the Park Home Runs and Double Plays are long-sequence events.  Long-sequence events are indicators. They stand out in the data.  Several things have to be "right" to get an inside the park home run.  

Generally, high standard deviations are indicative of lower quality play.  If the league batting average is .270 but with a standard deviation of 22 points, that’s probably a stronger league.  If the league batting average is .270 but with a standard deviation of 45 points, that’s a weaker league. 
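As a minimal sketch of that dispersion check (the function name is mine, and this is only one weak indicator among many):

```python
from statistics import stdev

def weaker_by_dispersion(league_a, league_b):
    """Given lists of qualifying players' batting averages for two
    leagues with similar means, flag the league whose averages have
    the larger standard deviation as the likely weaker one
    (talent is less compressed there)."""
    return "A" if stdev(league_a) > stdev(league_b) else "B"
```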

Also from memory, I believe that you can show that the percentage of runs scored by home runs is higher in high-quality leagues than in lower-quality leagues.   I’m not certain of that fact and I don’t like it, anyway, so I’ll let that pass for now. 

Runs Not Driven In (RNBI); that is, Runs Scored in the League minus RBI in the league, divided by Runs Scored. . . that turns out to be a surprisingly strong indicator of league quality.   RNBI are runs produced by Errors, Wild Pitches, Balks, Passed Balls, and a few other events, so it somewhat replicates things we are measuring in other ways, but I believe that RNBI are actually the strongest single indicator of the quality of play within a league that I am aware of.   However, for whatever reason, this indicator suggests that the American League was somewhat stronger than the National League in the years 1962-65, which I am fairly certain is not true.
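The RNBI rate is simple arithmetic; a sketch (the function name is mine):

```python
def rnbi_rate(league_runs: int, league_rbi: int) -> float:
    """Runs Not Driven In as a share of runs scored: (R - RBI) / R.
    A lower rate (fewer runs scoring on errors, wild pitches,
    passed balls, etc.) suggests a stronger league."""
    return (league_runs - league_rbi) / league_runs
```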

            Any of these statements—and probably I have got some of them wrong—but any of these statements can be confirmed by studying the frequency of events in different levels of minor league ball, different levels of college ball, or different levels of high school baseball.   They’re consistent variables. 

            If you want to compare the quality of leagues in the years Gil Hodges played, there is a vast amount of material that you can study—but there is no straight line toward the answer.  All you can do is piece together little indicators.  It’s a fascinating subject.  Good luck to you.



Later—I got interested enough in this subject to do a few studies about it.   I’ll publish these next week.  


COMMENTS (5 Comments, most recent shown first)

This is a very interesting start for a discussion and analysis. One area where it could shed some light is the Negro League (NeL) comparisons. Many of the methods Bill is likely to look at would probably not shed any light, due to the inherent limitations placed on the NeL by the abhorrent policies in place for most of US history, but the LASE analysis could be an interesting one. It would also be interesting to see how the quality of play began to change with integration finally happening and then the increase in global talent in MLB.
6:38 AM Jun 6th
Thanks, Bill, for the detailed response to my query.

Yes, the relative quality of the major leagues compared to each other is indeed a fascinating subject and I hope your article will spark a renewed interest in examining this by those in the sabermetric field.

No, I don't have the in-depth knowledge that you and the others you mentioned have. Part of that is lacking the knowledge on how to use certain databases. But another part is not knowing where to find the needed statistics or how to extricate the data from Retrosheet, BB-ref, etc.

It's one thing to be able to calculate the light-year distance between two distant stars (say, Rigel and Deneb, which are approximately 2136 light-years apart) with spherical trigonometry when you have the formula, the data you need and simple software like Excel in which to input the data. It's quite another if you don't know where to find the data you need to plug into that formula. And absolutely impossible if you don't know the formula itself.
That's what I think is the biggest challenge for those of us who are far more knowledgeable about baseball and its history than the average fan: being able to find and access the needed data, and having the knowledge (or learning) to utilize the right software with the right formulae needed to crunch the numbers.

As for me, I'm fairly good with Excel, less so at making graphs with it than I had been in the past (and, yes, I have actually used it to calculate astronomical distances thanks to source data from published star catalogs). And thanks to you publishing your own sabermetric formulae, I've been able to calculate major league equivalencies (MLEs) and used Brock2 (never quite understood why you picked Greg Brock to name it after) to not only calculate career projections, but also to come up with "missing" seasons for players like Ted Williams and Joe DiMaggio due to World War II, what Lou Gehrig might have done if he had stayed healthy for several more seasons, and even go backwards for guys like Luke Easter to see what they might have accomplished if segregation hadn't denied them a full major league career.

But plugging in data into an already existing formula from charts on BBref is one thing. Being able to select data from Retrosheet or BB-ref boxscores? That's beyond my current capabilities. Partly because I haven't found good instructions on how to do it correctly and partly for lack of time to learn how to do it.

I threw Gil Hodges into the question only because there had been so much discussion a while back about his Hall of Fame worthiness. Willie Mays is a much better candidate to consider the question: "what he might have done had he played in the American League."

I suspect there's a chance Mays might have passed Babe Ruth before Hank Aaron did. Candlestick Park (and, for his last two seasons, Shea Stadium) depressed home runs for right-handed hitters, based on single-season ballpark factors I've seen. Just playing in a more neutral park in the American League would have helped his numbers - even before taking into account league quality.

The National League had added a number of pitcher-friendly ballparks in the 1960s. Besides Candlestick, there were Dodger Stadium, Shea Stadium, and Colt Stadium (which was replaced by the Astrodome). Only the Braves' move to Atlanta added a hitter's park. As a result, Wrigley Field went from pretty much a pitcher's park to a home run hitting haven by the late 1960s.

Perhaps an overlooked factor is that Wrigley was still lightless, while other teams played more and more of their home games at night. How the leagues differed regarding night baseball may also be a factor. It certainly had an effect on production, as the American League overtook the N.L. in home runs hit from the mid '50s. The National League was playing 39.4% of its games at night in 1955; the A.L., only 32.7%. By 1964, that difference had shrunk to 51.8% versus 48.4%. (I stopped at 1964 because I'm not quite sure whether BBref counts a daytime game in the Astrodome as a day game or, as it should, a night game, meaning under the lights and not under the sun.)

(The change in scenery and lighting conditions for his home games is something that also affected Gil Hodges during his four years in Los Angeles. Anecdotal evidence seems to indicate that the Los Angeles Memorial Coliseum had poor lighting. Hodges certainly did far worse in night games in three of those years, but I don't have a breakdown between home and away night games. Some teammates also made comments about WHY it may have affected a pull hitter like Hodges more than others, such as lefty Wally Moon, who became known for his "moon shots" over the screen in left.)

Breaking the Color Line

I've previously pointed out this study on the reader boards.

Mark Armour used Win Shares to do a study, "The Effects of Integration," which was published in the Baseball Research Journal 36 (SABR, 2007).

He looked at WS by black and Latino ballplayers from 1947-1986 and found that those in the National League accounted for a consistently higher, and at times far higher, percentage than their counterparts in the American League. The difference was even more pronounced when looking at star players (one of the categories you mentioned as likely evidence of league quality superiority).

You specifically emphasized that "the number of long-term stars in a league is an important indicator of the quality of play in the league."

Armour found that "By the early 1960s, half of the stars in the (National) league were black, and the number was over 60% by 1967. The dramatic effect of the star players illustrated in Figure 3 is nearly completely fueled by the NL; the AL did not begin to field many black stars until the late 1960s."

"One can just plot the difference in the value of the black player in the two leagues."

Armour looked at Hall of Famers, too, from that period and found:

"In 1947, each league had a single Hall of Famer—Jackie Robinson, and Larry Doby. Doby remained the sole American Leaguer until his retirement in 1959—at which point there were no black Hall of Famers in the AL for six years.

"Meanwhile, the NL added a new Hall of Famer nearly every season, until 1965 when their gap on the Americans was 15-0. In 1966, Frank Robinson was traded to the Orioles, reducing the gap to 14-1, and, perhaps not surprisingly, he was immediately the best player in the AL, winning the Triple Crown.

"It should be noted that the players represented on this chart were all top-flight stars. The Veterans Committee, the so-called “back door” into the Hall of Fame, has inducted only two black players—Larry Doby and Orlando Cepeda—along with several white players from this period. Furthermore, if one removed the contributions of the 15 National League Hall of Famers from 1965, the remaining black players in the NL still accumulated more Win Shares than their AL counterparts. The NL dominance extends past the superstars."

Armour concluded:

"In order for the leagues to be of comparable strength in the 1960s, the white American Leaguers would have to have been significantly better than the white National Leaguers.

"Could this be true?

"Returning to 1965 again, who were the best players, of any color, in the American League? According to Win Shares, the best players were Tony Oliva, Zoilo Versalles (who won the league’s MVP award), and Don Buford, three fine black players. Going down the list, the best AL white players that year were Rocky Colavito, Brooks Robinson, Curt Blefary and Jim Hall. How much better could they really have been than Sandy Koufax, Don Drysdale, Pete Rose, Jim Bunning, and Ron Santo, each of whom had excellent seasons that year in the NL?"

I would point out that his study would seem to indicate N.L. superiority extended throughout the 1970s.

The complete study can be found at:

All Star Competition

As you mentioned, the only interleague play that counted until 1997 was the World Series. And there was also the All-Star Game, which the N.L. dominated, at one stretch winning 19 of 20 games. I know that BB-ref's WAR data shows the A.L. passing the N.L. in quality by 1968. But for that to be true, the team WINNING the All-Star Game EVERY year in the 1970s would have to have represented the INFERIOR league.

The chance of a coin flip coming up heads 10 times in a row is less than 1/10th of 1%. In a sense, that's the chance of one league winning the ASG every year for ten years, if the two leagues were equal in quality throughout.
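The streak arithmetic can be checked directly; under a fair-coin assumption, each league wins any given game with probability 0.5:

```python
def streak_probability(games: int, p: float = 0.5) -> float:
    """Probability that one specific, evenly matched league wins
    all `games` consecutive All-Star Games."""
    return p ** games

# Ten straight wins by an evenly matched league:
print(streak_probability(10))  # 0.0009765625, i.e. just under 0.1%
```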

What might change that is the composition of the All-Star teams themselves. If American League fans and managers were selecting star players INFERIOR to those of the N.L., that could tip the All-Star results in the N.L.'s favor even if the N.L. were the overall inferior league in the '70s.

But that would have to mean players were being selected for the American League who were undeserving.

WAR has the A.L. with the edge in quality (despite expansion) in 1977. Looking at just the starting eight position players, the A.L. has the edge 43.5 to 39.8 in WAR. When one looks at the highest WAR value at each position (top three outfielders, regardless of whether they played left, center or right), that difference shrinks, 51.8 to 50.0.

Win Shares has the two teams identical at 208. And counting the players with the highest WS totals? The N.L. wins by a margin of 233 to 232. Seems as though the voting public knew what it was doing, at least that year. The National League won the game, 7-5, with Joe Morgan, Greg Luzinski and Steve Garvey going yard on Jim Palmer (3rd in WAR, 1st in WS among A.L. pitchers that year).

A more sophisticated analysis would take some time but certainly could be done. It would not only take into account the pitchers used but also how much playing time a player got in the game.

But I doubt that the American League was somehow putting together a much weaker squad than it could have, for decades, vis-a-vis the National League. If it was, that might help to explain the N.L.'s dominance, but I doubt an analysis will show it. Still, that's why evidence needs to be examined: to either prove or disprove a theory.

World Series

World Series play is a small sample size, but one which also might help toward proving the National League's dominance. Assuming the leagues were of equal strength, a .646 team should defeat a .611 club about 51% of the time. But in 1963 it was the assumed "inferior" Dodgers who defeated the Yankees in a sweep. I examined 10 World Series results, from 1955 to 1964. The A.L. representative had an average winning percentage of .625, the N.L. an average winning percentage of .605. If the two leagues were of equal strength, the A.L. should have been expected to win 33 of the 64 World Series games played, likely winning five championships. But it was the N.L. that came out on top six times, winning 34 of the games played.
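The ~51% figure here is consistent with a simple ratio of the two winning percentages. As a sketch (function names are mine), with log5, Bill James's own head-to-head method, shown for comparison:

```python
def ratio_estimate(wpct_a: float, wpct_b: float) -> float:
    """Simple ratio of winning percentages; for .646 vs .611 this
    gives about 51.4%, consistent with the figure quoted above."""
    return wpct_a / (wpct_a + wpct_b)

def log5_estimate(wpct_a: float, wpct_b: float) -> float:
    """Bill James's log5 head-to-head estimate, for comparison;
    it gives a somewhat larger edge (about 53.7%) for the same pair."""
    num = wpct_a * (1 - wpct_b)
    return num / (num + wpct_b * (1 - wpct_a))
```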

Yes, a small sample size, but just another factor that points in the direction of N.L. superiority during that time frame.

Spring Training

It's true that the games "don't count." But they are an indicator of an organization's overall strength. Do good teams have bad spring training records? Of course. But the better question is whether superior LEAGUES have bad spring training records against the other league's teams. The data would be tedious to compile and may not be worth the effort, but thanks to websites such as and other newspaper archives, one can now easily find spring training results in the daily newspapers going back to the 1940s, as well as being able to weed out "split-squad" games from the newspaper results. What to look at is not a team's overall record, but its W-L record in interleague games only. Since compiling the results means inputting the scores anyway, the Pythagorean expectation can be analyzed at the same time.
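The "Pythagorean" analysis mentioned here refers to James's Pythagorean expectation, which estimates winning percentage from runs scored and allowed; a minimal sketch:

```python
def pythagorean_wpct(runs_scored: float, runs_allowed: float,
                     exponent: float = 2.0) -> float:
    """Expected winning percentage from runs scored and allowed:
    RS^x / (RS^x + RA^x), with the classic exponent x = 2."""
    rs = runs_scored ** exponent
    ra = runs_allowed ** exponent
    return rs / (rs + ra)
```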

AAA Baseball

I believe another indicator of the quality of major league play is the quality of each league's top minor-league affiliates in "interleague" play. Not a AAA club's overall record, but only its record against clubs whose parent teams are in the opposite major league.

Several years ago I was reading "Minor League Baseball Standings" by Benjamin Sumner and was astonished at how often, year after year, the N.L. AAA clubs finished ahead of their A.L. counterparts. I did a quick tally, creating a +/- of which clubs finished above which, and found the N.L. was dominant starting right after World War II. In 1947, that edge was 21-11 for the N.L. In 1948 it was 27-5. In 1949 it was 24-7. In 1950 it was 24-7 again.

Then I realized an even better indicator would be head-to-head records.
Looking at 1946, A.L. AAA affiliates had a slight edge, 353 to 348. That changed the following year, when Jackie Robinson broke the major league color barrier and some N.L. teams, the Brooklyn Dodgers in particular, began signing a few more black ballplayers in the minors. The Montreal Royals of the International League went 93-60, with the help of future Hall of Fame catcher Roy Campanella. They were 54-33 against A.L. affiliates, which were the minor league clubs of the A.L. powerhouses of the time: the Yankees (1947 World Champions), Red Sox (1946 pennant winners), Indians (1948 World Champions) and Tigers (two second-place finishes and a fourth from 1946 to 1949). Overall, in 1947 N.L. AAA affiliates beat their A.L. counterparts, 382-325.

Now, sure, it takes a few years for young talent to manifest itself in the majors. But if one league's top minor league affiliates are consistently beating up on their opposite-league counterparts, that shouldn't be dismissed as random chance. Nor can it be said that the minors exist only to develop players, that winning games doesn't matter. If players aren't performing, they're not going to get promoted. If they ARE performing well, their team is going to do well. And those guys are going to get called up sooner or later. It defies logic to assume that an entire league would keep well-performing players in the minors year after year when their parent clubs could use the help.

The 1976 Rochester Red Wings went 88-50. They were 53-25 against the four N.L. affiliates in the International League. Their roster included Rich Dauer, Kiko Garcia, Larry Harlow, Dave Skaggs, Andres Mora and a 20-year-old named Eddie Murray. Their pitching staff included Mike Flanagan, Dennis Martinez and Scott McGregor. It wasn't long before those guys were in Baltimore.

Same with the 1970 Spokane Indians. The Indians had on their roster future major leaguers Doyle Alexander, Bill Buckner, Steve Garvey, Charlie Hough, Tom Hutton, Von Joshua, Davey Lopes, Tom Paciorek, Bill Russell, Bob Stinson, Bobby Valentine, Geoff Zahn and several others. Tommy Lasorda's club demolished their PCL competition, going 94-52. Against A.L. AAA clubs they were less spectacular at 34-28. Might the 1970 AAA records indicate the A.L. was starting to catch up? Perhaps. A.L. AAA affiliates had the edge, 495-440. That translates into an 86-76 record over 162 games. I prefer using head-to-head winning percentages because the PCL had a very unbalanced schedule with its two divisions in those years. In 1970, it doesn't matter: the A.L.'s AAA winning percentage comes out essentially the same either way, translating into that same 86-76 record.
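The 162-game translation used in these comparisons is simple enough to sketch in Python, using the head-to-head totals quoted in the comment:

```python
def to_162(wins, losses):
    """Translate a head-to-head W-L record into a 162-game pace."""
    pct = wins / (wins + losses)
    w = round(pct * 162)
    return w, 162 - w

print(to_162(495, 440))  # 1970 A.L. AAA affiliates -> (86, 76)
print(to_162(392, 326))  # 1969 N.L. AAA affiliates -> (88, 74)
```

Expressing every season's head-to-head record on the same 162-game scale makes the year-to-year swings directly comparable despite uneven numbers of interleague games.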
What changed from 1969, when the N.L. AAA affiliates went 392-326 vs. the A.L.?

The Expos and Padres; that's what happened. The extremely poor first-year AAA performance of the N.L. expansion teams (the four 1969 expansion clubs didn't field AAA teams until 1970) dragged down the N.L. AAA affiliates' overall performance. On top of that, the Omaha Royals held their own in their first year, going 29-29 vs. their N.L. competition. If you toss out the results of those three plus Milwaukee's affiliate, the N.L. would have had the slight edge, 320-319.

But the Expos and Padres were major league clubs. And their poor performance enhanced that of their competitors and contributed to the N.L. being weaker for a year or two than it was prior to expansion. When a ballclub is in such financial difficulty that it can't afford to sign an 18th-round draft pick out of high school (a kid who declined an offer of $4,000 but would have signed for $6,000 so he could play 130 miles from where he grew up), it's no wonder the A.L. might have been gaining ground on, or perhaps passing, the N.L. in quality of play. And so Doug DeCinces went to college and got drafted the following year by the Baltimore Orioles.

Major League Equivalencies

One way to complement the count of league-switchers might be to look at MLEs (major league equivalencies) of AAA ballplayers who came up in the opposite league.

One could calculate the MLE of, say, Jack Perconte for 1981 and 1982.

1981 Albuquerque Dukes (Dodgers' AAA affiliate, PCL): .346/.447/.438, .884 OPS
1982 Cleveland Indians: .237/.303/.292, .596 OPS

But instead of proceeding with the MLE for the team the player performed for, let's NOT apply the ballpark factors for Cleveland. Instead, create a stat line for the player in a neutral American League park.

Then it's a matter of looking at N.L. call-ups and A.L. call-ups and seeing whether the players from one league over- or under-performed their expectations.
Ideally, the minor league ballpark factors would be more accurate than the substitute used in the original MLE formula (a AAA team's runs for/runs against data).
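A heavily simplified sketch of the neutral-park idea above, in Python. The 0.82 AAA quality discount and the 1.10 Albuquerque park factor are assumptions for illustration (0.82 is a commonly cited James-style AAA factor), not the actual factors one would derive:

```python
def simple_mle_avg(aaa_avg, aaa_park_factor,
                   mlb_park_factor=1.00, quality_discount=0.82):
    """Very simplified MLE for batting average: strip the minor
    league park effect, discount for AAA-to-MLB league quality,
    then apply the target park factor (1.00 = a neutral park,
    as the comment proposes instead of Cleveland's factor)."""
    park_neutral = aaa_avg / aaa_park_factor
    return park_neutral * quality_discount * mlb_park_factor

# Perconte's .346 at hitter-friendly Albuquerque, with a
# hypothetical 1.10 park factor:
projected = simple_mle_avg(0.346, 1.10)
print(round(projected, 3))  # 0.258
```

That neutral-park projection could then be set against his actual .237 with Cleveland; repeated over many N.L. and A.L. call-ups, the systematic gaps by league of origin are the signal being looked for.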

(Why, after all these decades, no one has created a simple line-by-line listing of every AAA game score for a given season, one that could easily be imported into a database, is perplexing.)

The Amateur Draft

Beginning in 1965, the amateur draft may be another factor that helped the A.L. catch up.

If the American League did rapidly close the gap to achieve parity (and even pass the N.L.) by the early '70s, the amateur draft would be a key reason.

For example:

First Round, 1965:
A.L.: Rick Monday, Joe Coleman, Billy Conigliaro, Ray Fosse, Eddie Leon, Jim Spencer
N.L.: Al Gallagher, Bernie Carbo

First Round, 1966:
A.L.: Reggie Jackson, Ken Brett, Tom Grieve, John Curtis, Carlos May
N.L.: Wayne Twitchell, Leron Lee, Gary Nolan, Richie Hebner

I'd better stop here before I hit the max characters for a post.
6:22 AM Jun 6th
There were trades between the AL and NL before 1958, just not a lot of them. Partly because the players had to clear waivers, partly a cultural thing.
10:17 PM Jun 4th
Brock Hanke
This article is just fascinating to me. THANK YOU! I have the background (Math degree from Vanderbilt, with about half the classes in the Engineering Math department, 30 years work in computers) to understand why this would be hard to do, but not enough to actually try to do it - to attempt to fix the scale of every indicator, and to try to adjust for everything you'd need to. The idea that it would take YOU three years to come up with just the data to start the analysis pretty much makes it impossible for me to do.

So, I have to kibitz. 1) I've known forever that there just weren't any interleague trades in the first half of the 20th century, but I'd never known WHY. Thanks for telling me.

2) NONE of the Red Sox scouts could figure out that you were giving them the Voice of Experience? How mentally inbred are those guys, anyway?

3) The information that the differences between leagues are very small compared to those between players is VERY useful.

4) Mulling it over, the only tool I encountered in my math classes that seems like it might be up to the job of combining multiple indicators, which may be on completely different scales from each other, is "multiple regression analysis." I don't like regression analysis (I do the problems as matrices) but multiple regression does have that width. Do you know if the few people who can actually do this are using that tool, or a different one?

1:28 AM Jun 4th
When I attend minor league games, I always notice the difference in fielding quality. You see a ball hit, and your pattern recognizer says, "That's an out!" Then the fielder goes to the backhand and boots the ball, where an MLB fielder would make the play look easy. I don't believe the quality of the field has all that much to do with it.
7:42 PM Jun 3rd
©2024 Be Jolly, Inc. All Rights Reserved.