SuperMario wrote:
I have a big issue with the underlined statement.
Before I start - Kareem just took a shot at me above. Let's stop that. Moving on:
Here is my issue:
What we "know" is from how we watch the game. A good example is a fight. Conventional wisdom says:
1. A fight can turn momentum
2. Fight when you're down, and pump up your team
3. Fight when you're up, and you might help the other team
Now I have NO idea whether or not this is right; I'm just using it as an example here.
To evaluate this conventional wisdom, stats can play a huge role. Using Corsi as an indicator, for example: does a team's SOG differential/60 minutes increase after a fight? Does the SOG differential/60 increase more when the game is close?
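Just to make that concrete, here is a rough sketch of the kind of check I mean, in Python. The Shot records, the 10-minute windows, and the made-up data are all my own assumptions for illustration, not any standard Corsi tool:

```python
from dataclasses import dataclass

# Hypothetical event record: seconds elapsed in the game and which team
# generated the shot attempt. Real data would come from play-by-play feeds.
@dataclass
class Shot:
    t: float      # seconds elapsed in the game
    team: str     # "home" or "away"

def differential_per_60(shots, team, start, end):
    """Shot-attempt differential per 60 minutes for `team` over [start, end)."""
    window = end - start
    if window <= 0:
        return 0.0
    shots_for = sum(1 for s in shots if start <= s.t < end and s.team == team)
    shots_against = sum(1 for s in shots if start <= s.t < end and s.team != team)
    return (shots_for - shots_against) * 3600.0 / window

def fight_swing(shots, fight_time, team, window=600):
    """Difference between the 10 minutes after a fight and the 10 minutes before."""
    before = differential_per_60(shots, team, fight_time - window, fight_time)
    after = differential_per_60(shots, team, fight_time, fight_time + window)
    return after - before

# Made-up example: a fight at the 30-minute mark (1800 seconds).
shots = [Shot(t, "home" if t % 3 else "away") for t in range(100, 3500, 70)]
print(fight_swing(shots, fight_time=1800, team="home"))
```

Run something like that across every fight in a season, split by score state (close game vs. blowout), and you would have an actual answer instead of a hunch.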
What we "know" is not CLOSE to being verified. We do not know how much value a fight has. A lot of what you mentioned is subjective. Does a guy like Burrows throwing a hit help you more than a guy that spends most of his shift in the offensive zone?
This is one argument, or rather type of argument, that I take issue with. We have decades' worth of personal accounts insisting that fighting has an impact on the players in the game. Though it may be true that, on aggregate, fighting has more or less of an impact in certain situations, it does not follow that the impact of fighting is overstated. To conclude that would be to completely neglect the importance of our sensory perceptions, and a lot of skeptical arguments would follow from that. This is counter-intuitive, and it is one example of where I believe stats over-extend their boundaries.
To use a similar example against qualitative analysis, I do not believe it is possible to accurately judge a defensive player's value relative to the rest of the league. There are just far too many plays, and far too many players, to assess with the naked eye and cross-analyze. That isn't to say we can't tell a good defenceman from a bad one; there are always extreme cases that are easy to assess. Instead, I think the problem occurs with assessing that middle tier (the vast majority of players) and cross-analyzing them against one another. I also believe this is one of the reasons so many people's lists of top defencemen differ so greatly. The only way I can think of to solve this problem is by creating objective criteria for defencemen and applying them equally to every defenceman, much like defensive metrics in baseball. The human eye just isn't very good at accomplishing this task.
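For what it's worth, here is a toy example of what "the same objective criteria applied to everyone" could look like, again in Python. Every category, weight, and number below is invented purely for illustration; the only point is that every defenceman gets measured with the exact same yardstick:

```python
import statistics

# Hypothetical per-defenceman season stats; the categories are made up
# to keep the example short.
players = {
    "Defenceman A": {"shot_diff_per60": 4.1, "giveaways_per60": 1.2, "blocks_per60": 3.0},
    "Defenceman B": {"shot_diff_per60": -1.5, "giveaways_per60": 2.4, "blocks_per60": 4.2},
    "Defenceman C": {"shot_diff_per60": 0.8, "giveaways_per60": 1.9, "blocks_per60": 2.1},
}

# Invented weights: higher is better, except giveaways, which count against you.
WEIGHTS = {"shot_diff_per60": 1.0, "giveaways_per60": -0.5, "blocks_per60": 0.25}

def zscores(values):
    """Standardize a list of values against the league mean and spread."""
    mean, spread = statistics.mean(values), statistics.pstdev(values)
    return [(v - mean) / spread if spread else 0.0 for v in values]

# Standardize each category across all defencemen, then take a weighted sum.
names = list(players)
scores = {name: 0.0 for name in names}
for stat, weight in WEIGHTS.items():
    for name, z in zip(names, zscores([players[n][stat] for n in names])):
        scores[name] += weight * z

for name in sorted(scores, key=scores.get, reverse=True):
    print(f"{name}: {scores[name]:+.2f}")
```

Whether those categories and weights are the right ones is exactly the debate worth having, but at least everyone would be arguing over the same list, produced the same way, instead of over whichever shifts each of us happened to watch.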