Earlier this month Net Applications released browser share numbers showing a slight decline for Internet Explorer, but still nearly three times the market share of second-place Firefox. New numbers from StatCounter, however, show Internet Explorer below 50 per cent and Firefox with more than 30 per cent. Monthly browser market share results can be fickle, varying widely depending on the methodology behind the statistics.
How can one source claim that Internet Explorer is essentially at 60 per cent market share, while another source claims a week later that it has fallen below 50 per cent? How can one statistic show that Firefox has just under 23 per cent, while another places Firefox market share at 31.5 per cent? Comparing the gap between the number one and number two browsers across the two sources shows dramatically different results -- one set of numbers cuts the gap in half.
The answer is that statistics depend on how the underlying data is gathered, and that results can be molded to produce a desired effect. Sources such as Net Applications and StatCounter have to apply certain rules and filters to the data they collect in order to get an accurate picture of browser usage. The confusing part is that the two -- and any other browser statisticians out there -- don't necessarily agree on what those filters should be, or what an accurate measure should include.
Net Applications explains its methodology in an FAQ on the site. "We collect data from the browsers of site visitors to our exclusive on-demand network of HitsLink Analytics and SharePost clients. The network includes over 40,000 websites, and spans the globe. We 'count' unique visitors to our network sites, and only count one unique visit to each network site per day. This is part of our quality control process to prevent fraud, and ensure the most accurate portrayal of Internet usage market share."
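To see how a filter like counting each visitor only once per site per day can shift the reported shares, consider a minimal Python sketch. The log data, visitor names, and the `share` helper below are all invented for illustration; this is not Net Applications' actual pipeline, only a toy demonstration of why two counting rules applied to the same traffic can report different market shares.

```python
from collections import Counter

# Hypothetical access log: (visitor_id, browser, day) tuples.
# "alice" hits the site three times in one day; "bob" visits on two days.
log = [
    ("alice", "IE", 1), ("alice", "IE", 1), ("alice", "IE", 1),
    ("bob", "Firefox", 1), ("bob", "Firefox", 2),
    ("carol", "IE", 1),
    ("dave", "Firefox", 1), ("dave", "Firefox", 1),
]

def share(counts):
    """Convert raw counts into percentage shares."""
    total = sum(counts.values())
    return {browser: round(100 * n / total, 1) for browser, n in counts.items()}

# Method 1: count every raw page hit.
raw = Counter(browser for _, browser, _ in log)

# Method 2: count each visitor at most once per day,
# in the spirit of the deduplication rule described above.
unique = Counter(browser for _, browser, _ in {(v, b, d) for v, b, d in log})

print(share(raw))     # raw hits weight heavy users more
print(share(unique))  # per-day dedup weights visitors more evenly
```

From the same log, raw hit counting reports the two browsers tied, while per-day deduplication gives Firefox a clear lead -- the same traffic, two different "market shares".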
I did not find any similar explanation of browser market methodology on the StatCounter site, so I can't provide any details behind where its results come from, or analyze how they are different from Net Applications' methods. Suffice it to say that the methodology is obviously different or there wouldn't be such a significant variance in the results reported.
Methodology is subjective, and it's hard to argue that one method is "right" or "better" than another. One thing that can make browser share stats and trends seem more consistent is to avoid comparing them to each other. While the methodology for collecting and analyzing browser usage data differs between Net Applications and StatCounter, each applies the same methodology every month. Pick your poison for which methodology you feel is most valid, but then stick with it in order to follow a valid usage trend month to month.