Neven and Wipneus,
I have been following the ice concentration numbers for a while now, and I noticed that the higher resolution observations tend to show a record, while the lower resolution numbers do not.
For example, from the "Home brewed AMSR2" thread, Wipneus reported again that 2016 is in the lead for ice concentration (with only 2015 being almost equal in concentration, but 700k behind in extent):
Extent: -92.6 (-700k vs 2015, -215k vs 2014, -401k vs 2013, +36k vs 2012)
Area: -220.4 (-667k vs 2015, -505k vs 2014, -641k vs 2013, -129k vs 2012)
The ice concentration map from Wipneus tends to show the same thing:
https://sites.google.com/site/arctischepinguin/home/amsr2/grf/amsr2-compact-compare.png

High resolution (AMSR2 at 3.125 km) is in the lead for 2016, while the lower resolutions (Bootstrap AMSR2 at 10 km, NASA Team SSMIS at 25 km) are lagging behind and not showing 2016 in the lead (yet).
I've been thinking about that difference. Since ice concentration in the main pack should not depend on the resolution at which it is measured, the fact that high resolution shows lower ice concentration than low resolution suggests that maybe the ice edge is smaller (less fragmented) this year than in other years. But considering the fragmented ice edge in areas like the Beaufort, that explanation is not satisfying.
[edit] Come to think of it, if the ice edge is highly fragmented at scales below 25 km (or even 10 km), then the highest resolution (3.125 km) will resolve many more low-concentration pixels than the coarse grids do, which lowers the high-resolution ice concentration and would explain the observations.
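That reasoning can be illustrated with a toy calculation. The sketch below is entirely synthetic (not any of the real AMSR2 or SSMIS processing): it assumes a binary sub-grid ice field at 3.125 km, block-averages it to a 25 km grid, and applies the usual 15% cutoff. Along a fragmented edge, the coarse grid counts whole mixed pixels toward extent, so extent (and hence compactness = area/extent) comes out resolution-dependent even though the underlying ice is the same.

```python
import numpy as np

# Idealized sketch: a binary "true" ice field sampled at 3.125 km,
# block-averaged to 25 km (8x8 cells), with the usual 15% cutoff.
# All numbers are made up; this only illustrates how a fragmented
# ice edge can make extent and concentration depend on resolution.

rng = np.random.default_rng(0)
n = 64                       # 64 x 64 cells at 3.125 km = a 200 km square
cell_km2 = 3.125 ** 2

field = np.zeros((n, n))
field[:, : n // 2] = 1.0                      # solid pack on the left half
field[:, n // 2 :] = rng.random((n, n // 2)) < 0.3   # fragmented edge, ~30% floes

CUTOFF = 0.15

# Fine grid (3.125 km): each cell is 0 or 1
fine_extent = (field > CUTOFF).sum() * cell_km2
fine_area = field[field > CUTOFF].sum() * cell_km2

# Coarse grid (25 km): mean concentration over 8x8 blocks
coarse = field.reshape(n // 8, 8, n // 8, 8).mean(axis=(1, 3))
block_km2 = cell_km2 * 64
coarse_extent = (coarse > CUTOFF).sum() * block_km2
coarse_area = coarse[coarse > CUTOFF].sum() * block_km2

print(f"fine:   extent {fine_extent:8.0f} km2, area {fine_area:8.0f} km2")
print(f"coarse: extent {coarse_extent:8.0f} km2, area {coarse_area:8.0f} km2")
# The mixed edge blocks exceed the 15% cutoff, so the coarse grid counts
# the open water between floes toward extent; area stays about the same,
# so mean concentration (area/extent) drops on the coarse grid.
```

Of course the real retrievals are far more involved than block averaging, so this is only a sanity check on the geometry of the argument, not on the algorithms.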
Is that what is going on?
Wondering if you have any rational explanation for this apparent difference in ice concentration from the different resolution observations.