Chris Reynolds -- I think the argument people are making is about whether your statistical dataset is actually still valid under the conditions that are now present. I can argue either point of view, but I do think you guys generally wind up talking right past each other on that front. You are using statistical work based on data that has, at its core, an assumption that area/volume/extent (whatever the metric) will move in _relatively_ consistent ways over time in a given weather regime -- that is, that none of the physical processes acting to melt the ice are themselves changing massively in ways we can't capture in that data.
That's the data we have. But it doesn't actually _capture_ the mechanistic processes involved in the melt. It captures the end results of those processes from the top, from satellite, with flaws, and at a resolution that doesn't always tell us as much as we'd like to know.
I don't think you're wrong to work with the data we have, but I do think it's important to realize that it has limits in what it can tell us about the actual underlying processes going on during the melt. I think you're _likely_ to wind up right about the numbers, but statistical extrapolation from historical data assumes that the various factors involved line up now, in their effects, roughly the same ways they have in the past. Given the curves we see, it's not silly to do that; I just lack your confidence that the ocean heat, the atmospheric dynamics, and the actual molecular-level ice structure are all doing roughly the same things. This system _could_ do any number of things, and data from most of the measured past may not actually help now. "It has worked like this" works great, right up until the day it suddenly doesn't work at all.
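To make that stationarity point concrete, here's a toy sketch -- made-up numbers, nothing to do with any real ice dataset -- of the failure mode I mean: a linear fit that tracks a steady decline perfectly still projects badly the moment the process generating the data shifts underneath it.

```python
# Toy illustration (invented numbers, not real ice data): extrapolation
# from a stationary regime vs. what happens after a regime shift.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# "Historical" record: a perfectly steady decline of 2 units/year from 100.
years = list(range(20))
extent = [100.0 - 2.0 * t for t in years]
slope, intercept = linear_fit(years, extent)

# Projection for year 25, assuming the old regime still holds.
projected = slope * 25 + intercept

# But suppose the melt process itself changed at year 20 -- hypothetically,
# losses double once thin, low-concentration ice dominates the pack.
actual = extent[-1] - 4.0 * (25 - 19)

gap = projected - actual  # the fit was flawless; the projection still misses
```

The fit here has zero residual error on the historical record, which is exactly the trap: nothing in the data itself warns you that the generating process is about to change.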
I personally suspect we'll either see the much-anticipated crash very soon, or we won't -- but that this year is then maybe the perfect structural setup for an absolute catastrophe of ice loss next summer. The swaths of lower-concentration ice are huge and reaching deeper than they really have at this scale in the past, and that will affect the whole process from here; every hour is, to some degree, setting up the dynamics of all the hours that come after. The variables that drive the rates here, the underlying factors that feed into the data we have, are not all things we can measure spectacularly well.
Again, I'm not really trying to step into the fray; I just often find these arguments to be _all_ perfectly correct and solid on their own terms, yet usually missing that they're working from different basic frameworks -- "Here's the data, in context!" vs. "What made that data, and is _that_ still working the same way it did in that context?" -- when they interact.