(...) what is the accuracy of these measurements? Down to 1 km², or even 1000 km², is obviously spurious precision?
I would rather not call these measurements. The satellite sensor measures microwave brightness temperatures. These are inputs to a calculation (perhaps involving other inputs) that delivers sea ice concentration on a grid with a certain cell size. Sometimes it stops there; sometimes extent is calculated (possibly with more corrections). In only a few cases is area calculated.
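To illustrate the distinction between concentration, extent, and area described above, here is a minimal sketch. The grid values, cell size, and 15% threshold are illustrative assumptions, not taken from any actual product; real products (NSIDC, JAXA, ASI) use their own grids, land masks, and corrections.

```python
import numpy as np

# Toy sea ice concentration grid (fractions 0..1); real grids are much larger.
conc = np.array([
    [0.00, 0.10, 0.40],
    [0.20, 0.80, 0.95],
    [0.05, 0.60, 1.00],
])
cell_area_km2 = 25.0 * 25.0  # assumed nominal cell size; one systematic-error source

threshold = 0.15  # conventional 15% cutoff for counting a cell as ice-covered

# Extent: total area of all cells at or above the threshold.
extent_km2 = np.sum(conc >= threshold) * cell_area_km2

# Area: concentration-weighted sum over the same thresholded cells.
area_km2 = np.sum(conc[conc >= threshold]) * cell_area_km2

print(extent_km2, area_km2)  # area is always <= extent
```

Note that extent treats a 16%-covered cell the same as a 100%-covered one, which is part of why the two quantities diverge (as in the 1993 case mentioned further down).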
The first step, the satellite measurements, has error levels specified.
The second step only in a few cases. JAXA is close to 10%; ASI (Hamburg, Bremen) may be 15%. These figures are from the top of my head.
I cannot remember seeing extent or even area accuracy being estimated. Full error bars would probably be measured in hundreds of thousands of km². Mostly systematic (grid size etc.), partly random noise.
The day-to-day changes (sticking to one calculation, e.g. JAXA) will be more reliable. Maybe just a few thousand km² in the case of the Antarctic minimum, as crandles observed.
With this in mind, I think the conclusion that the minimum can be called "rather late" is robust (with the possibility that it may later become "very late").
For year-to-year comparisons you need a time series that has been carefully compiled for this purpose. Only NSIDC can really be used. The IFREMER series, which uses the same satellite sensors as NSIDC (but other microwave bands), has been available since 1992. From those I would say that the lowest extent minimum since 1979 also seems robust. 1993 seems an odd year, as it has a relatively very low area minimum but not by extent.