Thx for the link! (Also at
https://arxiv.org/abs/2303.04405)
The authors put forth innovative ideas like dividing the future-frame problem into two components: an optical-flow warp and a warped-image refinement. Training the deep-learning model on a human-centric view of the RGB channels is an attractive concept; as they note, though, that doesn't carry over to grayscale instruments such as Ascat.
Meanwhile OsiSaf offers us two-day gridded ice displacement vectors. Thus a pixel at coordinate (m,n) moves to (p,q), with both angle and distance narrowly quantized by the grid. So the future frame - in the least imaginative approach, Newton's 1st for a rigid pack - just doubles down on that, with (p,q) moving on to (r,s) by the same vector.
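If the displacement field were in hand as two arrays, the extrapolation is one line of arithmetic. A minimal sketch in numpy - the array names and the 400x500 shape are my placeholders, not anything from OsiSaf:

```python
import numpy as np

# Hypothetical displacement field from one two-day OsiSaf window, already
# expressed in Ascat pixels: dy[i, j], dx[i, j] say how far the feature
# now at (row i, col j) moved over the window (shape is a placeholder).
dy = np.zeros((400, 500), dtype=np.float32)
dx = np.zeros((400, 500), dtype=np.float32)

# Newton's 1st for a rigid pack: the same vector repeats, so a feature
# that went (m, n) -> (p, q) continues on to (r, s) = (p + dy, q + dx).
rows, cols = np.mgrid[0:400, 0:500]
r = rows + dy   # predicted row one window ahead
s = cols + dx   # predicted column one window ahead
```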
On a consistently windy week, for a 500x400 pixel depiction of the Arctic Ocean (e.g. at Ascat's resolution), a feature's net change in position (p-m, q-n) might be something like 5 pixels over and 12 pixels down, for a displacement of √(5² + 12²) = 13 pixels, and the same again for the future frame.
Stephan recently dug into the numbers behind OsiSaf in the course of estimating Fram export. Those displacement values could be regridded to the resolution of Ascat as two floating-point layers (one per component) that direct construction, in a spreadsheet-like object, of the warped future frame.
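A sketch of that regridding step, assuming the coarse drift components arrive as 2-D arrays in km (the function name, grid shapes, and km_per_pixel figure are placeholders, not actual OsiSaf or Ascat metadata):

```python
import numpy as np
from scipy.ndimage import zoom

def regrid_drift(dx_km, dy_km, target_shape, km_per_pixel):
    """Upsample a coarse OsiSaf displacement grid to the Ascat image grid.

    dx_km, dy_km : 2-D displacement components on the coarse drift grid,
                   assumed here to be stored in km.
    target_shape : (rows, cols) of the Ascat image, e.g. (400, 500).
    km_per_pixel : Ascat pixel size in km, to convert km -> pixels.
    """
    zy = target_shape[0] / dx_km.shape[0]
    zx = target_shape[1] / dx_km.shape[1]
    # Bilinear (order=1) interpolation yields two smooth floating-point
    # layers, one per displacement component.
    dx_px = zoom(dx_km, (zy, zx), order=1) / km_per_pixel
    dy_px = zoom(dy_km, (zy, zx), order=1) / km_per_pixel
    return dx_px, dy_px
```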
Since the ice rifts and rafts, not all pixels move in the same direction - the motion is coherent only in patches - so many future-frame pixels are superpositions (while other cells receive nothing at all). Future frame quality is easily tested by waiting a week and imaging the subtracted pixel values ('grain extract' in Gimp).
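One plausible way to build the warped frame while coping with those superpositions is to splat each source pixel to its rounded target cell and average any collisions, leaving unreached cells as holes. A hedged sketch under those assumptions, not a claim about how any existing pipeline does it:

```python
import numpy as np

def forward_warp(image, dx_px, dy_px):
    """Splat each source pixel to its predicted location.

    Pixels landing on the same target cell are averaged - one simple way
    to resolve superpositions from rifting and rafting - and cells that
    no pixel reaches are left as NaN holes.
    """
    h, w = image.shape
    rows, cols = np.mgrid[0:h, 0:w]
    r = np.rint(rows + dy_px).astype(int)
    s = np.rint(cols + dx_px).astype(int)
    inside = (r >= 0) & (r < h) & (s >= 0) & (s < w)

    acc = np.zeros((h, w), dtype=np.float64)
    cnt = np.zeros((h, w), dtype=np.float64)
    np.add.at(acc, (r[inside], s[inside]), image[inside].astype(np.float64))
    np.add.at(cnt, (r[inside], s[inside]), 1.0)

    out = np.full((h, w), np.nan)
    hit = cnt > 0
    out[hit] = acc[hit] / cnt[hit]
    return out
```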
Thus the shelf life of a future frame is very short. The main interest is detecting a persistent trend: during steady drift, the imaged future-frame error will be much smaller than during chaotic periods.
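For reference, Gimp's 'grain extract' mode computes base - layer + 128, so a good forecast comes out mid-gray and errors show up as texture. A small sketch of the same test in numpy (the NaN handling for warp holes is my own choice):

```python
import numpy as np

def grain_extract(actual, predicted):
    """Mimic Gimp's 'grain extract' layer mode: base - layer + 128.

    NaN holes left by the forward warp are filled with the actual pixel
    so they render as neutral mid-gray rather than noise.
    """
    pred = np.where(np.isnan(predicted), actual, predicted)
    diff = actual.astype(np.float64) - pred + 128.0
    return np.clip(diff, 0, 255).astype(np.uint8)
```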
Below you can get a sense of what the calculation needs to do with May 10 and May 17 to get at May 24, from the roughness of the third (differencing) pane.