I am not sure these seasonal forecasts are good for anything....
If you are right about that, that's an awful lot of highly skilled time...wasted...
A couple of years ago I did some home-brew testing of the skill of short-term and "seasonal" (= 3-month groupings) forecasts for temperature and precipitation. The criterion was how much improvement the forecasts/outlooks provided compared to a simple climatology estimate as to whether the period would be in the upper, middle, or lower third relative to the ca. 100 years in the historical record. Thus the climatology estimate was a 33.3% chance of each tercile. A 10% improvement means the forecast tool picked the right tercile 36.6% of the time; a 20% improvement means picking the right tercile 40% of the time.
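To make the scoring concrete, here is a minimal sketch of that tercile-verification idea. The data are synthetic stand-ins (not my actual forecast records): forecasts are generated by corrupting the "observed" terciles at random, just to show how the hit rate and improvement-over-climatology numbers are computed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration only: 0 = lower third, 1 = middle, 2 = upper third.
n = 1000
observed = rng.integers(0, 3, size=n)      # observed tercile for each period
forecast = observed.copy()

# Corrupt ~60% of the forecasts at random so skill is modest but above chance.
miss = rng.random(n) < 0.60
forecast[miss] = rng.integers(0, 3, size=miss.sum())

hit_rate = (forecast == observed).mean()   # fraction of correct terciles
climatology = 1.0 / 3.0                    # 33.3% baseline: guess any tercile
improvement = (hit_rate - climatology) / climatology * 100.0

print(f"hit rate: {hit_rate:.1%}, improvement over climatology: {improvement:+.0f}%")
```

With this setup a 10% improvement corresponds to a hit rate of 36.6%, matching the thresholds used in the charts below.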
Results are shown in charts below. Note that the testing is done on NOAA forecasts and outlooks for the Northeastern United States, not the Arctic. Also note that the scales on the two charts are different.
The first chart shows that at the 1-month range, forecast skill is down to around 10% improvement over climatology. The skill decay shown on the chart is relative to the very high skill for the short-range forecasts. So it is a glass half-full vs. half-empty situation. Short-range temperature forecasting is really good, so it is not surprising that longer-range outlooks have less skill.
I was surprised that the 1-3 and 2-4 month temperature outlooks were better than the 1-month outlooks. That may be a fluke, but it may be because they benefit from estimating temperature over a longer, and therefore less specific, time period.
If we consider 10% improvement over climatology as a threshold for useful improvement, then the first chart shows that temperature outlooks, with skill improvement of ca. 15-20% out to at least 2-to-4 months, have some long-range skill for the northeastern U.S. Conversely, precipitation forecasts run out of skill between 14 days and 1 month. That lines up with some studies I've seen that found long-range precip forecasting losing skill at about 3 weeks. But that's just me waving my arms about stuff I don't keep up with in detail, so buyer beware.
The second chart (using a more compressed vertical-axis scale) shows that among the multi-month outlooks, the temperature outlooks stay above 10% improvement out to the 4-to-6-month range, whereas precipitation skill bumps around the floor of statistical noise at every range from 1-to-3 months onward. That again indicates a lack of long-range precipitation forecast skill (though the precip outlooks at least avoided negative scores, which were possible).
That uptick at 7-9 months for temperature forecasts is intriguing. It could just be statistical noise. But it could also reflect ENSO (El Niño/La Niña) forecasts actually having some slight (remember, we are talking about a mere 10% improvement over random guessing) skill at nudging the prediction in the right direction.
A few years ago at a climate modeling workshop in Florida I met lots of folks from the southeastern U.S. and got a different view of things than on my home turf, where there seems to be little attention given to the local impact of ENSO forecasts. Those southeastern U.S. folks absolutely worship the ENSO forecasts, and with good reason: for their region there is a much higher correlation between ENSO state and temperature, and especially precipitation, in the following months. That highlights the fact that my informal testing (not subjected to statistical-significance checks) for the northeastern U.S. does not necessarily apply even to other U.S. regions, much less the entire planet.