Okay, here's the longer answer on why the April extent is not correlated with the September extent. This conversation started with April extent and September extent, and then DavidR started talking about the maximum and minimum (not sure if you mean daily or monthly). I'm going to split the difference and talk about March monthly extent and September monthly extent. All of my numbers come from the NSIDC (different sources will have different numbers, but will have similar results).
It (incorrectly) appears that there is a relationship between the March extent and the September extent. After all, in the early years of the data, the March extent is large and the September extent is also large. In more recent years, the March extent is small and the September extent is small. So it's a reasonable first step to assume that the March extent predicts the September extent. Let's run that analysis, then show why the conclusion does not hold.
We can test this by building a linear model. Sure enough, the equation we get is
Sept = 1.531 * March - 17.315. (all units are million sq km)
(My numbers are close to DavidR's numbers, but not the same. I assume the difference is because he is using the daily maximum and minimum, and I'm using the monthly numbers.) We have a nice linear relationship. If we want to test whether the relationship is statistically significant, we should check the p-value. The p-value reported by R for this linear regression is 10^-6. p-values close to zero indicate significance, so March extent, considered alone, is a statistically significant predictor of September extent.
Going further, using the March value of 14.39, the predicted September extent is 4.72. (I don't know how DavidR computed 2.555. Either he is not computing the NSIDC extent, or there was a computation error.) However, the standard error is 0.7673, which means that the 95% prediction interval is 3.04 - 6.40. This prediction interval is so wide as to be almost useless. (Phrased another way, this prediction interval is saying that the September extent this year is highly likely to be lower than the September extent in 2001, but it could plausibly be higher than the extent in any year since then.)
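To see where the interval comes from: the point prediction is just the fitted equation evaluated at this year's March extent, and the interval is the prediction plus or minus a t quantile times the standard error. The multiplier 2.19 below is my back-of-envelope value for the relevant degrees of freedom (an assumption on my part; the exact quantile depends on the sample size).

```python
# Point prediction from the fitted equation Sept = 1.531 * March - 17.315,
# and a 95% prediction interval of the form prediction +/- t * SE.
slope, intercept = 1.531, -17.315
march_now = 14.39
pred = slope * march_now + intercept      # about 4.72 million sq km
se_pred = 0.7673                          # standard error of the prediction
t_quantile = 2.19                         # assumed 97.5% t quantile (df-dependent)
lo, hi = pred - t_quantile * se_pred, pred + t_quantile * se_pred
print(f"prediction {pred:.2f}, 95% interval ({lo:.2f}, {hi:.2f})")
# prints: prediction 4.72, 95% interval (3.04, 6.40)
```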
So let's go back to the original observation: the extent was high in the early years and lower more recently. So maybe we should regress the September extent on both the year and the March extent. Numbering the years with 1979 = 1 for convenience, we get
Sept = -0.088 * March - 0.090 * year + 9.412
Notice that not only is the coefficient on the March extent much smaller, but it has actually changed sign. This means that once the year is accounted for, a lower March extent is (weakly) associated with a higher September extent. The general declining trend in September extent is explained by the year, not the March extent.
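Here is a from-scratch sketch of the two-predictor fit, solving the normal equations (X^T X) b = X^T y with nothing but the standard library. The September values below are generated from assumed coefficients (roughly the ones above) so the recovered fit can be checked; real data, of course, have noise on top.

```python
def solve3(a, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

march = [16.0, 14.9, 15.6, 14.5]       # hypothetical March extents
year = [1, 2, 3, 4]                    # 1979 = 1, as in the text
b_true = (9.412, -0.088, -0.090)       # assumed intercept, March coef, year coef
sept = [b_true[0] + b_true[1] * m + b_true[2] * t for m, t in zip(march, year)]

# Normal equations (X^T X) b = X^T y with design columns [1, March, year].
X = [[1.0, m, t] for m, t in zip(march, year)]
XtX = [[sum(r[j] * r[k] for r in X) for k in range(3)] for j in range(3)]
Xty = [sum(r[j] * y for r, y in zip(X, sept)) for j in range(3)]
b = solve3(XtX, Xty)
print(f"Sept = {b[1]:.3f} * March + {b[2]:.3f} * year + {b[0]:.3f}")
```

In practice you'd let lm() or statsmodels do this, which also gives you the standard errors and p-values; the point here is only that nothing mysterious happens when the year is added as a covariate.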
We should perform a statistical analysis for significance. The p-value for March extent is 0.809. This is about as large as a p-value can get. The clear conclusion is that after controlling for the year, the March extent is not a statistically significant predictor of September extent.
Using this linear model to predict the September extent this year, we get 4.80. But now the standard error is smaller, at 0.57, so the 95% prediction interval is 3.54 - 6.06. That's still not very good, but it's an improvement over what we had before.
The conclusion is that first, we need to include the year when we do any analysis of sea ice extent. Second, once the year is correctly accounted for, the March extent is not related to the September extent. (Or more generally, the annual maximum does not predict the annual minimum.)
A couple of follow-up points. First, when using earlier values in a time series to predict later values in the same series, detrending the data is essential. Adding the year as a covariate in the linear regression does not properly detrend the data, but it shows the role that the year plays, so I considered it good enough for demonstration purposes.
Second, I haven't mentioned R-squared values at all. This is deliberate. R-squared is the wrong statistic to examine if you are trying to demonstrate a statistically significant relationship. The correct statistic is a p-value, computed from either a t-statistic or an F-statistic.
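For the curious, the slope's p-value works like this: t = estimate / standard error, then the two-sided tail probability of the t distribution. The standard error below (0.29) is a hypothetical illustrative value, not one from my fit, and I use the normal approximation to the t tail, which is close once you have a few dozen observations.

```python
import math

# Hypothetical numbers for illustration: a slope like the March-only fit's,
# with an ASSUMED standard error (not reported above).
slope, se = 1.531, 0.29
t_stat = slope / se                       # t-statistic = estimate / std. error
# Two-sided p-value via the normal approximation to the t distribution
# (close for a few dozen observations; exact values need the t CDF).
p = math.erfc(abs(t_stat) / math.sqrt(2))
print(f"t = {t_stat:.2f}, p ~ {p:.1e}")
```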