Assessment of ENSO in the latest simulations (impact of new ocean physics and clubb diffusion) #162
CVDP comparison looking at 194, 198 and 199: 198 looks pretty good in various ENSO-related metrics, about in line with 194. The seasonal cycle of Nino3.4 standard deviations shows 194 to be a bit strong compared to observations, although nowhere near the strength of 199.

The ENSO spatial composites show the PSL response over the North Pacific looking good for 198/199, while the latter 100 years of 194 show the response to be weaker than in an earlier timeslice. The temperature response over North America/Eurasia looks good for 194/198, but is a bit off from observations for 199. The SSTs of 199 are too strong in the tropical Pacific, but 198 and 194 look good relative to observations.

El Nino hovmollers look decent in terms of pattern and amplitude for 198/194, but 199 regularly lacks the transition to La Nina. La Nina hovmollers look good (but a bit stronger than observed) across all three runs.
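For reference, a minimal sketch of how a seasonal cycle of Nino3.4 standard deviations could be computed from monthly SSTs; variable and coordinate names are assumptions, not the actual CVDP code:

```python
# Sketch: seasonal cycle of Nino3.4 standard deviations from monthly SSTs.
# `sst` is assumed to be an xarray DataArray (time, lat, lon) with lon in
# 0-360; names here are illustrative, not the actual CVDP code.
import numpy as np

def nino34_index(sst):
    """Area-weighted SST anomaly over the Nino3.4 box (5S-5N, 170W-120W)."""
    box = sst.sel(lat=slice(-5, 5), lon=slice(190, 240))
    weights = np.cos(np.deg2rad(box.lat))
    series = box.weighted(weights).mean(("lat", "lon"))
    clim = series.groupby("time.month").mean("time")
    return series.groupby("time.month") - clim   # monthly anomalies

def seasonal_cycle_of_std(index):
    """Standard deviation of the index for each calendar month."""
    return index.groupby("time.month").std("time")
```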
-
Here's an update on a couple of the ENSO metrics with the latest diffusion tests.
-
I'm encouraged by the CMAT scores I'm seeing in 213 - the best we've seen in a while.
-
How much do you think the scores are impacted by the magnitude of the ENSO variability? That is, do we tend to see better pattern-correlation teleconnection scores with higher-amplitude ENSO variability (stronger signal)? I think 213 is showing more ENSO power than observed, if I am remembering correctly.
On Thu, Sep 18, 2025 at 9:39 AM jfasullo wrote:
There is a lot that goes into CMAT scores, so they can be bewildering, but the Z500 ENSO teleconnections offer a nice summary. Here 213 is shown on the top left along with ERA5 (middle) and bias (bottom); it has a pattern correlation of 0.87 (CESM2 = 0.90). Compare these against 192 (right column), which has a pattern correlation of 0.56 and nearly twice the RMS bias. These differences are fundamental to the range of scores in the table above.
[Screenshot: Z500 ENSO teleconnection comparison: https://github.com/user-attachments/assets/9eb7f996-88e2-4aa9-bfa5-293421b502ee]
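For context, pattern correlations and RMS biases of the kind quoted above are area-weighted statistics over the two teleconnection maps. A minimal sketch, assuming the model and reference (e.g., ERA5) maps are 2-D xarray DataArrays on a common grid; an illustration only, not the CMAT implementation:

```python
# Sketch: area-weighted pattern correlation and RMS difference between a model
# Z500 teleconnection map and a reference map (e.g., ERA5), both assumed to be
# 2-D xarray DataArrays on a common (lat, lon) grid. Illustrative only.
import numpy as np

def pattern_stats(model, ref):
    w = np.cos(np.deg2rad(model.lat))

    def wmean(da):
        return da.weighted(w).mean(("lat", "lon"))

    ma = model - wmean(model)   # remove area-weighted means
    ra = ref - wmean(ref)
    corr = wmean(ma * ra) / np.sqrt(wmean(ma**2) * wmean(ra**2))
    rms = np.sqrt(wmean((model - ref) ** 2))
    return float(corr), float(rms)
```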
-
Yeah Dave, 213 is off the charts compared to any recent simulation. A ton of strong low-frequency (>6 yr) variability.
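One rough way to quantify that low-frequency behavior is the fraction of Nino3.4 variance at periods longer than 6 years; the sketch below is illustrative (it assumes a monthly Nino3.4 anomaly series), not an agreed diagnostic:

```python
# Sketch: fraction of Nino3.4 variance at periods longer than 6 years, one way
# to quantify "low-frequency" variability. `nino34_monthly` is an assumed
# monthly anomaly series; this is illustrative, not an agreed diagnostic.
import numpy as np

def low_freq_variance_fraction(nino34_monthly, cutoff_years=6.0):
    x = np.asarray(nino34_monthly, dtype=float)
    x = x - x.mean()
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, d=1.0 / 12.0)  # cycles per year
    low = (freqs > 0) & (freqs < 1.0 / cutoff_years)
    return power[low].sum() / power[1:].sum()
```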
-
Yes, that was a tendency we saw in CESM2 development as well. The model versions with greater power tended to have teleconnections more consistent with observations. With CESM2 we saw that spectra, even based on a 100-year record, exhibit substantial internal variability, so we chose the patterns as the guiding metric.
-
@jfasullo what years are the run 213 CMAT scores based on? Your Z500 teleconnections plot indicates years 25-44; how robust are these results to using longer time samples?
-
Here's a comparison of El Nino teleconnections between 213 and 198. I'm using DJF averages, defining El Nino events as winters when the detrended DJF-averaged Nino3.4 index is above +1 standard deviation, and La Nina events as winters when it is below -1 standard deviation. Here are the SLP anomalies for El Nino winters minus La Nina winters, with the following number of events in each case: Obs (EN = 20, LN = 17), 213 (EN = 17, LN = 19), 198 (EN = 20, LN = 19).
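For clarity, a minimal sketch of that event selection, assuming a monthly Nino3.4 anomaly series as an xarray DataArray (names and details are illustrative):

```python
# Sketch: select El Nino / La Nina winters from a detrended DJF-mean Nino3.4
# index using the +/-1 standard deviation threshold described above. `nino34`
# is an assumed monthly anomaly series (xarray DataArray with a "time" dim).
import numpy as np

def djf_events(nino34, threshold=1.0):
    djf = nino34.resample(time="QS-DEC").mean("time")    # seasonal (DJF/MAM/...) means
    djf = djf.where(djf["time.month"] == 12, drop=True)  # keep only DJF
    # (partial first/last seasons should be dropped in practice)
    yrs = np.arange(djf.sizes["time"])
    trend = np.polyval(np.polyfit(yrs, djf.values, 1), yrs)
    detrended = djf - trend
    z = detrended / detrended.std("time")
    el_nino = z["time"][z > threshold]    # El Nino winters
    la_nina = z["time"][z < -threshold]   # La Nina winters
    return el_nino, la_nina
```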
The following uses North Pacific Index anomalies (DJF-averaged SLP averaged over the black box in the North Pacific) and shows El Nino minus La Nina events. As in Deser et al. (2016), I've produced a range of El Nino minus La Nina differences from observations by bootstrapping with replacement: resampling 20 years from the available El Nino years and 17 years from the available La Nina years. Both 213 and 198 sit within the observational uncertainty range, and given that a similar uncertainty will exist for both 213 and 198, I don't think we can conclude that 213 and 198 are different from each other.
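A minimal sketch of that bootstrap, assuming arrays of DJF North Pacific Index anomalies for the observed El Nino and La Nina winters (array names and the number of resamples are illustrative):

```python
# Sketch: bootstrap range for the observed El Nino minus La Nina difference in
# the North Pacific Index, resampling with replacement as described above.
# Array names and the number of resamples are illustrative.
import numpy as np

def bootstrap_en_minus_ln(npi_el_nino, npi_la_nina, nboot=10000, seed=0):
    rng = np.random.default_rng(seed)
    diffs = np.empty(nboot)
    for i in range(nboot):
        en = rng.choice(npi_el_nino, size=len(npi_el_nino), replace=True)
        ln = rng.choice(npi_la_nina, size=len(npi_la_nina), replace=True)
        diffs[i] = en.mean() - ln.mean()
    return diffs

# e.g., np.percentile(bootstrap_en_minus_ln(obs_npi_en, obs_npi_ln), [2.5, 97.5])
# gives a range against which the model differences can be compared.
```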
So, to follow on from the discussion this morning in the CAM7 meeting, from the perspective of the ENSO teleconnection to the North Pacific, I don't see any reason to think that 213 is better.
-
Yes, thanks for this analysis. Do you know what years you used? It seems that you used all of them, which is good. The main additional perspective I'd add is that, to the extent possible, we should avoid using metrics that are inherently noisy; ENSO composites are a poster child for this and a huge challenge to evaluate. This was the main motivation for using regressions based on July-June averages in CMAT, as it knocked down the noise considerably. Of course, getting the right answer for the wrong reasons isn't comforting either, so I'll defer to your judgement on that!
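As a rough illustration of the regression-based approach (not the actual CMAT code), assuming monthly Z500 and Nino3.4 anomaly DataArrays:

```python
# Sketch: regression-based teleconnection map using July-June averages, in the
# spirit of the approach described above (not the actual CMAT code). `z500`
# (time, lat, lon) and `nino34` (time) are assumed monthly anomaly DataArrays.

def july_june_mean(da):
    """12-month means for 'years' running July through June."""
    ann = da.resample(time="YS-JUL").mean("time")
    return ann.isel(time=slice(1, -1))  # drop partial first/last years

def teleconnection_map(z500, nino34):
    z_ann = july_june_mean(z500)
    n_ann = july_june_mean(nino34)
    n_std = (n_ann - n_ann.mean("time")) / n_ann.std("time")  # standardized index
    z_anom = z_ann - z_ann.mean("time")
    # regression slope at each grid point: Z500 anomaly per std dev of the index
    return (z_anom * n_std).mean("time") / (n_std**2).mean("time")
```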
-
Thanks - yes, I think that kind of regional feature is going to have a lot of inherent noise, as opposed to the global pattern correlations (which already have a lot, for ENSO at least).
-
Here we can gather analysis on the current set of simulations to look at the impact of the new ocean physics in 198 compared to 194 (or 192, which I think is the same as 194 but with different ocean ICs), and also at the simulations with clubb implicit diffusion turned off (199, 202); both of these have the old ocean physics.
Here are a few plots (a short code sketch of these calculations follows after point 3):
(1) Nino3.4 autocorrelation. Turning off clubb diffusion (199) does bad things to the Nino3.4 autocorrelation. The sea-salt tuning in 202 has changed that a bit, but it still doesn't look very good. The new ocean settings in 198 are OK, but the timescale is long. 192 is the best looking in this metric, although it may be hard to distinguish 192 from 198 given the uncertainties; e.g., 192 and 194 look different from each other even though they only differ in their ocean ICs. I removed the first 20 years for this analysis.
(2) Monthly deseasonalized SST variance (after removing a linear trend). The new ocean physics increase the SST variance. We've persistently had too much variance in the east and a disproportionate amount of variance there compared to the west; that is still true with the new ocean settings. Turning clubb diffusion off does crazy things to SST variance in the tropical Pacific in 199 and 202.
(3) Here is a plot of the SST variance averaged over 3S to 3N. It would be nice if we could figure out a way to eliminate the localized bump in variance in the East Pacific and have it spread out in a more reasonable way to the west.
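A minimal sketch of how the three quantities above can be computed, assuming monthly SST and Nino3.4 anomaly DataArrays (names, dimensions, and the 20-year spinup cut are taken from the text above or are illustrative assumptions):

```python
# Sketches for plots (1)-(3) above. `sst` is a monthly (time, lat, lon)
# xarray DataArray and `nino34` a monthly Nino3.4 anomaly series; names are assumed.
import numpy as np
import xarray as xr

def nino34_autocorrelation(nino34, max_lag=36, skip_years=20):
    """(1) Lag autocorrelation of Nino3.4, dropping the first 20 years."""
    x = nino34.isel(time=slice(12 * skip_years, None)).values
    x = x - x.mean()
    denom = np.sum(x * x)
    return np.array([np.sum(x[: x.size - k] * x[k:]) / denom
                     for k in range(max_lag + 1)])

def deseasonalized_variance(sst):
    """(2) Variance of monthly, deseasonalized, linearly detrended SSTs."""
    anom = sst.groupby("time.month") - sst.groupby("time.month").mean("time")
    fit = anom.polyfit(dim="time", deg=1)              # per-gridpoint linear trend
    trend = xr.polyval(anom["time"], fit.polyfit_coefficients)
    return (anom - trend).var("time")

def equatorial_profile(variance):
    """(3) 3S-3N average of the variance field, as a function of longitude."""
    band = variance.sel(lat=slice(-3, 3))
    weights = np.cos(np.deg2rad(band.lat))
    return band.weighted(weights).mean("lat")
```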
@phillips-ad, @jfasullo - perhaps add anything from CVDP or CMAT here?