The Azimuth Project
Experiments with varieties of link strength for El Niño prediction



Ludescher et al use a link strength defined in Experiments in El Niño analysis and prediction. The aim of this page is to explore related notions, and especially to find more comprehensible definitions with similar behaviour.

Peculiarities of the Ludescher definition

If independent gaussian noise is added to the temperatures for each day and for each grid point, the link strength as calculated by Ludescher et al increases. Roughly speaking, adding noise with a standard deviation of 0.05 °C adds about 0.05 to the average link strength. This is very counter-intuitive behaviour for a “link strength” which is supposed to measure correlation.
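One way to see why this can happen (a toy sketch in Python, not from the original page): the statistic (max C − mean C)/sd C rewards a single value sticking out above the spread, and the maximum of many roughly independent noise correlations sticks out further, in these units, than the peak of a smooth curve does.

```python
import numpy as np

rng = np.random.default_rng(1)

# A smooth, strongly "correlated" curve of C(tau) values: its maximum is
# only modestly above the mean relative to the spread.
smooth = np.sin(np.linspace(0, np.pi, 401))
strength_smooth = (smooth.max() - smooth.mean()) / smooth.std()

# |cor| of pure noise is roughly half-normal; the maximum of 401 such
# values is several standard deviations above their mean.
noisy = np.abs(rng.standard_normal(401))
strength_noisy = (noisy.max() - noisy.mean()) / noisy.std()

print(strength_smooth, strength_noisy)
```

So a noisier C(τ) curve can score a *higher* "link strength" than a smooth one, purely as an artifact of the normalisation.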

The following graphs are all based on simulated data. There are two time series of length 566, called “signal 1” and “signal 2” in the graphs, which consist of quadratics q_1 and q_2 plus independent gaussian noise. The noise has the same amplitude (standard deviation) in all cases, but q_1 and q_2 are multiplied by 1000 (leftmost column), 9 (second column), 3 (third column) and 1 (fourth column).

Examples of the signals themselves are shown in the top two rows, the value of C^{(t)}_{i,j}(τ) is in the third row, and the fourth row shows an estimated density of the link strength derived from 100 replicates (different samplings of noise).

In the first column, q_1 and q_2 overwhelm the gaussian noise, so you can see their shapes. In particular, note that they have positive correlation for all delays: it varies between about 0.87 and 0.97. The other three columns are intended to be more realistic signals which roughly resemble climate data. One would expect that as the multiplier for q_1 and q_2 decreases, the link strength would also decrease, but the opposite is the case.
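The experiment can be sketched in Python as well (a hedged translation of the R code below, using the same assumed parameters: 566 days, delays up to ±200, a 365-day comparison window, and the statistic (max C − mean C)/sd C):

```python
import numpy as np

rng = np.random.default_rng(0)
n, tau_max, cperiod = 566, 200, 365
t = np.arange(1, n + 1) / n
q1 = t + (t - 0.5) ** 2   # quadratic underlying "signal 1"
q2 = t - (t - 0.5) ** 2   # quadratic underlying "signal 2"

def link_strength(scale):
    """Ludescher-style strength (max C - mean C)/sd C for one noise replicate."""
    s1 = scale * q1 + rng.standard_normal(n)
    s2 = scale * q2 + rng.standard_normal(n)
    C = []
    for tau in range(-tau_max, tau_max + 1):
        if tau <= 0:
            x, y = s1[:cperiod], s2[-tau:-tau + cperiod]
        else:
            x, y = s1[tau:tau + cperiod], s2[:cperiod]
        C.append(abs(np.corrcoef(x, y)[0, 1]))
    C = np.asarray(C)
    return (C.max() - C.mean()) / C.std()

# Average over replicates: the nearly pure-noise series (scale 1) scores
# a higher "link strength" than the nearly noise-free one (scale 1000).
strong = np.mean([link_strength(1000) for _ in range(20)])
weak = np.mean([link_strength(1) for _ in range(20)])
print(strong, weak)
```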

Simulated link strengths

The code is below.

signal1 <- function(period) {
  x <- period / length(period)
  x + (x-.5)^2
}

signal2 <- function(period) {
  x <- period / length(period)
  x - (x-.5)^2
}

period <- 1:566
tau.max <- 200
tau.range <- -tau.max:tau.max
cperiod <- 365

make.Csamples <- function(nreps, scale) {
  LSs <- rep(0, nreps)
  C <- rep(0, length(tau.range))
  for (r in 1:nreps) {
    # one replicate: scaled quadratics plus fresh gaussian noise
    t1 <- scale * signal1(period) + rnorm(length(period))
    t2 <- scale * signal2(period) + rnorm(length(period))
    # cross-correlations over all delays tau
    for (tau in tau.range) {
      if (tau <= 0) {
        x <- t1[1:cperiod]
        y <- t2[(-tau+1):(-tau+cperiod)]
      } else {
        x <- t1[(tau+1):(tau+cperiod)]
        y <- t2[1:cperiod]
      }
      C[tau.max+1+tau] <- abs(cor(x,y))
    }
    # Ludescher-style link strength for this replicate
    LSs[r] <- (max(C)-mean(C)) / sd(C)
  }
  quantiles <- quantile(LSs, probs=c(.05,.25,.5,.75,.95))
  list(C=C, LSs=LSs, t1=t1, t2=t2, quantiles=round(quantiles, digits=2))
}

op <- par(mfcol=c(4,4), mar=c(4,5,1,1))
for (s in 1:4) {
  scaling <- c(1000,9,3,1)[s]
  dsmps <- make.Csamples(100, scaling)
  maintxt <- paste0("signal scaled by ", scaling)
  plot(period, dsmps$t1, type='l', main=maintxt, ylab="signal 1", xlab="days")
  plot(period, dsmps$t2, type='l', ylab="signal 2", xlab="days")
  plot(tau.range, dsmps$C, type='l', ylab="C(tau)", xlab="tau")
  dens <- density(dsmps$LSs)
  plot(dens$x, dens$y, type='l', xlab="link strength", ylab="density")
}
par(op)

An alternative