The negative and positive correlations of G1 and G2 with the change score, G2 - G1, would arise even with purely random data; they are a generic consequence of regression to the mean.
It looks to me from your plots that G1 and G2 have about the same variance, and let's pretend they are mean-centered as well. What is the expected covariance between the variables in levels and the change score?
Cov(G2 - G1, G2) = E[(G2 - G1)*G2]        // E[.] denotes expectation
                 = E[G2*G2 - G1*G2]
                 = E[G2*G2] - E[G1*G2]    // linearity of expectation
                 = Var(G2) - Cov(G1, G2)
Assuming a stationary series (so Var(G1) = Var(G2)), the covariance can never exceed the variance, since by the Cauchy-Schwarz inequality |Cov(G1,G2)| <= sqrt(Var(G1)*Var(G2)). So Var(G2) - Cov(G1,G2) will be positive, even with random data (i.e., even if the covariance between the levels were zero)!
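A quick simulation sketch makes this concrete (just NumPy on simulated draws; the variable names are mine, nothing from the original question): even when G1 and G2 are drawn independently, so the covariance between the levels is zero, the change score covaries positively with G2.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Independent draws: Cov(G1, G2) ~ 0 and Var(G1) = Var(G2) = 1
g1 = rng.normal(size=n)
g2 = rng.normal(size=n)

# Cov(G2 - G1, G2) = Var(G2) - Cov(G1, G2), so ~ +1 here
print(np.cov(g2 - g1, g2)[0, 1])
```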
The same exercise with Cov(G2 - G1, G1) produces the result Cov(G1, G2) - Var(G1). So again, even with random data, the covariance between G2 - G1 and G1 would be negative (since the covariance can never exceed the variance). Using analogous logic, I show here why differencing a time series will typically introduce a negative autocorrelation.
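The same kind of check (again a sketch with simulated white noise, not your data) illustrates both the negative covariance with G1 and the negative autocorrelation that differencing induces:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=100_000)  # white noise: the levels are uncorrelated

# Cov(G2 - G1, G1) = Cov(G1, G2) - Var(G1), so ~ -1 for adjacent draws
print(np.cov(x[1:] - x[:-1], x[:-1])[0, 1])

# First differences of an uncorrelated series: lag-1 autocorrelation is
# Cov(d_t, d_{t-1}) / Var(d) = -Var(x) / (2 * Var(x)) = -0.5
d = np.diff(x)
print(np.corrcoef(d[1:], d[:-1])[0, 1])  # ~ -0.5
```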
Campbell and Kenny's book, A Primer on Regression Artifacts, is really a book about regression to the mean. They may not have this exact example, but the book is broadly applicable to evaluations of observational panel data designs.