gsg FAQ

Q: Do I have to double quadratic selection gradients?

A: No. Whether or not you include a factor of one half (e.g., something like ...+I(0.5*myTrait^2)... in the gam model that gets passed to gam.gradients()), the same correct estimate of gamma will result. This is inherent to the calculation of the average second partial derivatives in the guts of what gam.gradients() does.
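As a quick check of why, here is a base-R sketch with simulated data (the trait and fitness values are made up): writing the one half into the formula only rescales a regression coefficient, and the fitted fitness surface, which is what gam.gradients() differentiates, is unchanged.

```r
# Sketch with simulated data: the fitted fitness surface is identical
# whether or not the one-half is written into the model formula
set.seed(1)
myTrait <- rnorm(100)
w <- 1 + 0.3 * myTrait - 0.2 * myTrait^2 + rnorm(100, sd = 0.1)

m.plain <- lm(w ~ myTrait + I(myTrait^2))
m.half  <- lm(w ~ myTrait + I(0.5 * myTrait^2))

all.equal(fitted(m.plain), fitted(m.half))  # TRUE: same fitted surface
coef(m.half)[3] / coef(m.plain)[3]          # 2: only the coefficient is rescaled
```

Because the two formulas span the same model space, the half simply doubles the reported quadratic coefficient while leaving every fitted value, and hence the averaged second derivative, the same.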

Q: Does it assume normal phenotype?

A: Short answer: No. The selection gradients returned by gam.gradients() will work with the Lande equation, i.e.,

    \[ \Delta\bar{z} = \mathbf{G}\boldsymbol{\beta} \]


    \[ \Delta\mathbf{G} = \mathbf{G} \left( \boldsymbol{\gamma} - \boldsymbol{\beta}\boldsymbol{\beta}^{\mathsf{T}} \right) \mathbf{G} \]

(note the second expression does not apply between generations – it is the change in $\mathbf{G}$ due to selection, but before recombination) under the assumption that breeding values are multivariate normal. It is worth keeping in mind that non-normal phenotype may be a strong hint that genetic values are also non-normal.
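For concreteness, a small sketch applying these equations with made-up two-trait values (the entries of G, beta, and gamma here are entirely arbitrary):

```r
# Hypothetical additive genetic covariance matrix and selection gradients
G     <- matrix(c(0.5, 0.1,
                  0.1, 0.3), 2, 2)
beta  <- c(0.2, -0.1)
gamma <- matrix(c(-0.05, 0.02,
                   0.02, -0.04), 2, 2)

# predicted per-generation change in mean phenotype (Lande equation)
dzbar <- G %*% beta

# within-generation change in G due to selection (before recombination)
dG <- G %*% (gamma - beta %*% t(beta)) %*% G
```

Selection gradients estimated by gam.gradients() are intended to slot directly into these expressions in place of beta and gamma.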

For me at least, the main ‘test’ of whether a selection gradient is right is whether or not it works with quantitative genetic theory, as above. Note though that selection gradients as defined here do not necessarily reflect the coefficients of the quadratic approximation of the relative fitness function, when phenotype is non-normal.

Q: Doesn’t the link function in a glm or gam influence the selection gradients?

A: Generally, no. If the glm is strictly linear on the link scale (no quadratic or otherwise curved terms), then any curvature in gamma will simply reflect the link function. If there is a quadratic (or spline) term in the glm/gam, then the link function has at most a trivial effect on the selection gradient estimates.
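A sketch of the first point, with simulated survival data (all values made up): a binomial glm that is strictly linear on the link scale still produces a curved fitness function on the data scale, and that curvature comes entirely from the logit link.

```r
# Simulated survival under a strictly linear logit-scale model
set.seed(2)
z <- rnorm(300)
surv <- rbinom(300, 1, plogis(0.5 + z))
m <- glm(surv ~ z, family = binomial)

# predicted fitness on the data scale at three phenotype values
p <- predict(m, newdata = data.frame(z = c(-2, 0, 2)), type = "response")

# a nonzero second difference: curvature induced purely by the link
diff(diff(p))
```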

Q: I have estimated a fitness function with something other than a glm/gam fitted with mgcv.  How do I make gsg work?

A: You can make the gsg algorithm work in the following way:

(i) make an R function that represents your fitness function:

# this is totally arbitrary - maybe it is estimated from data,
# maybe it is a theoretical construct (the name W.func is arbitrary too)
W.func <- function(z) {
    W <- rep(0, length(z))
    W[which(z > 0)] <- 1
    W[which(z > 1)] <- 3
    W
}

(ii) make an R function to calculate mean absolute fitness, as a function of a shift dz in mean phenotype:

Wbar <- function(dz, z) mean(W.func(z + dz))

(iii) calculate the first and second derivatives of mean absolute fitness with respect to mean phenotype:

# assume some data for the distribution of phenotype (perhaps from the
# same individuals as were used to estimate the fitness function, but
# not necessarily) are available
z <- rnorm(200)  # placeholder phenotypic data

# finite differences to closely approximate the derivatives
W0 <- Wbar(0, z)
W1 <- Wbar(0.01, z)
W2 <- Wbar(-0.01, z)

firstDeriv <- (W1 - W2)/0.02

secondDeriv <- ( (W1 - W0)/0.01 - (W0 - W2)/0.01 )/0.01

(iv) scale to relative fitness:

# beta
firstDeriv / W0

# gamma
secondDeriv / W0

The method for obtaining standard errors will depend on the details of the sample data and the analysis from which the fitness function was obtained.  Case bootstrapping is easy and very general.  Multivariate gradients are a matter of applying exactly the same methods, but obtaining partial derivatives in step (iii).
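A self-contained sketch of the case-bootstrap idea, using a hypothetical step-function fitness and simulated phenotype data in place of the objects from steps (i)-(iv) (the fitness function, sample size, and finite-difference step are all made up):

```r
# Case bootstrap for the standard error of beta
set.seed(3)
W.func <- function(z) {           # hypothetical step-function fitness
    W <- rep(0, length(z))
    W[z > 0] <- 1
    W[z > 1] <- 3
    W
}
z <- rnorm(500)                   # placeholder phenotypic data

beta.hat <- function(z) {
    Wbar <- function(dz) mean(W.func(z + dz))
    ((Wbar(0.01) - Wbar(-0.01)) / 0.02) / Wbar(0)
}

# resample individuals (cases) with replacement and re-estimate beta
boot.beta <- replicate(1000, beta.hat(sample(z, replace = TRUE)))
sd(boot.beta)                     # bootstrap standard error of beta
```

When the fitness function itself is estimated from the same individuals, the fitness model should be re-fitted within each bootstrap replicate so that its uncertainty propagates into the standard error.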