Title: | Repeated Measurement Models for Discrete Times |
---|---|
Description: | Companion R package for the course "Statistical analysis of correlated and repeated measurements for health science researchers" taught by the section of Biostatistics of the University of Copenhagen. It implements linear mixed models where the model for the variance-covariance of the residuals is specified via patterns (compound symmetry, toeplitz, unstructured, ...). Statistical inference for mean, variance, and correlation parameters is performed based on the observed information and a Satterthwaite approximation of the degrees of freedom. Normalized residuals are provided to assess model misspecification. Statistical inference can be performed for arbitrary linear or non-linear combination(s) of model coefficients. Predictions can be computed conditional to covariates only or also to outcome values. |
Authors: | Brice Ozenne [aut, cre] |
Maintainer: | Brice Ozenne <[email protected]> |
License: | GPL-3 |
Version: | 1.2.0 |
Built: | 2025-02-17 02:46:08 UTC |
Source: | https://github.com/bozenne/lmmstar |
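A minimal quick-start sketch of the typical workflow (simulated data via sampleRem, model fitting via lmm, inference and diagnostics); the formula and covariance structure below are only an illustration:

library(LMMstar)
set.seed(10)
dL <- sampleRem(100, n.times = 3, format = "long")   ## simulated longitudinal data

## linear mixed model with a compound symmetry covariance pattern
e.lmm <- lmm(Y ~ visit + X1, repetition = ~visit|id, structure = "CS", data = dL)
summary(e.lmm)              ## estimates, confidence intervals, p-values
plot(e.lmm, type = "fit")   ## fitted mean trajectories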
Data extracted from a longitudinal case-control study including 87 patients newly diagnosed with bipolar disorder and 44 age- and sex-matched healthy controls. Contains demographic data and lifestyle factors at baseline, as well as measures of psychosocial functioning at baseline and at the 1-year follow-up. This dataset is in the long format (i.e. one line per measurement).
id
: study participant.
sex
: male (M) or female (F).
age
: age in years.
group
: bipolar disorder (BD) or healthy control (HC).
episode
: whether the patient experienced an affective episode during follow-up.
visit
: index of time at which pss, fast, and qol measurements were performed.
year
: time at which pss, fast, and qol measurements were performed.
pss
: perceived stress score.
fast
: functioning assessment short test.
qol
: WHO quality of life score.
educationyears
: years of education including basic school.
alcohol
: daily alcohol consumption.
missingreason
: reason of drop out or missed visit.
data(abetaL)
Pech, Josefine, et al. "The impact of a new affective episode on psychosocial functioning, quality of life and perceived stress in newly diagnosed patients with bipolar disorder: A prospective one-year case-control study." Journal of Affective Disorders 277 (2020): 486-494.
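A minimal analysis sketch for this dataset; the model specification below is an illustrative assumption, not taken from the original study:

data(abetaL, package = "LMMstar")
## descriptive statistics of quality of life by year
summarize(qol ~ year, data = abetaL, na.rm = TRUE)
## group difference in quality of life over time (unstructured covariance pattern)
e.abeta <- lmm(qol ~ year*group, repetition = ~visit|id, structure = "UN", data = abetaL)
summary(e.abeta)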
Data extracted from a longitudinal case-control study including 87 patients newly diagnosed with bipolar disorder and 44 age- and sex-matched healthy controls. Contains demographic data and lifestyle factors at baseline, as well as measures of psychosocial functioning at baseline and at the 1-year follow-up. This dataset is in the wide format (i.e. one line per participant).
id
: study participant.
sex
: male (M) or female (F).
age
: age in years.
group
: bipolar disorder (BD) or healthy control (HC).
episode
: whether the patient experienced an affective episode during follow-up.
fast0
,fast1
: functioning assessment short test at baseline and follow-up.
qol0
,qol1
: WHO quality of life score at baseline and follow-up.
pss0
,pss1
: perceived stress score at baseline and follow-up.
educationyears
: years of education including basic school.
alcohol
: daily alcohol consumption.
missingreason
: reason of drop out or missed visit.
data(abetaW)
Pech, Josefine, et al. "The impact of a new affective episode on psychosocial functioning, quality of life and perceived stress in newly diagnosed patients with bipolar disorder: A prospective one-year case-control study."Journal of Affective Disorders 277 (2020): 486-494.
Auxiliary function that can be used when specifying the argument columns (e.g. when calling confint.lmm) to add columns to the default output.
add(...)
... |
[character vector] name of the columns to be added to the default output. |
A character vector
set.seed(10)
dW <- sampleRem(25, n.times = 1, format = "long")
e.lmm <- lmm(Y~X1, data = dW)
confint(e.lmm, columns = add("statistic"))
Test linear combinations of parameters from a linear mixed model using a Wald test or a Likelihood Ratio Test (LRT).
## S3 method for class 'lmm'
anova(object, effects = NULL, rhs = NULL, robust = NULL, df = NULL,
      univariate = TRUE, multivariate = TRUE,
      transform.sigma = NULL, transform.k = NULL, transform.rho = NULL,
      simplify = NULL, ...)
object |
a |
effects |
[character or numeric matrix] Should the Wald test be computed for all variables ( |
rhs |
[numeric vector] the right hand side of the hypothesis. Only used when the argument |
robust |
[logical] Should robust standard errors (aka sandwich estimator) be output instead of the model-based standard errors.
Can also be |
df |
[logical] Should degrees-of-freedom be estimated using a Satterthwaite approximation? If yes, the F-distribution (multivariate) and Student's t-distribution (univariate) are used. Otherwise the chi-squared and normal distributions are used. |
univariate |
[logical] Should an estimate, standard error, confidence interval, and p-value be output for each hypothesis? |
multivariate |
[logical] Should all hypotheses be simultaneously tested using a multivariate Wald test? |
transform.sigma , transform.k , transform.rho
|
are passed to the |
simplify |
[logical] should only the estimates, variance-covariance with its gradient, and degrees-of-freedom relative to the parameters involved in the Wald test be stored (TRUE) or relative to all model parameters (0.5) along with their iid decomposition (FALSE). |
... |
Not used. For compatibility with the generic method. |
By default, adjustment of confidence intervals and p-values for multiple comparisons is based on the distribution of the maximum statistic.
This is referred to as a single-step Dunnett multiple testing procedure in Table II of Dmitrienko et al. (2013).
It is performed using the multcomp package with the option test = adjusted("single-step") when degrees-of-freedom are equal, or by simulation using a Student's t copula when degrees-of-freedom are unequal (see the note in the details section of confint.Wald_lmm).
A data.frame (LRT) or a list containing the following elements (Wald):
multivariate
: data.frame containing the multivariate Wald test.
The column df.num
refers to the degrees-of-freedom for the numerator (i.e. number of hypotheses)
whereas the column df.denum
refers to the degrees-of-freedom for the denominator (i.e. Satterthwaite approximation).
univariate
: data.frame containing each univariate Wald test.
glht
: used internally to call functions from the multcomp package.
object
: list containing key information about the linear mixed model.
args
: list containing argument values from the function call.
Dmitrienko, A. and D'Agostino, R., Sr (2013), Traditional multiplicity adjustment methods in clinical trials. Statist. Med., 32: 5172-5218. https://doi.org/10.1002/sim.5990.
summary.Wald_lmm
or confint.Wald_lmm
for a summary of the results. autoplot.Wald_lmm
for a graphical display of the results. rbind.Wald_lmm
for combining results across models and adjusting for multiple comparisons.
#### simulate data in the long format ####
set.seed(10)
dL <- sampleRem(100, n.times = 3, format = "long")

#### fit Linear Mixed Model ####
eUN.lmm <- lmm(Y ~ visit + X1 + X2 + X5,
               repetition = ~visit|id, structure = "UN", data = dL)

#### Multivariate Wald test ####
## F-tests
anova(eUN.lmm)
anova(eUN.lmm, effects = "all")
anova(eUN.lmm, robust = TRUE, df = FALSE)
summary(anova(eUN.lmm), method = "bonferroni")

## user defined F-test
summary(anova(eUN.lmm, effects = c("X1=0","X2+X5=10")))

## chi2-tests
anova(eUN.lmm, df = FALSE)

## with standard contrast
if(require(multcomp)){
  amod <- lmm(breaks ~ tension, data = warpbreaks)
  e.amod <- anova(amod, effect = mcp(tension = "Tukey"))
  summary(e.amod)
}

#### Likelihood ratio test ####
eUN0.lmm <- lmm(Y ~ X1 + X2, repetition = ~visit|id, structure = "UN", data = dL)
anova(eUN.lmm, eUN0.lmm)
eCS.lmm <- lmm(Y ~ X1 + X2 + X5, repetition = ~visit|id, structure = "CS", data = dL)
anova(eUN.lmm, eCS.lmm)
Evaluate the Area Under the Curve (AUC) based on discrete observations y = f(x), using interpolation of order 0 (step), 1 (trapezoidal), or 3 (natural cubic splines). The AUC is the integral of a function over an interval [from,to].
approxAUC(x, y, from, to, method = "trapezoid",
          subdivisions = 100, name = "AUC", na.rm = FALSE)
x |
[numeric vector] x-values. |
y |
[numeric vector] image of the x-values by the function. |
from |
[numeric] lower border of the integration domain. |
to |
[numeric] upper border of the integration domain. |
method |
[character] the type of interpolation: |
subdivisions |
[integer] number of subdivisions to be used when performing numerical integration of the spline.
Only relevant when |
name |
[character] how to name the output. Can be set to |
na.rm |
[logical] in presence of missing values, should complete.cases of |
This function is a simplified version of the AUC
function from the DescTools package.
a numeric value.
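To make explicit what the default trapezoidal interpolation computes, here is a small hand-coded check (illustration only):

x <- 1:3
y <- c(1,2,4)
## trapezoidal rule: sum over intervals of (x[i+1]-x[i]) * (y[i]+y[i+1])/2
sum(diff(x) * (head(y, -1) + tail(y, -1)) / 2)   ## 4.5
approxAUC(x = x, y = y, from = 1, to = 3)        ## same value with method = "trapezoid"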
## same examples as DescTools::AUC
approxAUC(x=c(1,3), y=c(1,1), from = 1, to = 3)
approxAUC(x=1:3, y=c(1,2,4), from = 1, to = 3)
approxAUC(x=1:3, y=c(1,2,4), from = 1, to = 3, method = "step")

x <- seq(0, pi, length.out=200)
approxAUC(x=x, y=sin(x), from = 0, to = pi)
approxAUC(x=x, y=sin(x), from = 0, to = pi, method = "spline")
Graphical representation of a correlation matrix.
## S3 method for class 'correlate'
autoplot(object, index, size.text = 16, name.legend = "Correlation",
         title = NULL, scale = ggplot2::scale_fill_gradient2,
         type.cor = "both", low = "blue", high = "red", mid = "white",
         midpoint = mean(limits), limits = c(-1, 1), ...)

## S3 method for class 'correlate'
plot(x, ...)
object |
[correlate] list of lists of correlation matrices. |
index |
[character vector] optional vector used to select some of the correlation matrices. |
size.text |
[numeric, >0] size of the font used to display text. |
name.legend |
[character] title for the color scale. |
title |
[character] title for the graph. |
scale |
[function] color scale used for the correlation. |
type.cor |
[character] should the whole correlation matrix be displayed ( |
low |
[character] color used to represent low correlation. |
high |
[character] color used to represent high correlation. |
mid |
[character] color used to represent moderate correlation. |
midpoint |
[numeric] value defining a moderate correlation. |
limits |
[numeric vector of length 2] values defining a low and high correlation. |
plot(correlate)
: Graphical Display For Correlation Matrix
Display fitted values or residual plot for the mean, variance, and correlation structure. Can also display quantile-quantile plot relative to the normal distribution.
## S3 method for class 'lmm'
autoplot(object, type = "fit", type.residual = NULL,
         obs.alpha = 0, obs.size = NULL,
         facet = NULL, facet_nrow = NULL, facet_ncol = NULL,
         scales = "fixed", labeller = "label_value",
         at = NULL, time.var = NULL, color = NULL, position = NULL,
         ci = TRUE, ci.alpha = NULL, ylim = NULL,
         mean.size = c(3, 1), size.text = 16,
         position.errorbar = "identity", ...)

## S3 method for class 'lmm'
plot(x, ...)
object , x
|
a |
type |
[character] the type of plot
|
type.residual |
[character] the type of residual to be used or,
when |
obs.alpha |
[numeric, 0-1] When not NA, transparency parameter used to display the original data by cluster. |
obs.size |
[numeric vector of length 2] size of the point and line for the original data. |
facet |
[formula] split the plot into a matrix of panels defined by the variables in the formula.
Internally it calls |
facet_nrow |
[integer] number of rows of panels in the graphical display. |
facet_ncol |
[integer] number of columns of panels in the graphical display. |
scales , labeller
|
[character] Passed to |
at |
[data.frame] values for the covariates at which to evaluate the fitted values or partial residuals. |
time.var |
[character] x-axis variable for the plot. |
color |
[character] name of the variable in the dataset used to color the curve. No color is used when set to |
position |
[character] relative position of the points when colored according to a variable. |
ci |
[logical] should confidence intervals be displayed? |
ci.alpha |
[numeric, 0-1] When not NA, transparency parameter used to display the confidence intervals. |
ylim |
[numeric vector of length 2] the lower and higher value of the vertical axis, or the range of the color scale for correlation or covariance plot. |
mean.size |
[numeric vector of length 2] size of the point and line for the mean trajectory. |
size.text |
[numeric, >0] size of the font used to display text. |
position.errorbar |
[character] relative position of the errorbars. |
... |
arguments passed to the |
A list with two elements
data
: data used to create the graphical display.
plot
: ggplot object.
plot(lmm)
: Graphical Display For a Linear Mixed Model
plot.lmm
for other graphical display (residual plots, partial residual plots).
if(require(ggplot2)){

#### simulate data in the long format ####
set.seed(10)
dL <- sampleRem(100, n.times = 3, format = "long")
dL$X1 <- as.factor(dL$X1)

#### fit Linear Mixed Model ####
eCS.lmm <- lmm(Y ~ visit + X1 + X6,
               repetition = ~visit|id, structure = "CS", data = dL, df = FALSE)

#### model fit ####
plot(eCS.lmm, type = "fit", facet =~X1)
## customize display
gg <- autoplot(eCS.lmm, type = "fit", facet =~X1)$plot
gg + coord_cartesian(ylim = c(0,6))
## restrict to specific covariate value
plot(eCS.lmm, type = "fit", at = data.frame(X6=1), color = "X1")

#### qqplot ####
plot(eCS.lmm, type = "qqplot")
plot(eCS.lmm, type = "qqplot", engine.qqplot = "qqtest")

#### residual correlation ####
plot(eCS.lmm, type = "correlation")

#### residual trend ####
plot(eCS.lmm, type = "scatterplot")

#### residual heteroscedasticity ####
plot(eCS.lmm, type = "scatterplot2")

#### partial residuals ####
plot(eCS.lmm, type = "partial", type.residual = "visit")
plot(eCS.lmm, type = "partial", type.residual = c("(Intercept)","X1","visit"))
plot(eCS.lmm, type = "partial", type.residual = c("(Intercept)","X1","visit"),
     facet = ~X1)
}
Display estimated linear contrasts applied on parameters from subgroup-specific linear mixed models.
## S3 method for class 'mlmm'
autoplot(object, type = "forest", ...)

## S3 method for class 'mlmm'
plot(x, facet_nrow = NULL, facet_ncol = NULL, labeller = "label_value", ...)
plot(mlmm)
: Graphical Display For Multiple Linear Mixed Models
Extract and display the correlation modeled via the linear mixed model.
## S3 method for class 'partialCor'
autoplot(object, size.text = 16, limits = c(-1, 1.00001),
         low = "blue", mid = "white", high = "red", midpoint = 0, ...)

## S3 method for class 'partialCor'
plot(x, ...)
object , x
|
a |
size.text |
[numeric, >0] size of the font used to display text. |
limits |
[numeric vector of length 2] minimum and maximum value of the colorscale relative to the correlation. |
low , mid , high
|
[character] colors for the colorscale relative to the correlation. |
midpoint |
[numeric] correlation value associated with the color defined by argument |
... |
Not used. For compatibility with the generic method. |
A list with two elements
data
: data used to create the graphical display.
plot
: ggplot object.
plot(partialCor)
: Graphical Display For Partial Correlation
if(require(ggplot2)){
data(gastricbypassL, package = "LMMstar")
e.pCor <- partialCor(c(weight,glucagonAUC)~time,
                     repetition = ~visit|id, data = gastricbypassL)
plot(e.pCor)
}
Graphical representation of the profile likelihood from a linear mixed model
## S3 method for class 'profile_lmm'
autoplot(object, type = "logLik", quadratic = TRUE, ci = FALSE,
         size = c(3, 2, 1, 1), linetype = c("dashed", "dashed", "dashed"),
         shape = 19, scales = "free", nrow = NULL, ncol = NULL, ...)

## S3 method for class 'profile_lmm'
plot(x, ...)
object , x
|
an object of class |
type |
[character] Should the log-likelihood ( |
quadratic |
[logical] Should a quadratic approximation of the likelihood be displayed? |
ci |
[logical] Should a 95% confidence intervals obtained from the Wald test (vertical lines) and Likelihood ratio test (horizontal line) be displayed? |
size |
[numeric vector of length 4] Size of the point for the MLE, width of the line representing the likelihood, width of the corresponding quadratic approximation, and width of the line representing the confidence intervals. |
linetype |
[integer vector of length 2] type of line used to represent the quadratic approximation of the likelihood and the confidence intervals. |
shape |
[integer, >0] type of point used to represent the MLE. |
scales , nrow , ncol
|
argument passed to |
... |
Not used. For compatibility with the generic method. |
A list with three elements
data.fit
: data containing the quadratic approximation of the log-likelihood.
data.ci
: data containing the confidence intervals.
plot
: ggplot object.
plot(profile_lmm)
: Display Contour of the log-Likelihood
Graphical representation of the residuals from a linear mixed model.
Requires a long format (except for the correlation where both formats are accepted) and that the dataset has been exported along with the residuals (argument keep.data when calling residuals.lmm).
## S3 method for class 'residuals_lmm'
autoplot(object, type = NULL, type.residual = NULL, time.var = NULL,
         facet = NULL, facet_nrow = NULL, facet_ncol = NULL,
         scales = "fixed", labeller = "label_value",
         engine.qqplot = "ggplot2", add.smooth = TRUE, digits.cor = 2,
         size.text = 16, color = NULL, obs.size = NULL, mean.size = c(3, 1),
         ci.alpha = 0.25, ylim = NULL, position = NULL, ...)

## S3 method for class 'residuals_lmm'
plot(x, ...)
object , x
|
an object of class |
type |
[character] Should a qqplot ( |
type.residual |
[character] Type of residual for which the graphical representation should be made. |
time.var |
[character] x-axis variable for the plot.
Only relevant when argument type is one of |
facet |
[formula] split the plot into a matrix of panels defined by the variables in the formula.
Internally it calls |
facet_nrow |
[integer] number of rows of panels in the graphical display. |
facet_ncol |
[integer] number of columns of panels in the graphical display. |
scales , labeller
|
[character] Passed to |
engine.qqplot |
[character] Should ggplot2 or qqtest be used to display quantile-quantile plots?
Only used when argument |
add.smooth |
[logical] should a local smoother be used to display the mean of the residual values across the fitted values?
Only relevant when argument |
digits.cor |
[integer, >0] Number of digits used to display the correlation coefficients.
No correlation coefficient is displayed when set to 0. Only used when argument |
size.text |
[numeric, >0] Size of the font used to display text when using ggplot2. |
color |
[character] color of the dots representing the observations. When displaying partial residuals, should contain a second color indicating how to display the model fit. |
obs.size |
[numeric vector] size of the dots representing the observations. |
mean.size |
[numeric vector of length 2] size of the point and line for the mean trajectory. |
ci.alpha |
[numeric, 0-1] When not NA, transparency parameter used to display the confidence intervals. |
ylim |
[numeric vector of length 2] the lower and higher value of the vertical axis, or the range of the color scale for correlation or covariance plot. |
position |
[character] relative position of the points when colored according to a variable. |
... |
Not used. For compatibility with the generic method. |
A list with two elements
data
: data used to generate the plot.
plot
: ggplot object.
plot(residuals_lmm)
: Graphical Display of the Residuals
Graphical representation of the descriptive statistics.
## S3 method for class 'summarize'
autoplot(object, type = "mean", variable = NULL,
         size.text = 16, linewidth = 1.25, size = 3, ...)

## S3 method for class 'summarize'
plot(x, ...)
object , x
|
an object of class |
type |
[character] the summary statistic that should be displayed: |
variable |
[character] outcome relative to which the summary statistic should be displayed.
Only relevant when multiple variables have been used on the left hand side of the formula when calling |
size.text |
[numeric, >0] size of the text in the legend, x- and y- labels. |
linewidth |
[numeric, >0] thickness of the line connecting the points. |
size |
[numeric, >0] width of the points. |
... |
additional arguments passed to .ggHeatmap when displaying the correlation:
|
A list with two elements
data
: data used to generate the plot.
plot
: ggplot object.
plot(summarize)
: Graphical Display of Descriptive Statistics
data(gastricbypassL, package = "LMMstar") dtS <- summarize(weight ~ time, data = gastricbypassL) plot(dtS) dtS <- summarize(glucagonAUC + weight ~ time|id, data = gastricbypassL, na.rm = TRUE) plot(dtS, variable = "glucagonAUC") plot(dtS, variable = "glucagonAUC", type = "correlation", size.text = 1)
data(gastricbypassL, package = "LMMstar") dtS <- summarize(weight ~ time, data = gastricbypassL) plot(dtS) dtS <- summarize(glucagonAUC + weight ~ time|id, data = gastricbypassL, na.rm = TRUE) plot(dtS, variable = "glucagonAUC") plot(dtS, variable = "glucagonAUC", type = "correlation", size.text = 1)
Graphical representation of the possible missing data patterns in the dataset.
## S3 method for class 'summarizeNA'
autoplot(object, variable = NULL, size.text = 16, add.missing = " missing",
         order.pattern = NULL, decreasing.order = FALSE,
         title = NULL, labeller = "label_value", ...)

## S3 method for class 'summarizeNA'
plot(x, ...)
object , x
|
a |
variable |
[character] variable for which the missing patterns should be displayed.
Only required when the argument |
size.text |
[numeric, >0] size of the font used to display text. |
add.missing |
[logical] should the number of missing values per variable be added to the x-axis tick labels. |
order.pattern |
[numeric vector or character] in which order the missing data patterns should be displayed. Can either be a numeric vector indexing the patterns or a character referring to ordering the patterns by number of missing values ( |
title |
[character] title of the graphical display. Passed to |
labeller |
[character] how to label each facet: only the value ( |
... |
Not used. For compatibility with the generic method. |
decreasing.order |
[logical] when ordering the missing data pattern (see argument |
A list with two elements
data
: data used to create the graphical display.
plot
: ggplot object.
plot(summarizeNA)
: Graphical Display of Missing Data Pattern
Display estimated linear contrast applied on parameters from a linear mixed model.
## S3 method for class 'Wald_lmm'
autoplot(object, type = "forest", size.text = 16, add.args = NULL, ...)

## S3 method for class 'Wald_lmm'
plot(x, ...)
object , x
|
a |
type |
[character] what to display: a forest plot ( |
size.text |
[numeric, >0] size of the font used to display text. |
add.args |
[list] additional arguments used to customize the graphical display. Must be a named list. See details. |
... |
arguments passed to the confint method. |
Argument add.args: parameters specific to the forest plot:
color
: [logical] should the estimates be colored by global null hypothesis, e.g. when testing the effect of a factor covariate with 3 levels, the two corresponding coefficients will have the same color. Alternatively a vector of positive integers giving the color with which each estimator should be displayed.
shape
: [logical] should the estimates be represented by a different shape per global null hypothesis, e.g. when testing the effect of a factor covariate with 3 levels, the two corresponding coefficients will have the same type of point. Alternatively a vector of positive integers describing the shape to be used for each estimator.
ci
: [logical] should confidence intervals be displayed?
size.estimate
: [numeric, >0] size of the dot used to display the estimates.
size.ci
: [numeric, >0] thickness of the line used to display the confidence intervals.
width.ci
: [numeric, >0] width of the line used to display the confidence intervals.
size.null
: [numeric, >0] thickness of the line used to display the null hypothesis.
Parameters specific to the heatmap plot:
limits
: [numeric vector of length 2] minimum and maximum value of the colorscale relative to the correlation.
low
, mid
, high
: [character] colors for the colorscale relative to the correlation.
midpoint
: [numeric] correlation value associated with the color defined by argument mid
A list with two elements
data
: data used to create the graphical display.
plot
: ggplot object.
plot(Wald_lmm)
: Graphical Display For Wald Tests Applied to a Linear Mixed Model
## From the multcomp package
if(require(datasets) && require(ggplot2)){

## only tests with 1 df
ff <- Fertility ~ Agriculture + Examination + Education + Catholic + Infant.Mortality
e.lmm <- lmm(ff, data = swiss)
e.aovlmm <- anova(e.lmm)

autoplot(e.aovlmm, type = "forest")
autoplot(e.aovlmm, type = "heat") ## 3 color gradient
autoplot(e.aovlmm, type = "heat", add.args = list(mid = NULL)) ## 2 color gradient

## test with more than 1 df
e.lmm2 <- lmm(breaks ~ tension + wool, data = warpbreaks)
e.aovlmm2 <- anova(e.lmm2)
autoplot(e.aovlmm2)
autoplot(e.aovlmm2, add.args = list(color = FALSE, shape = FALSE))
}
Create a new variable based on a time variable and a group variable where groups are constrained to be equal at specific timepoints.
baselineAdjustment(object, variable, repetition, constrain,
                   new.level = NULL, collapse.time = NULL)
object |
[data.frame] dataset |
variable |
[character] Column in the dataset to be constrained at specific timepoints. |
repetition |
[formula] Time and cluster structure, typically |
constrain |
[vector] Levels of the time variable at which the variable is constrained. |
new.level |
[character or numeric] Level used at the constraint. If |
collapse.time |
[character] When not |
A vector of length the number of rows of the dataset.
data(ncgsL, package = "LMMstar") ncgsL$group <- relevel(ncgsL$group, "placebo") ## baseline adjustment 1 ncgsL$treat <- baselineAdjustment(ncgsL, variable = "group", repetition= ~ visit|id, constrain = 1) table(treat = ncgsL$treat, visit = ncgsL$visit, group = ncgsL$group) ncgsL$treattime <- baselineAdjustment(ncgsL, variable = "group", repetition= ~ visit|id, constrain = 1, collapse.time = ".") table(treattime = ncgsL$treattime, visit = ncgsL$visit, group = ncgsL$group) ## baseline adjustment 2 ncgsL$treat2 <- baselineAdjustment(ncgsL, variable = "group", new.level = "baseline", repetition= ~ visit|id, constrain = 1) table(treat = ncgsL$treat2, visit = ncgsL$visit, group = ncgsL$group) ncgsL$treattime2 <- baselineAdjustment(ncgsL, variable = "group", new.level = "baseline", repetition= ~ visit|id, constrain = 1, collapse.time = ".") table(treattime = ncgsL$treattime2, visit = ncgsL$visit, group = ncgsL$group)
data(ncgsL, package = "LMMstar") ncgsL$group <- relevel(ncgsL$group, "placebo") ## baseline adjustment 1 ncgsL$treat <- baselineAdjustment(ncgsL, variable = "group", repetition= ~ visit|id, constrain = 1) table(treat = ncgsL$treat, visit = ncgsL$visit, group = ncgsL$group) ncgsL$treattime <- baselineAdjustment(ncgsL, variable = "group", repetition= ~ visit|id, constrain = 1, collapse.time = ".") table(treattime = ncgsL$treattime, visit = ncgsL$visit, group = ncgsL$group) ## baseline adjustment 2 ncgsL$treat2 <- baselineAdjustment(ncgsL, variable = "group", new.level = "baseline", repetition= ~ visit|id, constrain = 1) table(treat = ncgsL$treat2, visit = ncgsL$visit, group = ncgsL$group) ncgsL$treattime2 <- baselineAdjustment(ncgsL, variable = "group", new.level = "baseline", repetition= ~ visit|id, constrain = 1, collapse.time = ".") table(treattime = ncgsL$treattime2, visit = ncgsL$visit, group = ncgsL$group)
Data from the Bland and Altman study where two methods to measure the peak expiratory flow rate (PEFR) were compared. This dataset is in the long format (i.e. one line per measurement).
id
: patient identifier.
replicate
: index of the measurement (first or second).
method
: device used to make the measurement (Wright peak flow meter or mini Wright peak flow meter).
pefr
: measurement (peak expiratory flow rate).
data(blandAltmanL)
Bland & Altman, Statistical methods for assessing agreement between two methods of clinical measurement, Lancet, 1986; i: 307-310.
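A minimal usage sketch for this dataset (illustration only), summarizing the PEFR measurements per device with the summarize function documented elsewhere in this manual:

data(blandAltmanL, package = "LMMstar")
## mean and dispersion of the measurements for each device
summarize(pefr ~ method, data = blandAltmanL)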
Data from the Bland and Altman study where two methods to measure the peak expiratory flow rate (PEFR) were compared. This dataset is in the wide format (i.e. one line per patient).
id
: patient identifier.
wright1
: first measurement made with a Wright peak flow meter.
wright2
: second measurement made with a Wright peak flow meter.
mini1
: first measurement made with a mini Wright peak flow meter.
mini2
: second measurement made with a mini Wright peak flow meter.
data(blandAltmanW)
Bland & Altman, Statistical methods for assessing agreement between two methods of clinical measurement, Lancet, 1986; i: 307-310.
Data from a cross-over trial comparing the impact of three formulations of a drug on blood pressure. The study was conducted on 12 male volunteers randomly divided into three groups and receiving each of the three formulations with a wash-out period of one week.
id
: patient identifier.
sequence
: sequence of treatment.
treatment
: formulation of the treatment
A (50 mg tablet)
B (100 mg tablet)
C (sustained-release formulation capsule)
period
: time period (in weeks).
duration
: duration of the drug (in hours).
data(bloodpressureL)
TO ADD
Data from a randomized study including 112 girls aged 11, investigating the effect of a calcium supplement (n=55) vs. placebo (n=57) on bone mineral density over a 2-year follow-up. The clinical question is: does a calcium supplement help to increase bone gain in adolescent women? This dataset is in the long format (i.e. one line per measurement).
girl
: patient identifier.
grp
: treatment group: calcium supplement (coded C
) or placebo (coded P
).
visit
: visit index.
bmd
: bone mineral density (mg/cm3).
time.obs
: visit time (in years).
time.num
: scheduled visit time (numeric variable, in years).
time.fac
: scheduled visit time (factor variable).
data(calciumL)
TO ADD
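A minimal analysis sketch for this dataset; the model specification below is an illustrative assumption:

data(calciumL, package = "LMMstar")
## descriptive statistics of bone mineral density per scheduled visit
summarize(bmd ~ time.fac, data = calciumL, na.rm = TRUE)
## group difference in bone mineral density over time (unstructured covariance pattern)
e.calcium <- lmm(bmd ~ time.fac*grp, repetition = ~visit|girl,
                 structure = "UN", data = calciumL)
summary(e.calcium)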
Data from a randomized study including 112 girls aged 11, investigating the effect of a calcium supplement (n=55) vs. placebo (n=57) on bone mineral density over a 2-year follow-up. The clinical question is: does a calcium supplement help to increase bone gain in adolescent women? This dataset is in the wide format (i.e. one line per patient).
girl
: patient identifier
grp
: treatment group: calcium supplement (coded C
) or placebo (coded P
).
obstime1
: time after the start of the study at which the first visit took place (in years).
obstime2
: time after the start of the study at which the second visit took place (in years).
obstime3
: time after the start of the study at which the third visit took place (in years).
obstime4
: time after the start of the study at which the fourth visit took place (in years).
obstime5
: time after the start of the study at which the fifth visit took place (in years).
bmd1
: bone mineral density measured at the first visit (in mg/cm3).
bmd2
: bone mineral density measured at the second visit (in mg/cm3).
bmd3
: bone mineral density measured at the third visit (in mg/cm3).
bmd4
: bone mineral density measured at the fourth visit (in mg/cm3).
bmd5
: bone mineral density measured at the fifth visit (in mg/cm3).
data(calciumW)
Vonesh and Chinchilli 1997. Linear and Nonlinear models for the analysis of repeated measurement (Table 5.4.1 on page 228). New York: Marcel Dekker.
TODO
id
: patient identifier.
allocation
:
sex
:
age
:
visit
:
time
:
pwv
:
aix
:
dropout
:
data(ckdL)
TO ADD
TODO
id
: patient identifier.
allocation
:
sex
:
age
:
pwv0
:
pwv12
:
pwv24
:
aix0
:
aix12
:
aix24
:
dropout
:
data(ckdW)
TO ADD
Extract estimated parameters from a linear mixed model.
## S3 method for class 'lmm'
coef(object, effects = NULL, p = NULL,
     transform.sigma = NULL, transform.k = NULL, transform.rho = "none",
     transform.names = TRUE, simplify = TRUE, ...)
object |
a |
effects |
[character] Should all coefficients be output ( |
p |
[numeric vector] value of the model coefficients to be used. Only relevant if it differs from the fitted values. |
transform.sigma |
[character] Transformation used on the variance coefficient for the reference level. One of |
transform.rho |
[character] Transformation used on the correlation coefficients. One of |
transform.names |
[logical] Should the name of the coefficients be updated to reflect the transformation that has been used? |
simplify |
[logical] Omit from the output the attribute containing the type of each parameter (mu/sigma/k/rho) and the corresponding variance parameters (sigma/k.x/k.y). |
... |
Not used. For compatibility with the generic method. |
transform.sigma:
"none"
output residual standard error.
"log"
output log-transformed residual standard error.
"square"
output residual variance.
"logsquare"
output log-transformed residual variance.
transform.k:
"none"
output ratio between the residual standard error of the current level and the reference level.
"log"
output log-transformed ratio between the residual standard errors.
"square"
output ratio between the residual variances.
"logsquare"
output log-transformed ratio between the residual variances.
"sd"
output residual standard error of the current level.
"logsd"
output log-transformed residual standard error of the current level.
"var"
output residual variance of the current level.
"logvar"
output log-transformed residual variance of the current level.
transform.rho:
"none"
output correlation coefficient.
"atanh"
output correlation coefficient after hyperbolic tangent transformation.
"cov"
output covariance coefficient.
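As an illustration of these transformations, a minimal sketch reusing the simulated data from the examples below (the effects values "variance" and "correlation" are assumed to be valid shortcuts, as in the examples):

set.seed(10)
dL <- sampleRem(100, n.times = 3, format = "long")
eUN.lmm <- lmm(Y ~ X1, repetition = ~visit|id, structure = "UN", data = dL, df = FALSE)

coef(eUN.lmm, effects = "variance")                           ## sigma and k parameters (default scale)
coef(eUN.lmm, effects = "variance", transform.k = "sd")       ## residual standard deviation per level
coef(eUN.lmm, effects = "variance", transform.k = "var")      ## residual variance per level
coef(eUN.lmm, effects = "correlation", transform.rho = "cov") ## covariances instead of correlations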
A vector with the value of the model coefficients.
When using simplify=FALSE
the character strings in attribute "type"
refer to:
"mu"
: mean parameters.
"sigma"
: standard deviation parameters,
"k"
: ratio between standard deviation,
"rho"
: correlation parameter
The character strings in attribute "sigma"
refer, for each parameter, to a possible corresponding standard deviation parameter.
Those in attribute "k.x"
and "k.y"
refer to the ratio parameter.
NA
indicates no corresponding standard deviation or ratio parameter.
confint.lmm
or model.tables.lmm
for a data.frame containing estimates with their uncertainty.
## simulate data in the long format
set.seed(10)
dL <- sampleRem(100, n.times = 3, format = "long")

## fit linear mixed model
eUN.lmm <- lmm(Y ~ X1 + X2 + X5, repetition = ~visit|id,
               structure = "UN", data = dL, df = FALSE)

## output coefficients
coef(eUN.lmm)
coef(eUN.lmm, effects = "variance", transform.k = "sd")
coef(eUN.lmm, effects = "all", simplify = FALSE)
Combine estimated parameters or linear contrasts applied on parameters from group-specific linear mixed models.
## S3 method for class 'mlmm'
coef(object, effects = "Wald", method = "none", p = NULL, ordering = "model",
     backtransform = object$args$backtransform,
     transform.sigma = "none", transform.k = "none", transform.rho = "none",
     transform.names = TRUE, simplify = TRUE, ...)
object |
a |
effects |
[character] By default will output the estimates relative to the hypotheses being tested ( |
method |
[character vector] should the estimated value for the linear contrasts be output (one of |
p |
[list of numeric vector] values for the model parameters to be used to evaluate the estimates relative to the hypotheses being tested. Only relevant if they differ from the fitted values. |
ordering |
[character] should the output be ordered by name of the linear contrast ( |
backtransform |
[logical] should the estimate be back-transformed?
Only relevant when |
transform.sigma |
[character] Transformation used on the variance coefficient for the reference level. One of |
transform.k |
[character] Transformation used on the variance coefficients relative to the other levels. One of |
transform.rho |
[character] Transformation used on the correlation coefficients. One of |
transform.names |
[logical] Should the name of the coefficients be updated to reflect the transformation that has been used?
Ignored when |
simplify |
[logical] should the output be a vector or a list with one element specific to each possible ordering (i.e. contrast or model).
Only relevant when argument |
... |
Not used. For compatibility with the generic method. |
Combine estimated values across linear contrasts applied on parameters from different linear mixed models.
## S3 method for class 'rbindWald_lmm'
coef(object, effects = "Wald", method = "none", ordering = NULL,
     transform.names = TRUE, backtransform = NULL, simplify = TRUE, ...)
object |
a |
effects |
[character] should the linear contrasts involved in the Wald test be output ( |
method |
[character vector] should the estimated value for the linear contrasts be output (one of |
ordering |
[character] should the output be ordered by name of the linear contrast ( |
transform.names |
[logical] should the name of the coefficients be updated to reflect the transformation that has been used?
Only relevant when |
backtransform |
[logical] should the estimates be back-transformed? |
simplify |
[logical] should the output be a vector or a list with one element specific to each possible ordering (i.e. contrast or model).
Only relevant when argument |
... |
Not used. For compatibility with the generic method. |
Argument effects: when evaluating the proportion of rejected hypotheses (effects="p.rejection"), a "single-step" method will be used by default to evaluate the critical quantile. This can be changed by adding an adjustment method in the argument, e.g. effects=c("bonferroni","p.rejection").
Extract estimated value of linear contrasts applied on parameters from a linear mixed model.
## S3 method for class 'Wald_lmm'
coef(object, effects = "Wald", backtransform = NULL,
     transform.names = TRUE, simplify = TRUE, ...)
object |
a |
effects |
[character] should the linear contrasts involved in the Wald test be output ( |
backtransform |
[logical] should the estimates be back-transformed? |
transform.names |
[logical] Should the name of the coefficients be updated to reflect the transformation that has been used?
Only relevant when |
simplify |
[logical] omit from the output the attribute containing the type of each parameter or contrast (mu/sigma/k/rho). |
... |
Not used. For compatibility with the generic method. |
Compute confidence intervals (CIs) relative to parameters from a linear mixed model.
## S3 method for class 'lmm'
confint(object, parm = NULL, level = 0.95, effects = NULL, robust = FALSE,
        null = NULL, columns = NULL, df = NULL, type.information = NULL,
        transform.sigma = NULL, transform.k = NULL, transform.rho = NULL,
        transform.names = TRUE, backtransform = NULL, ...)
object |
a |
parm |
Not used. For compatibility with the generic method. |
level |
[numeric,0-1] the confidence level of the confidence intervals. |
effects |
[character] Should the CIs/p-values for all coefficients be output ( |
robust |
[logical] Should robust standard errors (aka sandwich estimator) be output instead of the model-based standard errors.
Can also be |
null |
[numeric vector] the value of the null hypothesis relative to each coefficient. |
columns |
[character vector] Columns to be output.
Can be any of |
df |
[logical] Should a Student's t-distribution be used to model the distribution of the coefficient. Otherwise a normal distribution is used. |
type.information , transform.sigma , transform.k , transform.rho , transform.names
|
are passed to the |
backtransform |
[logical] should the variance/covariance/correlation coefficient be backtransformed? |
... |
Not used. For compatibility with the generic method. |
A data.frame with one row per coefficient, containing some of the following columns:
column estimate: the estimate.
column se: the standard error.
column statistic: the test statistic.
column df: the degrees-of-freedom.
column lower: the lower bound of the confidence interval.
column upper: the upper bound of the confidence interval.
column null: the null hypothesis.
column p.value: the p-value relative to the null hypothesis.
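A short sketch of how the columns argument selects or extends these columns (reusing the simulated data from the examples below; the add() helper is documented earlier in this manual):

set.seed(10)
dL <- sampleRem(100, n.times = 3, format = "long")
eUN.lmm <- lmm(Y ~ X1 + X2 + X5, repetition = ~visit|id, structure = "UN", data = dL)

## keep only selected columns
confint(eUN.lmm, columns = c("estimate","lower","upper","p.value"))
## add a column to the default output
confint(eUN.lmm, columns = add("statistic"))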
the function anova
to perform inference about linear combinations of coefficients and adjust for multiple comparisons.
coef.lmm
for a simpler output (e.g. only estimates). model.tables.lmm
for a more detailed output (e.g. with p-value).
#### simulate data in the long format ####
set.seed(10)
dL <- sampleRem(100, n.times = 3, format = "long")

#### fit Linear Mixed Model ####
eUN.lmm <- lmm(Y ~ X1 + X2 + X5, repetition = ~visit|id,
               structure = "UN", data = dL)

#### Confidence intervals ####
## based on a Student's t-distribution with transformation
confint(eUN.lmm, effects = "all")
## based on a Student's t-distribution without transformation
confint(eUN.lmm, effects = "all",
        transform.sigma = "none", transform.k = "none", transform.rho = "none")
## based on a Student's t-distribution transformation but not backtransformed
confint(eUN.lmm, effects = "all", backtransform = FALSE)
## based on a Normal distribution with transformation
confint(eUN.lmm, df = FALSE)
Compute pointwise or simultaneous confidence intervals relative to parameter or linear contrasts of parameters from group-specific linear mixed models. Can also output p-values (corresponding to pointwise confidence intervals) or adjusted p-values (corresponding to simultaneous confidence intervals).
## S3 method for class 'mlmm'
confint(object, parm = NULL, level = 0.95, method = NULL, df = NULL,
        columns = NULL, backtransform = NULL, ordering = "parameter", ...)
object |
an |
parm |
Not used. For compatibility with the generic method. |
level |
[numeric,0-1] the confidence level of the confidence intervals. |
method |
[character] Should pointwise confidence intervals be output ( |
ordering |
[character] should the output be ordered by type of parameter ( |
... |
other arguments are passed to |
Statistical inference following pooling is performed according to Rubin's rule, whose validity requires the congeniality condition of Meng (1994). The following pooling methods are available (see the sketch after this list):
"average"
: average estimates
"pool.fixse"
: weighted average of the estimates, with weights being the inverse of the squared standard error. The uncertainty about the weights is neglected.
"pool.se"
: weighted average of the estimates, with weights being the inverse of the squared standard error. The uncertainty about the weights is computed under independence of the variance parameters between models.
"pool.gls"
: weighted average of the estimates, with weights based on the variance-covariance matrix of the estimates. When this matrix is singular, its spectral decomposition is truncated when the corresponding eigenvalues are below a threshold. The uncertainty about the weights is neglected.
"pool.gls1"
: similar to "pool.gls"
but ensures that the weights are at most 1 in absolute value by shrinking them toward the average.
"pool.rubin"
: average of the estimates and compute the uncertainty according to Rubin's rule (Barnard et al. 1999).
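A minimal sketch of how these pooling methods might be used; the call to mlmm() below (with arguments by and effects) and the grouping variable are assumptions for illustration:

set.seed(10)
dL <- sampleRem(100, n.times = 3, format = "long")
dL$gr <- as.numeric(dL$id) %% 2   ## hypothetical binary grouping variable

## same linear mixed model fitted separately within each level of gr
e.mlmm <- mlmm(Y ~ X1 + visit, repetition = ~visit|id, structure = "CS",
               data = dL, by = "gr", effects = "X1=0")

confint(e.mlmm)                        ## group-specific inference for the X1 effect
confint(e.mlmm, method = "pool.se")    ## precision-weighted pooled estimate
confint(e.mlmm, method = "pool.rubin") ## pooling via Rubin's rule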
Meng X. L. (1994). Multiple-imputation inferences with uncongenial sources of input. Statist. Sci. 9, 538-58.
Combine pointwise or simultaneous confidence intervals relative to linear contrasts of parameters from different linear mixed models. Can also output p-values (corresponding to pointwise confidence intervals) or adjusted p-values (corresponding to simultaneous confidence intervals).
## S3 method for class 'rbindWald_lmm'
confint(object, parm, level = 0.95, df = NULL, method = NULL,
        columns = NULL, ordering = NULL, backtransform = NULL, ...)
object |
a |
parm |
Not used. For compatibility with the generic method. |
level |
[numeric, 0-1] nominal coverage of the confidence intervals. |
df |
[logical] Should a Student's t-distribution be used to model the distribution of the Wald statistic. Otherwise a normal distribution is used. |
method |
[character] Should pointwise confidence intervals be output ( |
columns |
[character vector] Columns to be output.
Can be any of |
ordering |
[character] should the output be ordered by name of the linear contrast ( |
backtransform |
[logical] should the estimates be back-transformed? |
... |
Not used. For compatibility with the generic method. |
Argument method: the following pooling methods are available (see the sketch at the end of this section):
"average"
: average estimates
"pool.se"
: weighted average of the estimates, with weights being the inverse of the squared standard error.
"pool.gls"
: weighted average of the estimates, with weights based on the variance-covariance matrix of the estimates. When this matrix is singular, the Moore-Penrose inverse is used, which corresponds to truncating the spectral decomposition for eigenvalues below a threshold.
"pool.gls1"
: similar to "pool.gls"
with weights shrunk toward the average whenever they exceed 1 in absolute value.
"pool.rubin"
: average of the estimates and compute the uncertainty according to Rubin's rule (Barnard et al. 1999). Validity requires the congeniality condition of Meng (1994).
"p.rejection"
: proportion of null hypotheses for which there is evidence of an effect. By default the critical quantile (defining the level of evidence required) is evaluated using a "single-step" method, but this can be changed by adding an adjustment method in the argument method, e.g. method=c("bonferroni","p.rejection").
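A sketch of combining Wald tests from two separately fitted linear mixed models and pooling them; the call pattern (anova with simplify = FALSE followed by rbind, see rbind.Wald_lmm) and the data split are assumptions for illustration:

set.seed(10)
dL <- sampleRem(100, n.times = 3, format = "long")
dL1 <- dL[as.numeric(dL$id) %% 2 == 0, ]   ## hypothetical split into two sub-datasets
dL2 <- dL[as.numeric(dL$id) %% 2 == 1, ]

e.lmm1 <- lmm(Y ~ X1 + visit, repetition = ~visit|id, structure = "CS", data = dL1)
e.lmm2 <- lmm(Y ~ X1 + visit, repetition = ~visit|id, structure = "CS", data = dL2)

## Wald test of the X1 effect in each model, combined across models
e.rbind <- rbind(anova(e.lmm1, effects = "X1=0", simplify = FALSE),
                 anova(e.lmm2, effects = "X1=0", simplify = FALSE))

confint(e.rbind)                      ## per-model confidence intervals
confint(e.rbind, method = "pool.se")  ## pooled estimate across models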
Compute confidence intervals for linear mixed model using resampling (permutation or bootstrap).
## S3 method for class 'resample'
confint(object, parm = NULL, null = NULL, level = 0.95, method = NULL,
        columns = NULL, correction = TRUE, ...)
object |
a |
parm |
Not used. For compatibility with the generic method. |
null |
[numeric vector] the value of the null hypothesis relative to each coefficient. Only relevant when using bootstrap. |
level |
[numeric,0-1] the confidence level of the confidence intervals. |
method |
[character] method used to compute the confidence intervals and p-values: |
columns |
[character vector] Columns to be output.
Can be any of |
correction |
[logical] correction to ensure non-zero p-values when using the percentile method, e.g. with permutations the p-value is evaluated as (#more extreme + 1)/(n.sample + 1) instead of (#more extreme)/(n.sample). |
... |
Not used. For compatibility with the generic method. |
Argument correction: denoting by n.sample the number of permutations that have been performed, the correction avoids p-values of 0 by ensuring they are at least as.numeric(correction)/(n.sample+as.numeric(correction)), e.g. at least 0.000999001 for a thousand permutations.
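A small numerical illustration of this correction:

n.sample <- 1000    ## number of permutations
n.extreme <- 0      ## permutation estimates more extreme than the observed one
n.extreme / n.sample               ## uncorrected p-value: 0
(n.extreme + 1) / (n.sample + 1)   ## corrected p-value: 1/1001, i.e. about 0.000999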
Argument method (bootstrap): percentile confidence intervals are computed based on the quantiles of the resampling distribution.
Gaussian confidence intervals and p-values are computed by assuming a normal distribution and estimating its variance based on the bootstrap distribution.
Studentized confidence intervals are computed using quantiles based on the bootstrap distribution after it has been centered and rescaled by the point estimate and its standard error.
Percentile and Studentized p-values are computed by finding the coverage level such that one of the bounds of the confidence interval equals the null.
Argument method (permutation):
Percentile p-values are computed based on the proportion of times the estimate was more extreme than the permutation estimates.
Gaussian p-values are computed by assuming a normal distribution and estimating its variance based on the permutation distribution.
Studentized p-values are computed based on the proportion of times the test statistic was more extreme than the permutation test statistic.
No confidence intervals are provided.
Compute pointwise or simultaneous confidence intervals relative to linear contrasts of parameters from a linear mixed model. Pointwise confidence intervals have nominal coverage w.r.t. a single contrast whereas simultaneous confidence intervals have nominal coverage w.r.t. to all contrasts. Can also output p-values (corresponding to pointwise confidence intervals) or adjusted p-values (corresponding to simultaneous confidence intervals).
## S3 method for class 'Wald_lmm' confint( object, parm, level = 0.95, df = NULL, method = NULL, columns = NULL, backtransform = NULL, ... )
object |
a |
parm |
Not used. For compatibility with the generic method. |
level |
[numeric, 0-1] nominal coverage of the confidence intervals. |
df |
[logical] Should a Student's t-distribution be used to model the distribution of the Wald statistic. Otherwise a normal distribution is used. |
method |
[character] Should pointwise confidence intervals be output ( |
columns |
[character vector] Columns to be output.
Can be any of |
backtransform |
[logical] should the estimates, standard errors, and confidence intervals be backtransformed? |
... |
Not used. For compatibility with the generic method. |
Available methods are:
"none", "bonferroni", "single-step2"
"holm", "hochberg", "hommel", "BH", "BY", "fdr": adjustment performed by [stats::p.adjust()], no confidence interval is computed.
"single-step", "free", "Westfall", "Shaffer": adjustment performed by [multcomp::glht()], for all but the first method no confidence interval is computed.
Note: method "single-step"
adjusts for multiple comparisons using equicoordinate quantiles of the multivariate Student's t-distribution over all tests, instead of the univariate quantiles. It assumes equal degrees-of-freedom in the marginal and is described in section 7.1 of Dmitrienko et al. (2013) under the name single-step Dunnett procedure. The name "single-step"
is borrowed from the multcomp package. In the book Bretz et al. (2010) written by the authors of the package, the procedure is refered to as max-t tests which is the terminology adopted in the LMMstar package.
When the degrees-of-freedom differ between individual hypotheses, method "single-step2" is recommended. It simulates data from a copula whose marginal distributions are Student's t-distributions (with possibly different degrees-of-freedom) and whose dependence is an elliptical copula parametrized by the estimated correlation between the test statistics (via the copula package). It then computes the frequency at which the simulated maxima exceed the observed maximum, and the appropriate quantile of the simulated maxima for the confidence intervals.
Barnard and Rubin, Small-sample degrees of freedom with multiple imputation. Biometrika (1999), 86(4):948-955.
Dmitrienko, A. and D'Agostino, R., Sr (2013), Traditional multiplicity adjustment methods in clinical trials. Statist. Med., 32: 5172-5218.
Frank Bretz, Torsten Hothorn and Peter Westfall (2010), Multiple Comparisons Using R, CRC Press, Boca Raton.
Compute correlation matrix for multiple variables and/or multiple groups.
correlate( formula, data, repetition, use = "everything", method = "pearson", collapse.value = ".", collapse.var = ".", na.rm )
formula |
[formula] on the left hand side the outcome(s) and on the right hand side the grouping variables. E.g. Y1+Y2 ~ Gender will compute, for each gender, the correlation matrix for Y1 and the correlation matrix for Y2. |
data |
[data.frame] dataset containing the observations. |
repetition |
[formula] Specify the structure of the data: the time/repetition variable and the grouping variable, e.g. ~ time|id. |
use |
[character] method for computing correlation in the presence of missing values: |
method |
[character] type of correlation coefficient: |
collapse.value |
[character] symbol used to combine covariate values when using multiple grouping variables. |
collapse.var |
[character] symbol used to combine variable names to be pasted left of the covariate values when using multiple grouping variables.
Can be disabled setting it to |
na.rm |
[logical] not used. Included because the user may expect this argument; it only triggers a helpful message pointing toward the argument |
as.array
to convert the output to an array or as.matrix
to convert the output to a matrix (when a single outcome and no grouping variable). summarize
for other summary statistics (e.g. mean, standard deviation, ...).
#### gastric bypass data (long format) ####
data(gastricbypassL, package = "LMMstar")

## compute correlations (composite time variable)
e.S <- correlate(weight ~ 1, data = gastricbypassL, repetition = ~time|id)
e.S
as.matrix(e.S)

e.S21 <- correlate(glucagonAUC + weight ~ 1, data = gastricbypassL,
                   repetition = ~time|id, use = "pairwise")
e.S21
as.array(e.S21)
as.matrix(e.S21, index = "weight")

gastricbypassL$sex <- as.numeric(gastricbypassL$id) %% 2

e.S12 <- correlate(weight ~ sex, data = gastricbypassL,
                   repetition = ~time|id, use = "pairwise")
e.S12
as.array(e.S12)

e.S22 <- correlate(glucagonAUC + weight ~ sex, data = gastricbypassL,
                   repetition = ~time|id, use = "pairwise")
e.S22
as.array(e.S22)
Variance-covariance structure where the variance and correlation of the residuals are constant within covariate levels. Can be stratified on a categorical variable. The default has no covariate, in which case the variance and correlation are constant within cluster.
CS(formula, var.cluster, var.time, type = NULL, group.type = NULL, add.time)
formula |
formula indicating on which variable to stratify the residual variance and correlation (left hand side) and variables influencing the residual variance and correlation (right hand side). |
var.cluster |
[character] cluster variable. |
var.time |
[character] time variable. |
type |
[character]
|
group.type |
[integer vector] grouping of the regressor for the correlation structure. A constant value corresponds to nested random effects (default) and a regressor-specific value to crossed random effects |
add.time |
not used. |
A typical formula would be ~1
, indicating a variance constant over time and the same correlation between all pairs of times.
An object of class CS
that can be passed to the argument structure
of the lmm
function.
## no covariates CS(~1, var.cluster = "id", var.time = "time") CS(gender~1, var.cluster = "id", var.time = "time") ## covariates CS(~time, var.cluster = "id", var.time = "time") CS(gender~time, var.cluster = "id", var.time = "time") CS(list(~time,~1), var.cluster = "id", var.time = "time") CS(list(gender~time,gender~1), var.cluster = "id", var.time = "time")
Variance-covariance structure specified by the user.
CUSTOM( formula, var.cluster, var.time, FCT.sigma, dFCT.sigma = NULL, d2FCT.sigma = NULL, init.sigma, FCT.rho, dFCT.rho = NULL, d2FCT.rho = NULL, init.rho, add.time )
formula |
formula indicating variables influencing the residual variance and correlation (right hand side). |
var.cluster |
[character] cluster variable. |
var.time |
[character] time variable. |
FCT.sigma |
[function] takes as arguments the model parameters, time, and design matrix. Outputs the vector of residual standard deviations. |
dFCT.sigma |
[list of vectors] list whose elements are the first derivative of argument |
d2FCT.sigma |
[list of vectors] list whose elements are the second derivative of argument |
init.sigma |
[numeric vector] initial value for the variance parameters. |
FCT.rho |
[function] takes as arguments the model parameters, time, and design matrix. Outputs the residual correlation matrix. |
dFCT.rho |
[list of matrices] list whose elements are the first derivative of argument |
d2FCT.rho |
[list of matrices] list whose elements are the second derivative of argument |
init.rho |
[numeric vector] initial value for the correlation parameters. |
add.time |
not used. |
An object of class CUSTOM
that can be passed to the argument structure
of the lmm
function.
## Compound symmetry structure CUSTOM(~1, FCT.sigma = function(p,n.time,X){rep(p,n.time)}, init.sigma = c("sigma"=1), dFCT.sigma = function(p,n.time,X){list(sigma = rep(1,n.time))}, d2FCT.sigma = function(p,n.time,X){list(sigma = rep(0,n.time))}, FCT.rho = function(p,n.time,X){ matrix(p,n.time,n.time)+diag(1-p,n.time,n.time) }, init.rho = c("rho"=0.5), dFCT.rho = function(p,n.time,X){ list(rho = matrix(1,n.time,n.time)-diag(1,n.time,n.time)) }, d2FCT.rho = function(p,n.time,X){list(rho = matrix(0,n.time,n.time))} ) ## 2 block structure rho.2block <- function(p,n.time,X){ rho <- matrix(0, nrow = n.time, ncol = n.time) rho[1,2] <- rho[2,1] <- rho[4,5] <- rho[5,4] <- p["rho1"] rho[1,3] <- rho[3,1] <- rho[4,6] <- rho[6,4] <- p["rho2"] rho[2,3] <- rho[3,2] <- rho[5,6] <- rho[6,5] <- p["rho3"] rho[4:6,1:3] <- rho[1:3,4:6] <- p["rho4"] return(rho) } drho.2block <- function(p,n.time,X){ drho <- list(rho1 = matrix(0, nrow = n.time, ncol = n.time), rho2 = matrix(0, nrow = n.time, ncol = n.time), rho3 = matrix(0, nrow = n.time, ncol = n.time), rho4 = matrix(0, nrow = n.time, ncol = n.time)) drho$rho1[1,2] <- drho$rho1[2,1] <- drho$rho1[4,5] <- drho$rho1[5,4] <- 1 drho$rho2[1,3] <- drho$rho2[3,1] <- drho$rho2[4,6] <- drho$rho2[6,4] <- 1 drho$rho3[2,3] <- drho$rho3[3,2] <- drho$rho3[5,6] <- drho$rho3[6,5] <- 1 drho$rho4[4:6,1:3] <- drho$rho4[1:3,4:6] <- 1 return(drho) } d2rho.2block <- function(p,n.time,X){ d2rho <- list(rho1 = matrix(0, nrow = n.time, ncol = n.time), rho2 = matrix(0, nrow = n.time, ncol = n.time), rho3 = matrix(0, nrow = n.time, ncol = n.time), rho4 = matrix(0, nrow = n.time, ncol = n.time)) return(d2rho) } CUSTOM(~variable, FCT.sigma = function(p,n.time,X){rep(p,n.time)}, dFCT.sigma = function(p,n.time,X){list(sigma=rep(1,n.time))}, d2FCT.sigma = function(p,n.time,X){list(sigma=rep(0,n.time))}, init.sigma = c("sigma"=1), FCT.rho = rho.2block, dFCT.rho = drho.2block, d2FCT.rho = d2rho.2block, init.rho = c("rho1"=0.25,"rho2"=0.25,"rho3"=0.25,"rho4"=0.25))
Estimate the residual degrees-of-freedom from a linear mixed model.
## S3 method for class 'lmm' df.residual(object, ...)
object |
a |
... |
Passed to |
The residual degrees-of-freedom is estimated using the sum of squared normalized residuals.
A numeric value.
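A brief usage sketch (gastric bypass data shipped with the package):

## residual degrees-of-freedom of a fitted linear mixed model
data(gastricbypassL, package = "LMMstar")
e.lmm <- lmm(weight ~ visit, repetition = ~visit|id, data = gastricbypassL)
df.residual(e.lmm)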
Combine the residuals degrees-of-freedom from group-specific linear mixed models.
## S3 method for class 'mlmm' df.residual(object, ...)
object |
a |
... |
Passed to |
The residual degrees-of-freedom is estimated separately for each model using the sum of squared normalized residuals.
A numeric vector with one element for each model.
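A brief sketch, assuming group-specific models are fitted with the package's mlmm function and its by and effects arguments (that function is not documented in this section, so the call below is an assumption):

## group-specific linear mixed models (one per level of X4), then their residual df
## the mlmm() arguments are assumed from the LMMstar documentation, not shown here
set.seed(10)
dL <- sampleRem(100, n.times = 3, format = "long")
e.mlmm <- mlmm(Y ~ visit + X1, repetition = ~visit|id, data = dL,
               by = "X4", effects = "X1=0")
df.residual(e.mlmm)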
This expands the mean coefficients of the linear mixed model into one coefficient per level of the original variable, i.e., including the reference level(s) where the fitted coefficients are 0.
## S3 method for class 'lmm' dummy.coef(object, use.na = FALSE, ...)
object |
a |
use.na |
[logical] Should |
... |
Not used. For compatibility with the generic method. |
## simulate data in the long format set.seed(10) dL <- sampleRem(100, n.times = 3, format = "long") dL$group <- as.factor(paste0("d",dL$X1)) ## fit mixed model with interaction eUN.lmm <- lmm(Y ~ visit*group, repetition =~visit|id, data = dL, df = FALSE) dummy.coef(eUN.lmm) ## fit mixed model with baseline constraint dL$drug <- dL$group dL[dL$visit==1,"drug"] <- "d0" eUN.clmm <- lmm(Y ~ visit + visit:drug, repetition =~visit|id, data = dL, df = FALSE) dummy.coef(eUN.clmm) dL$drug <- factor(dL$group, levels = c("none","d0","d1")) dL[dL$visit==1,"drug"] <- "none" eUN.clmm.none <- lmm(Y ~ visit:drug, repetition =~visit|id, data = dL, df = FALSE) dummy.coef(eUN.clmm.none)
Estimate average counterfactual outcome or contrast of outcome based on a linear mixed model.
## S3 method for class 'lmm' effects( object, variable, effects = "identity", type = "outcome", repetition = NULL, conditional = NULL, ref.repetition = 1, ref.variable = 1, newdata = NULL, rhs = NULL, multivariate = FALSE, prefix.time = NULL, prefix.var = TRUE, sep.var = ",", ... )
object |
a |
variable |
[character/list] exposure variable relative to which the effect should be computed. Can also be a list with two elements: the first being the variable (i.e. a character) and the second the levels or values for this variable to be considered. |
effects |
[character] should the average counterfactual outcome for each variable level be evaluated ( |
type |
[character/numeric vector] Possible transformation of the outcome: no transformation ( |
repetition |
[character vector] repetition at which the effect should be assessed. By default it will be assessed at all repetitions. |
conditional |
[character/data.frame] variable(s) conditional on which the average counterfactual outcome or treatment effect should be computed. Alternatively can be a data.frame where each column corresponds to a variable and the rows to the levels of the variable(s). |
ref.repetition |
[numeric or character] index or value of the reference level for the repetition variable.
Only relevant when |
ref.variable |
[numeric or character] index or value of the reference level for the exposure variable.
Only relevant when |
newdata |
[data.frame] a dataset reflecting the covariate distribution relative to which the average outcome or contrast should be computed. |
rhs |
[numeric] the right hand side of the hypothesis. |
multivariate |
[logical] should a multivariate Wald test be used to simultaneously test all null hypotheses. |
prefix.time |
[character] When naming the estimates, text to be pasted before the value of the repetition variable.
Only relevant when |
prefix.var |
[logical] When naming the estimates, should the variable name be added or only the value? |
sep.var |
[character] When naming the estimates, text to be pasted between the values to condition on.
Only relevant when |
... |
Arguments passed to |
The uncertainty is quantified assuming the contrast matrix to be known a priori. In other words, the standard error does not account for the uncertainty about the covariate distribution.
#### simulate data in the long format #### set.seed(10) dL <- sampleRem(100, n.times = 3, format = "long") #### Linear Mixed Model #### eUN.lmm <- lmm(Y ~ visit + X1 + X2 + X5, repetition = ~visit|id, structure = "UN", data = dL) ## outcome e.YbyX1 <- effects(eUN.lmm, variable = "X1") e.YbyX1 summary(e.YbyX1) model.tables(e.YbyX1) coef(e.YbyX1, type = "contrast") effects(eUN.lmm, effects = "difference", variable = "X1") effects(eUN.lmm, effects = "difference", variable = "X1", repetition = "3") ## change effects(eUN.lmm, type = "change", variable = "X1") effects(eUN.lmm, type = "change", variable = "X1", ref.repetition = 2) effects(eUN.lmm, type = "change", variable = "X1", conditional = NULL) effects(eUN.lmm, type = "change", effects = "difference", variable = "X1") ## auc effects(eUN.lmm, type = "auc", variable = "X1") effects(eUN.lmm, type = "auc", effects = "difference", variable = "X1") #### fit Linear Mixed Model with interaction #### dL$X1.factor <- as.factor(dL$X1) dL$X2.factor <- as.factor(dL$X2) eUN.lmmI <- lmm(Y ~ visit * X1.factor + X2.factor + X5, repetition = ~visit|id, structure = "UN", data = dL) ## average counterfactual conditional to a categorical covariate effects(eUN.lmmI, variable = "X1.factor", conditional = "X2.factor", repetition = "3") effects(eUN.lmmI, type = "change", variable = "X1.factor", conditional = "X2.factor", repetition = "3") effects(eUN.lmmI, type = "auc", variable = "X1.factor", conditional = "X2.factor") ## average difference in counterfactual conditional to a categorical covariate effects(eUN.lmmI, effects = "difference", variable = "X1.factor", conditional = c("X2.factor"), repetition = "3") effects(eUN.lmmI, effects = "difference", type = "change", variable = "X1.factor", conditional = c("X2.factor"), repetition = "3") effects(eUN.lmmI, effects = "difference", type = "auc", variable = "X1.factor", conditional = "X2.factor") ## average difference in counterfactual conditional to a covariate effects(eUN.lmmI, effect = "difference", variable = "X1.factor", conditional = data.frame(X5=0:2), repetition = "3") effects(eUN.lmmI, effect = "difference", type = "change", variable = "X1.factor", conditional = data.frame(X5=0:2))
Extract the score function to be used by the multcomp package. For internal use.
## S3 method for class 'lmm' estfun(x, ...)
x |
a |
... |
Not used. For compatibility with the generic method. |
A matrix containing the score function for each model parameter (columns) relative to each cluster (rows).
## simulate data in the long format set.seed(10) dL <- sampleRem(100, n.times = 3, format = "long") ## fit Linear Mixed Model eUN.lmm <- lmm(Y ~ X1 + X2 + X5, repetition = ~visit|id, structure = "UN", data = dL, df = FALSE) ## test multiple linear hypotheses if(require(multcomp)){ LMMstar.options(effects = c("mean")) e.glht <- multcomp::glht(eUN.lmm) e.glht$linfct }
Estimate standard errors, confidence intervals, and p-values for a smooth transformation of parameters from a linear mixed model.
## S3 method for class 'lmm' estimate( x, f, df = !is.null(x$df) & !robust, robust = FALSE, type.information = NULL, level = 0.95, method.numDeriv = NULL, average = FALSE, transform.sigma = NULL, transform.k = NULL, transform.rho = NULL, ... )
x |
a |
f |
[function] function taking as input |
df |
[logical] Should the degrees-of-freedom, computed using a Satterthwaite approximation, be output for the parameter of interest. Can also be a numeric vector providing the degrees-of-freedom relative to each estimate. |
robust |
[logical] Should robust standard errors (aka sandwich estimator) be output instead of the model-based standard errors.
Can also be |
type.information |
[character] Should the expected information be used (i.e. minus the expected second derivative) or the observed information (i.e. minus the second derivative). |
level |
[numeric,0-1] the confidence level of the confidence intervals. |
method.numDeriv |
[character] method used to approximate the gradient: either |
average |
[logical] is the estimand the average output of argument |
transform.sigma |
[character] Transformation used on the variance coefficient for the reference level. One of |
transform.k |
[character] Transformation used on the variance coefficients relative to the other levels. One of |
transform.rho |
[character] Transformation used on the correlation coefficients. One of |
... |
extra arguments passed to |
Based on a first-order delta method to evaluate the variance of the estimate.
The derivative of the transformation is evaluated using numerical differentiation (numDeriv::jacobian).
Argument robust: the Satterthwaite approximation for the degrees-of-freedom of robust standard errors is often unreliable. This is why the default is to use the degrees-of-freedom of the model-based standard errors instead.
if(require(lava) && require(nlme)){ #### Random effect #### set.seed(10) dL <- sampleRem(1e2, n.times = 3, format = "long") e.lmm1 <- lmm(Y ~ X1+X2+X3 + (1|id), repetition = ~visit|id, data = dL) nlme::ranef(e.lmm1, se = TRUE) e.ranef <- estimate(e.lmm1, f = function(object, p){nlme::ranef(object, p = p)}) e.ranef if(require(ggplot2)){ df.gg <- cbind(index = 1:NROW(e.ranef), e.ranef) gg.ranef <- ggplot(df.gg, aes(x = index, y=estimate, ymin=lower, ymax = upper)) gg.ranef + geom_point() + geom_errorbar() + ylab("estimated random effect") + xlab("id") } #### ANCOVA via mixed model #### set.seed(10) d <- sampleRem(1e2, n.time = 2) e.ANCOVA1 <- lm(Y2~Y1+X1, data = d) dL2 <- reshape(d, direction = "long", idvar = c("id","X1"), timevar = "time", times = c("1","2"), varying = c("Y1","Y2"), v.names = "Y") ## estimated treatment effect (no baseline constraint) e.lmm <- lmm(Y ~ time + time:X1, data = dL2, repetition = ~time|id) e.delta <- estimate(e.lmm, function(p){ c(Y1 = p["rho(1,2)"]*p["k.2"], X1 = p["time2:X1"]-p["k.2"]*p["rho(1,2)"]*p["time1:X1"]) }) ## same estimate and similar standard errors. e.delta ## Degrees-of-freedom are a bit off though cbind(summary(e.ANCOVA1)$coef, df = df.residual(e.ANCOVA1)) ## estimated treatment effect (baseline constraint) dL2$time2 <- as.numeric(dL2$time=="2") e.lmmC <- lmm(Y ~ time2 + time2:X1, data = dL2, repetition = ~time|id) e.deltaC <- estimate(e.lmmC, function(p){ c(Y1 = p["rho(1,2)"]*p["k.2"], X1 = p["time2:X1"]) }) e.deltaC ## Degrees-of-freedom are a bit more accurate }
Estimate standard errors, confidence intervals, and p-values for a smooth transformation of parameters from group-specific linear mixed models.
## S3 method for class 'mlmm' estimate( x, f, df = FALSE, robust = FALSE, type.information = NULL, level = 0.95, method.numDeriv = NULL, average = FALSE, transform.sigma = NULL, transform.k = NULL, transform.rho = NULL, ... )
x |
a |
f |
[function] function taking as input |
df |
[logical] Should the degrees-of-freedom, computed using a Satterthwaite approximation, be output for the parameter of interest. Can also be a numeric vector providing the degrees-of-freedom relative to each estimate. |
robust |
[logical] Should robust standard errors (aka sandwich estimator) be output instead of the model-based standard errors.
Can also be |
type.information |
[character] Should the expected information be used (i.e. minus the expected second derivative) or the observed information (i.e. minus the second derivative). |
level |
[numeric,0-1] the confidence level of the confidence intervals. |
method.numDeriv |
[character] method used to approximate the gradient: either |
average |
[logical] is the estimand the average output of argument |
transform.sigma |
[character] Transformation used on the variance coefficient for the reference level. One of |
transform.k |
[character] Transformation used on the variance coefficients relative to the other levels. One of |
transform.rho |
[character] Transformation used on the correlation coefficients. One of |
... |
extra arguments passed to |
Evaluate the expected outcome conditional on covariates or the expected missing outcome value conditional on observed outcomes and covariates based on a linear mixed model.
## S3 method for class 'lmm' fitted( object, newdata = NULL, type = "mean", se = NULL, df = NULL, keep.data = NULL, format = "long", seed = NULL, simplify = TRUE, ... )
object |
a |
newdata |
[data.frame] the covariate values for each cluster. |
type |
[character] by default fitted values are output ( |
se |
[character] passed to |
df |
[logical] should a Student's t-distribution be used to model the distribution of the predicted mean. Otherwise a normal distribution is used. |
keep.data |
[logical] should the dataset relative to which the predictions are evaluated be output alongside the predicted values? Only possible in the long format. |
format |
[character] should the prediction be output
in a matrix format with clusters in row and timepoints in columns ( |
seed |
[integer, >0] random number generator (RNG) state used when starting imputation. If NULL no state is set. |
simplify |
[logical] simplify the data format (vector instead of data.frame) and column names (no mention of the time variable) when possible. |
... |
additional arguments passed to the |
Essentially a wrapper function of predict.lmm
where the fitted values are stored in the outcome column of the dataset instead of in a separate column.
When format="wide"
, a data.frame with as many rows as clusters.
When format="long"
, a data.frame with as many rows as observations (keep.data==TRUE
)
or a vector of length the number of observations (keep.data==TRUE
).
#### single arm trial #### data(gastricbypassL, package = "LMMstar") gastricbypassL <- gastricbypassL[order(gastricbypassL$id,gastricbypassL$visit),] gastricbypassL$weight0 <- unlist(tapply(gastricbypassL$weight,gastricbypassL$id, function(x){rep(x[1],length(x))})) eUN.lmm <- lmm(glucagonAUC ~ visit + weight0, repetition = ~visit|id, data = gastricbypassL, df = FALSE) ## fitted mean (conditional on covariates only) fitted(eUN.lmm) fitted(eUN.lmm, newdata = data.frame(visit = "3", weight0 = 0)) fitted(eUN.lmm, newdata = data.frame(visit = "3", weight0 = 0), keep.data = TRUE) ## fitted outcome value (conditional on covariates and covariates) fitted(eUN.lmm, type = "outcome") gastricbypassL.O <- fitted(eUN.lmm, type = "outcome", keep.data = TRUE) if(require(ggplot2)){ gg.outcome <- ggplot(gastricbypassL.O, aes(x=time, y = glucagonAUC, color = impute, group = id)) gg.outcome <- gg.outcome + geom_point() + geom_line()## + facet_wrap(~id) gg.outcome } tapply(gastricbypassL.O$glucagonAUC, gastricbypassL.O$time, mean) effects(eUN.lmm, variable = NULL) ## fitted change value (conditional on covariates and covariates) gastricbypassL.C <- fitted(eUN.lmm, type = "change", keep.data = TRUE) if(require(ggplot2)){ gg.change <- ggplot(gastricbypassL.C, aes(x=time, y = glucagonAUC, color = impute, group = id)) gg.change <- gg.change + geom_point() + geom_line() gg.change } tapply(gastricbypassL.C$glucagonAUC, gastricbypassL.O$time, mean) effects(eUN.lmm, type = "change", variable = NULL) ## fitted auc (conditional on covariates and covariates) gastricbypassL.AUC <- fitted(eUN.lmm, type = "auc", keep.data = TRUE) if(require(ggplot2)){ gg.auc <- ggplot(gastricbypassL.AUC, aes(x = "auc", y = glucagonAUC, color = impute)) gg.auc <- gg.auc + geom_point() gg.auc } mean(gastricbypassL.AUC$glucagonAUC) effects(eUN.lmm, type = "auc", variable = NULL) #### two arm trial #### ## Not run: if(require(nlmeU) & require(reshape2)){ data(armd.wide, package = "nlmeU") armd.long <- melt(armd.wide, measure.vars = paste0("visual",c(0,4,12,24,52)), id.var = c("subject","lesion","treat.f","miss.pat"), variable.name = "week", value.name = "visual") armd.long$week <- factor(armd.long$week, level = paste0("visual",c(0,4,12,24,52)), labels = c(0,4,12,24,52)) eUN2.lmm <- lmm(visual ~ treat.f*week + lesion, repetition = ~week|subject, structure = "UN", data = armd.long) ## fitted outcome value (conditional on covariates and covariates) armd.O <- fitted(eUN2.lmm, type = "outcome", keep.data = TRUE) gg2.outcome <- ggplot(armd.O, aes(x=week, y = visual, color = impute, group = subject)) gg2.outcome <- gg2.outcome + geom_point() + geom_line() + facet_wrap(~treat.f) gg2.outcome aggregate(visual ~ week + treat.f, FUN = mean, data = armd.O) effects(eUN2.lmm, variable = "treat.f") ## mismatch due to adjustment on lesion ## fitted change value (conditional on covariates and covariates) armd.C <- fitted(eUN2.lmm, type = "change", keep.data = TRUE) gg.change <- ggplot(armd.C, aes(x=week, y = visual, color = impute, group = subject)) gg.change <- gg.change + geom_point() + geom_line() + facet_wrap(~treat.f) gg.change coef(eUN2.lmm) effects(eUN2.lmm, type = "change", variable = "treat.f") effects(eUN2.lmm, type = c("change","difference"), variable = "treat.f") ## fitted auc (conditional on covariates and covariates) armd.AUC <- fitted(eUN2.lmm, type = "auc", keep.data = TRUE) gg.auc <- ggplot(armd.AUC, aes(x = treat.f, y = visual, color = impute)) gg.auc <- gg.auc + geom_point() gg.auc aggregate(visual ~ treat.f, data = armd.AUC, FUN = "mean") 
effects(eUN2.lmm, type = "auc", variable = "treat.f") ## adjusted for lesion effects(eUN2.lmm, type = c("auc","difference"), variable = "treat.f") } ## End(Not run)
Evaluate the expected mean conditional on covariates, or the expected missing outcome values conditional on observed outcome values and covariates, based on group-specific linear mixed models.
## S3 method for class 'mlmm' fitted(object, newdata = NULL, simplify = TRUE, ...)
object |
a |
newdata |
[data.frame] the covariate values for each cluster along with the |
simplify |
[logical] simplify the column names (no mention of the time variable) and remove the cluster, by, and time column when possible. |
... |
additional arguments passed to the |
Data from the gastric bypass study where the bodyweight and serum glucagon (a gut hormone) were measured in 20 obese subjects prior to and after gastric bypass surgery. This dataset is in the long format (i.e. one line per measurement).
id
: patient identifier.
visit
: the visit index (factor).
time
: the week at which the visit took place (numeric).
weight
: bodyweight (in kg) measured during the visit.
glucagonAUC
: glucagon measured during the visit (in pmol/l x hours).
data(gastricbypassL)
The effect of Roux-en-Y gastric bypass surgery on the gut mucosal gene expression profile and circulating gut hormones. https://easddistribute.m-anage.com/from.storage?image=4iBH9mRQm1kfeEHULC2CxovdlyCtA1EHeVDdoffnZrAUGG9SHTO-U4ItnLU078eVkF1ZUZgYTy7THlTW3KSgFA2
Data from the gastric bypass study where the bodyweight and serum glucagon (a gut hormone) were measured in 20 obese subjects prior to and after gastric bypass surgery. This dataset is in the wide format (i.e. one line per patient).
id
: patient identifier.
weight1
: bodyweight (in kg) 3 months before surgery.
weight2
: bodyweight (in kg) 1 week before surgery.
weight3
: bodyweight (in kg) 1 week after surgery.
weight4
: bodyweight (in kg) 3 months after surgery.
glucagonAUC1
: glucagon (in pmol/l x hours) 3 months before surgery.
glucagonAUC2
: glucagon (in pmol/l x hours) 1 week before surgery.
glucagonAUC3
: glucagon (in pmol/l x hours) 1 week after surgery.
glucagonAUC4
: glucagon (in pmol/l x hours) 3 months after surgery.
data(gastricbypassW)
The effect of Roux-en-Y gastric bypass surgery on the gut mucosal gene expression profile and circulating gut hormones. https://easddistribute.m-anage.com/from.storage?image=4iBH9mRQm1kfeEHULC2CxovdlyCtA1EHeVDdoffnZrAUGG9SHTO-U4ItnLU078eVkF1ZUZgYTy7THlTW3KSgFA2
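For analyses in the long format, the wide dataset can be reshaped with base R, e.g. (a sketch using the column names listed above):

## reshape gastricbypassW (wide, one line per patient) to the long format
data(gastricbypassW, package = "LMMstar")
gastricbypassLong <- reshape(gastricbypassW, direction = "long",
                             idvar = "id", timevar = "visit",
                             varying = list(paste0("weight",1:4), paste0("glucagonAUC",1:4)),
                             v.names = c("weight","glucagonAUC"))
head(gastricbypassLong[order(gastricbypassLong$id),])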
Variance-covariance structure where the residuals are independent and identically distributed. Can be stratified on a categorical variable.
ID(formula, var.cluster, var.time, add.time)
formula |
formula indicating on which variable to stratify the residual variance (left hand side). |
var.cluster |
[character] cluster variable. |
var.time |
[character] time variable. |
add.time |
not used. |
A typical formula would be ~1
.
An object of class IND
that can be passed to the argument structure
of the lmm
function.
ID(NULL, var.cluster = "id", var.time = "time") ID(~1, var.cluster = "id", var.time = "time") ID(~gender, var.cluster = "id", var.time = "time") ID(gender~1, var.cluster = "id", var.time = "time")
Extract the influence function for an ML or REML estimator of parameters from a linear mixed model.
## S3 method for class 'lmm' iid( x, effects = "mean", p = NULL, robust = TRUE, type.information = NULL, transform.sigma = NULL, transform.k = NULL, transform.rho = NULL, transform.names = TRUE, REML2ML = NULL, ... )
x |
a |
effects |
[character] Should the influence function for all coefficients be output ( |
p |
[numeric vector] value of the model coefficients at which to evaluate the influence function. Only relevant if differs from the fitted values. |
robust |
[logical] If |
type.information |
[character] Should the expected information be used (i.e. minus the expected second derivative) or the observed information (i.e. minus the second derivative). |
transform.sigma |
[character] Transformation used on the variance coefficient for the reference level. One of |
transform.k |
[character] Transformation used on the variance coefficients relative to the other levels. One of |
transform.rho |
[character] Transformation used on the correlation coefficients. One of |
transform.names |
[logical] Should the name of the coefficients be updated to reflect the transformation that has been used? |
... |
Not used. For compatibility with the generic method. |
The influence function equals the individual score rescaled by the (inverse) information. With the expected information and for an lmm fitted using ML, the information is block diagonal, so the influence function for the mean and variance parameters can be computed separately. Otherwise the information and individual scores relative to all model parameters should be considered. The latter is problematic when using REML, as the REML term is the ratio of two terms that are linear in the individual contributions, which is not itself linear in the individual contributions. As an ad-hoc solution, the denominator is treated as fixed so the ratio is decomposed w.r.t. its numerator.
A matrix with one row per observation and one column per parameter.
When df=TRUE, the output has an attribute "df" containing a numeric vector with one element per parameter.
When effects includes "gradient", the output has an attribute "gradient" containing a 3-dimensional array with dimension the number of parameters.
data(gastricbypassL) #### Case WITHOUT cross terms #### e.lmmREML <- lmm(weight ~ visit, method.fit = "REML", df = FALSE, repetition = ~ visit|id, data = gastricbypassL) e.lmmML <- lmm(weight ~ visit, method.fit = "ML", df = FALSE, repetition = ~ visit|id, data = gastricbypassL) name.mu <- names(coef(e.lmmREML, effects = "mean")) name.sigmakrho <- names(coef(e.lmmREML, effects = c("variance","correlation"))) info.REML <- information(e.lmmREML, effects = "all", transform.names = FALSE) info.ML <- information(e.lmmML, effects = "all", transform.names = FALSE) info.REML2ML <- information(e.lmmML, p = coef(e.lmmREML, effects = "all"), effects = "all", transform.names = FALSE) range(info.REML[name.mu,name.sigmakrho]) range(info.ML[name.mu,name.sigmakrho]) range(info.REML[name.mu,]-info.REML2ML[name.mu,]) range(iid(e.lmmREML, REML2ML = TRUE) - iid(e.lmmREML, REML2ML = FALSE)) ## neglectable differences #### Case WITH cross terms #### e2.lmmREML <- lmm(glucagonAUC ~ visit + weight, method.fit = "REML", df = FALSE, repetition = ~ visit|id, data = gastricbypassL) e2.lmmML <- lmm(glucagonAUC ~ visit + weight, method.fit = "ML", df = FALSE, repetition = ~ visit|id, data = gastricbypassL) name2.mu <- names(coef(e2.lmmREML, effects = "mean")) name2.sigmakrho <- names(coef(e2.lmmREML, effects = c("variance","correlation"))) info2.REML <- information(e2.lmmREML, effects = "all", transform.names = FALSE) info2.ML <- information(e2.lmmML, effects = "all", transform.names = FALSE) info2.REML2ML <- information(e2.lmmML, p = coef(e2.lmmREML, effects = "all"), effects = "all", transform.names = FALSE) range(info2.REML[name.mu,]-info2.REML2ML[name.mu,]) ## neglectable difference range(info2.REML[name.mu,name.sigmakrho]) range(info2.ML[name.mu,name.sigmakrho]) range(iid(e2.lmmREML, REML2ML = TRUE) - iid(e2.lmmREML, REML2ML = FALSE)) ## non-neglectable differences diag(crossprod(iid(e2.lmmREML, REML2ML = TRUE)))/diag(vcov(e2.lmmREML)) diag(crossprod(iid(e2.lmmREML, REML2ML = FALSE)))/diag(vcov(e2.lmmREML))
Extract the influence function for ML or REML estimators of parameters from group-specific linear mixed models.
## S3 method for class 'mlmm' iid(x, effects = "contrast", p = NULL, ordering = "by", simplify = TRUE, ...)
Extract the influence function of linear contrasts applied to an ML or REML estimator of parameters from a linear mixed model.
## S3 method for class 'Wald_lmm' iid(x, effects = "Wald", transform.names = TRUE, ...)
effects |
[character] should the influence function for the linear contrasts involved in the Wald test be output ( |
transform.names |
[logical] should the name of the coefficients be updated to reflect the transformation that has been used?
Only relevant when |
... |
Not used. For compatibility with the generic method. |
object |
a |
A matrix with one row per observation and one column per parameter (effects="Wald"
or effects="all"
) or a logical value (effects="test"
).
Variance-covariance structure where the residuals are independent but may have different variances. Can be stratified on a categorical variable.
IND(formula, var.cluster, var.time, add.time)
formula |
formula indicating variables influencing the residual variance, modeled either as multiplicative factors (right hand side) or via stratification (left hand side). |
var.cluster |
[character] cluster variable. |
var.time |
[character] time variable. |
add.time |
Should the default formula (i.e. when |
A typical formula would be either ~1
indicating constant variance
or ~time
indicating a time dependent variance.
An object of class IND
that can be passed to the argument structure
of the lmm
function.
IND(NULL, var.cluster = "id", var.time = "time", add.time = TRUE) IND(~1, var.cluster = "id", var.time = "time") IND(gender~1, var.cluster = "id", var.time = "time") IND(gender~time, var.cluster = "id", var.time = "time") IND(~gender+time, var.cluster = "id", var.time = "time")
Extract or compute minus the second derivative of the log-likelihood of a linear mixed model.
## S3 method for class 'lmm' information( x, effects = NULL, newdata = NULL, p = NULL, indiv = FALSE, type.information = NULL, transform.sigma = NULL, transform.k = NULL, transform.rho = NULL, transform.names = TRUE, ... )
x |
a |
effects |
[character] Should the information relative to all coefficients be output ( |
newdata |
[data.frame] dataset relative to which the information should be computed. Only relevant if differs from the dataset used to fit the model. |
p |
[numeric vector] value of the model coefficients at which to evaluate the information. Only relevant if differs from the fitted values. |
indiv |
[logical] Should the contribution of each cluster to the information be output? Otherwise the sum over all clusters is output. |
type.information |
[character] Should the expected information be computed (i.e. minus the expected second derivative) or the observed information (i.e. minus the second derivative). |
transform.sigma |
[character] Transformation used on the variance coefficient for the reference level. One of |
transform.k |
[character] Transformation used on the variance coefficients relative to the other levels. One of |
transform.rho |
[character] Transformation used on the correlation coefficients. One of |
transform.names |
[logical] Should the name of the coefficients be updated to reflect the transformation that has been used? |
... |
Not used. For compatibility with the generic method. |
For details about the arguments transform.sigma, transform.k, transform.rho, see the documentation of the coef.lmm function.
When argument indiv is FALSE, a matrix with the value of the information relative to each pair of coefficients (in rows and columns), summed over clusters.
When argument indiv is TRUE, a 3-dimensional array with the value of the information relative to each pair of coefficients (dimensions 2 and 3) and each cluster (dimension 1).
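A brief usage sketch (gastric bypass data; arguments as documented above):

## information matrix of a fitted linear mixed model
data(gastricbypassL, package = "LMMstar")
e.lmm <- lmm(weight ~ visit, repetition = ~visit|id, data = gastricbypassL)
info <- information(e.lmm, effects = "all") ## summed over clusters: coefficient x coefficient
dim(info)
info.indiv <- information(e.lmm, effects = "all", indiv = TRUE) ## cluster x coefficient x coefficient
dim(info.indiv)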
Contrasts and reference level used when modeling the mean in a linear mixed model.
## S3 method for class 'lmm' levels(x)
x |
an |
a list with two elements:
all: the contrast matrix for each categorical or factor variable.
reference: the reference level, one value for each categorical variable.
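A brief usage sketch (gastric bypass data):

## contrasts and reference levels of a fitted linear mixed model
data(gastricbypassL, package = "LMMstar")
e.lmm <- lmm(weight ~ visit, repetition = ~visit|id, data = gastricbypassL)
levels(e.lmm)$reference
levels(e.lmm)$all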
Fit a linear mixed model defined by a mean and a covariance structure.
lmm( formula, data, repetition, structure, weights = NULL, method.fit = NULL, df = NULL, type.information = NULL, trace = NULL, control = NULL )
formula |
[formula] Specify the model for the mean. On the left hand side the outcome and on the right hand side the covariates affecting the mean value. E.g. Y ~ Gender + Gene. |
data |
[data.frame] dataset (in the long format) containing the observations. |
repetition |
[formula] Specify the structure of the data: the time/repetition variable and the grouping variable, e.g. ~ time|id. |
structure |
[character] type of covariance structure, either |
weights |
[formula or character] variable in the dataset used to weight the log-likelihood and its derivative. Should be constant within cluster. |
method.fit |
[character] Should Restricted Maximum Likelihood ( |
df |
[logical] Should the degrees-of-freedom be computed using a Satterthwaite approximation? |
type.information |
[character] Should the expected information be computed (i.e. minus the expected second derivative) or the observed information (i.e. minus the second derivative). |
trace |
[integer, >0] Show the progress of the execution of the function. |
control |
[list] Control values for the optimization method.
The element |
Computation time: the lmm function has not been developed to be fast as, by default, it uses REML estimation with the observed information matrix and a Satterthwaite approximation of the degrees-of-freedom (this requires computing the third derivative of the log-likelihood, which is done by numerical differentiation). The computation time can be substantially reduced by using ML estimation with the expected information matrix and no calculation of degrees-of-freedom: arguments method.fit="ML", type.information="expected", df=FALSE. This will, however, lead to less accurate p-values and confidence intervals in small samples.
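For example, a faster (but, in small samples, less accurate) fit could be requested as follows (a sketch; dL denotes a long-format dataset as simulated in the examples below):

## faster fit: ML estimation, expected information, no Satterthwaite degrees-of-freedom
e.fast <- lmm(Y ~ visit + X1, repetition = ~visit|id, data = dL,
              method.fit = "ML", type.information = "expected", df = FALSE)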
By default, the model parameters are estimated using a Newton-Raphson algorithm.
This algorithm does not ensure that the residual covariance matrix is positive definite and may therefore sometimes fail.
See argument optimizer in LMMstar.options.
Argument control: when using the optimizer "FS", the following elements can be used (see the sketch after this list):
init
: starting values for the model parameters.
n.iter
: maximum number of iterations of the optimization algorithm.
tol.score
: score value below which convergence has been reached.
tol.param
: difference in estimated parameters from two successive iterations below which convergence has been reached.
trace
: display progress of the optimization procedure.
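A minimal sketch of the control argument with the "FS" optimizer, reusing dL from the sketch above; the tolerance values below are illustrative choices, not the package defaults:
e.FS <- lmm(Y ~ X1 + X5, repetition = ~visit|id, structure = "CS", data = dL,
            control = list(optimizer = "FS", n.iter = 50,
                           tol.score = 1e-4, tol.param = 1e-5, trace = TRUE))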
Argument repetition: when numeric, it will be converted into a factor variable, possibly adding a leading 0 to preserve the ordering.
This transformation may cause inconsistencies when combining results between different lmm objects.
This is why the grouping variable should preferably be of type character or factor.
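A minimal sketch of coding the grouping variable as a factor yourself (here with leading zeros so that its levels sort consistently), reusing dL from the sketches above; the column name id.f is only used for illustration:
dL$id.f <- factor(formatC(dL$id, width = 3, flag = "0"))
e.stable <- lmm(Y ~ X1, repetition = ~visit|id.f, structure = "CS", data = dL)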
an object of class lmm
containing the estimated parameter values, the residuals, and relevant derivatives of the likelihood.
summary.lmm
for a summary of the model fit. model.tables.lmm
for a data.frame containing estimates with their uncertainty. plot.lmm
for a graphical display of the model fit or diagnostic plots. levels.lmm
to display the reference level. anova.lmm
for testing linear combinations of coefficients (F-test, multiple Wald tests). effects.lmm
for evaluating average marginal or counterfactual effects. sigma.lmm
for extracting estimated residual variance-covariance matrices. residuals.lmm
for extracting residuals or creating residual plots (e.g. qqplots). predict.lmm
for evaluating mean and variance of the outcome conditional on covariates or other outcome values. ranef.lmm
for evaluating random effects and variance components in random effects models.
#### 1- simulate data in the long format ####
set.seed(10)
dL <- sampleRem(100, n.times = 3, format = "long")
dL$X1 <- as.factor(dL$X1)
dL$X2 <- as.factor(dL$X2)

#### 2- fit Linear Mixed Model ####
eCS.lmm <- lmm(Y ~ X1 * X2 + X5,
               repetition = ~visit|id, structure = "CS", data = dL)

logLik(eCS.lmm) ## -670.9439
summary(eCS.lmm)

#### 3- estimates ####
## reference level
levels(eCS.lmm)$reference
## mean parameters
coef(eCS.lmm)
model.tables(eCS.lmm)
confint(eCS.lmm)
## all parameters
coef(eCS.lmm, effects = "all")
model.tables(eCS.lmm, effects = "all")
confint(eCS.lmm, effects = "all")
## variance-covariance structure
sigma(eCS.lmm)

#### 4- diagnostic plots ####
quantile(residuals(eCS.lmm))
quantile(residuals(eCS.lmm, type = "normalized"))

## Not run:
if(require(ggplot2)){
  ## investigate misspecification of the mean structure
  plot(eCS.lmm, type = "scatterplot")
  ## investigate misspecification of the variance structure
  plot(eCS.lmm, type = "scatterplot2")
  ## investigate misspecification of the correlation structure
  plot(eCS.lmm, type = "correlation")
  ## investigate misspecification of the residual distribution
  plot(eCS.lmm, type = "qqplot")
}
## End(Not run)

#### 5- statistical inference ####
anova(eCS.lmm) ## effect of each variable
anova(eCS.lmm, effects = "X11-X21=0") ## test specific coefficient
## test several hypotheses with adjustment for multiple comparisons
summary(anova(eCS.lmm, effects = c("X11=0","X21=0")))

#### 6- prediction ####
## conditional on covariates
newd <- dL[1:3,]
predict(eCS.lmm, newdata = newd, keep.data = TRUE)
## conditional on covariates and outcome
newd <- dL[1:3,]
newd$Y[3] <- NA
predict(eCS.lmm, newdata = newd, type = "dynamic", keep.data = TRUE)

#### EXTRA ####
if(require(mvtnorm)){
  ## model for the average over m replicates
  ## (only works with independent replicates)
  Sigma1 <- diag(1,1,1); Sigma5 <- diag(1,5,5)
  n <- 100
  dfW <- rbind(data.frame(id = 1:n, rep = 5, Y = rowMeans(rmvnorm(n, sigma = Sigma5))),
               data.frame(id = (n+1):(2*n), rep = 1, Y = rmvnorm(n, sigma = Sigma1)))
  e.lmmW <- lmm(Y~1, data = dfW, weights = ~rep, control = list(optimizer = "FS"))
  e.lmm0 <- lmm(Y~1, data = dfW, control = list(optimizer = "FS"))
  model.tables(e.lmmW, effects = "all")
  model.tables(e.lmm0, effects = "all") ## TRUE standard error is 1
}
Fit a linear mixed model on the complete case data. Mostly useful as a sanity check, to match the results of a univariate analysis on the change.
lmmCC(object, ...) ## S3 method for class 'formula' lmmCC( object, repetition, data, lm.change = FALSE, df = NULL, trace = TRUE, control = NULL, ... ) ## S3 method for class 'lm' lmmCC( object, repetition, data, name.time = "time", df = NULL, trace = TRUE, control = NULL, ... )
object |
[formula] Specify the model for the mean. On the left hand side the outcome and on the right hand side the covariates affecting the mean value. E.g. Y ~ Gender + Gene. |
... |
Not used. For compatibility with the generic method. |
repetition |
[formula] Specify the structure of the data: the time/repetition variable and the grouping variable, e.g. ~ time|id. |
data |
[data.frame] dataset (in the long format) containing the observations. |
lm.change |
[logical] Should a linear model on the change in outcome be estimated? Only possible with two repetitions. Will match the mixed model if the latter includes repetition-dependent effects for all covariates. |
df |
[logical] Should the degrees-of-freedom be computed using a Satterthwaite approximation? |
trace |
[integer, >0] Show the progress of the execution of the function. |
control |
[list] Control values for the optimization method.
The element |
name.time |
[character] name of the time variable. |
A lmmCC
object, which inherits from lmm
.
#### 1- simulate data in the wide format ####
set.seed(10)
dW <- sampleRem(100, n.times = 3, format = "wide")
dW$Y3[1:10] <- NA
dW$change2 <- dW$Y2 - dW$Y1
dW$change3 <- dW$Y3 - dW$Y1

e.lm2 <- lm(change2 ~ X1 + X2, data = dW)
summary(e.lm2)$coef
e.lm3 <- lm(change3 ~ X1 + X2, data = dW)
summary(e.lm3)$coef

#### 2- complete case LMM from LM ####
e.lmmCC2 <- lmmCC(e.lm2, repetition = change2~Y2-Y1)
model.tables(e.lmmCC2)
e.lmmCC3 <- lmmCC(e.lm3, repetition = change3~Y3-Y1)
model.tables(e.lmmCC3)

#### 3- complete case LMM ####
dL <- reshape(dW[,c("id","X1","X2","Y1","Y2","Y3")], direction = "long",
              varying = c("Y1","Y2","Y3"), sep = "", idvar = "id")
dL$time <- as.character(dL$time)

e.lmm2 <- lmmCC(Y ~ time + time*X1 + time*X2, repetition = ~time|id,
                data = dL[dL$time %in% 1:2,])
model.tables(e.lmm2)
e.lmm3.bis <- lmmCC(Y ~ time + time*X1 + time*X2, repetition = ~time|id,
                    data = dL[dL$time %in% c(1,3),], lm.change = TRUE)
model.tables(e.lmm3.bis)
e.lmm3.bis$lm.change
Update or select global options for the LMMstar package.
LMMstar.options(..., reinitialise = FALSE)
... |
options to be selected or updated |
reinitialise |
should all the global parameters be set to their default value |
The options are:
adj.method [character vector]: possible methods to adjust for multiple comparisons. NOT MEANT TO BE CHANGED BY THE USER.
backtransform.confint [logical]: should variance/covariance/correlation estimates be back-transformed when they are transformed on the log or atanh scale. Used by confint
.
columns.anova [character vector]: columns to output when using anova
with argument ci=TRUE
.
columns.confint [character vector]: columns to output when using confint
.
columns.summarize [character vector]: columns to output when displaying descriptive statistics using summarize
.
columns.summary [character vector]: columns to output when displaying the model coefficients using summary
.
df [logical]: should approximate degrees-of-freedom be computed for Wald and F-tests. Used by lmm
, anova
, predict
, and confint
.
drop.X [logical]: should columns causing non-identifiability of the model coefficients be dropped from the design matrix. Used by lmm
.
effects [character]: parameters relative to which estimates, score, information should be output.
min.df [integer]: minimum possible degree-of-freedom. Used by confint
.
method.fit [character]: objective function when fitting the Linear Mixed Model (REML or ML). Used by lmm
.
method.numDeriv [character]: type used to approximate the third derivative of the log-likelihood (when computing the degrees-of-freedom). Can be "simple"
or "Richardson"
. See numDeriv::jacobian
for more details. Used by lmm
.
n.sampleCopula [integer]: number of samples used to compute confidence intervals and p-values adjusted for multiple comparisons via "single-step2"
. Used by confint.Wald_lmm
.
optimizer [character]: method used to estimate the model parameters. Either "FS"
, a home-made Fisher scoring algorithm, or a method from optimx::optimx
like "BFGS"
or Nelder-Mead
.
param.optimizer [numeric vector]: default option for the FS
optimization routine: maximum number of gradient descent iterations (n.iter
), maximum acceptable score value (tol.score
), maximum acceptable change in parameter value (tol.param
).
pool.method [character vector]: possible methods to pool estimates. NOT MEANT TO BE CHANGED BY THE USER.
precompute.moments [logical]: Should the cross terms between the residuals and design matrix be pre-computed. Useful when the number of subjects is substantially larger than the number of mean parameters.
sep [character vector]: character used to combine two strings of characters in various functions (lp: .vcov.model.matrix, k.cov/k.strata: .skeletonK, pattern: .findUpatterns, rho.name/rho.strata: .skeletonRho, reformat: .reformat).
trace [logical]: Should the progress of the execution of the lmm
function be displayed?
transform.sigma, transform.k, transform.rho: transformation used to compute the confidence intervals/p-values for the variance and correlation parameters. See the detail section of the coef function for more information.
Used by lmm
, anova
and confint
.
type.information [character]: Should the expected or observed information ("expected"
or "observed"
) be used to perform statistical inference? Used by lmm
, anova
and confint
.
A list containing the default options.
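A minimal sketch, assuming that a call without arguments returns the full list of current options:
LMMstar.options()$method.fit           ## current default objective function
LMMstar.options(df = FALSE)            ## do not compute degrees-of-freedom by default
LMMstar.options(reinitialise = TRUE)   ## restore all default values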
Extract or compute the log-likelihood of a linear mixed model.
## S3 method for class 'lmm' logLik(object, newdata = NULL, p = NULL, indiv = FALSE, ...)
object |
a |
newdata |
[data.frame] dataset relative to which the log-likelihood should be computed. Only relevant if differs from the dataset used to fit the model. |
p |
[numeric vector] value of the model coefficients at which to evaluate the log-likelihood. Only relevant if differs from the fitted values. |
indiv |
[logical] Should the contribution of each cluster to the log-likelihood be output? Otherwise the contributions are summed over all clusters. |
... |
Not used. For compatibility with the generic method. |
indiv: only relevant when using maximum likelihood. Must be FALSE
when using restricted maximum likelihood.
A numeric value (total logLikelihood) or a vector of numeric values, one for each cluster (cluster specific logLikelihood).
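A minimal sketch of the two output types, reusing the sampleRem() simulation from the lmm() examples; the model is fitted with ML since, as noted above, cluster-specific contributions are not available under REML:
set.seed(10)
dL <- sampleRem(100, n.times = 3, format = "long")
e.ML <- lmm(Y ~ X1 + X5, repetition = ~visit|id, structure = "CS", data = dL,
            method.fit = "ML")
logLik(e.ML)                       ## total log-likelihood (a single value)
head(logLik(e.ML, indiv = TRUE))   ## one contribution per cluster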
Extract or compute the log-likelihood from each group-specific Linear Mixed Model.
## S3 method for class 'mlmm' logLik(object, newdata = NULL, p = NULL, indiv = FALSE, ...)
object |
a |
newdata |
[data.frame] dataset relative to which the log-likelihood should be computed. Only relevant if differs from the dataset used to fit the model. |
p |
[list of numeric vector] values for the model parameters at which to evaluate the log-likelihood. Only relevant if differs from the fitted values. |
indiv |
[logical] Should the contribution of each cluster to the log-likelihood be output? Otherwise the contributions are summed over all clusters. |
... |
Not used. For compatibility with the generic method. |
indiv: only relevant when using maximum likelihood. Must be FALSE
when using restricted maximum likelihood.
A numeric vector when indiv=FALSE
(log-likelihood per model)
or a matrix of numeric values when indiv=TRUE
with one line for each cluster and one column for each model.
Fit several linear mixed models, extract relevant coefficients, and combine them into a single table.
mlmm( ..., data, by, contrast.rbind = NULL, effects = NULL, robust = FALSE, df = NULL, name.short = TRUE, transform.sigma = NULL, transform.k = NULL, transform.rho = NULL, transform.names = TRUE, trace = TRUE )
... |
arguments passed to |
data |
[data.frame] dataset (in the long format) containing the observations. |
by |
[character] variable used to split the dataset. On each split a separate linear mixed model is fit. |
contrast.rbind |
[character or numeric matrix] Contrast to be applied to compare the groups.
Argument passed to the argument |
effects |
[character or numeric matrix] Linear combinations of coefficients relative to which Wald test should be computed.
Argument passed to |
robust |
[logical] Should robust standard errors (aka sandwich estimator) be output instead of the model-based standard errors.
Can also be |
df |
[logical] Should the degrees-of-freedom be computed using a Satterthwaite approximation?
Argument passed to |
name.short |
[logical] use short names for the output coefficients (omit the name of the by variable). |
transform.sigma , transform.k , transform.rho , transform.names
|
[character] transformation used on certain type of parameters. |
trace |
[integer, >0] Show the progress of the execution of the function. |
Grouping variable in argument repetition: when numeric, it will be converted into a factor variable, possibly adding a leading 0 to preserve the ordering.
This transformation may cause inconsistencies when combining results between different lmm objects.
This is why the grouping variable should preferably be of type character or factor.
confint.mlmm
for a data.frame containing estimates with their uncertainty. summary.mlmm
for a summary of the model and estimates. autoplot.Wald_lmm
for a graphical display.
#### univariate regression ####
if(require(lava) && require(multcomp)){
  set.seed(10)
  d1 <- cbind(sim(lvm(Y~0.5*X1), 25), group = "A")
  d2 <- cbind(sim(lvm(Y~0.1*X1), 100), group = "B")
  d3 <- cbind(sim(lvm(Y~0.01*X1), 1000), group = "C")
  d1$id <- 1:NROW(d1)
  d2$id <- 1:NROW(d2)
  d3$id <- 1:NROW(d3)
  d <- rbind(d1,d2,d3)

  e.mlmm <- mlmm(Y~X1, data = d, by = "group", effects = "X1=0")
  summary(e.mlmm)
  summary(e.mlmm, method = "single-step")
  summary(e.mlmm, method = "single-step2")

  ## re-work contrast
  summary(anova(e.mlmm, effects = mcp(X1 = "Dunnett")), method = "none")
  ## summary(mlmm(Y~X1, data = d, by = "group", effects = mcp(X1="Dunnett")))
}

#### multivariate regression ####
set.seed(10)
dL <- sampleRem(250, n.times = 3, format = "long")

e.mlmm <- mlmm(Y~X1+X2+X6, repetition = ~visit|id, data = dL,
               by = "X4", structure = "CS")
summary(e.mlmm)

e.mlmmX1 <- mlmm(Y~X1+X2+X6, repetition = ~visit|id, data = dL,
                 by = "X4", effects = "X1=0", structure = "CS")
summary(e.mlmmX1)
summary(e.mlmmX1, method = "single-step")
Construct a data frame containing all the variables involved in a Linear Mixed Model.
## S3 method for class 'lmm' model.frame( formula, newdata = NULL, type = NULL, add.index = FALSE, na.rm = TRUE, ... )
formula |
a |
newdata |
[data.frame] dataset relative to which the model frame should be constructed. |
type |
[character] by default returns the processed dataset used to fit the Linear Mixed Model ( |
add.index |
[logical] should columns indexing the row number from the original dataset, time variable, cluster variable, strata variable be added to the output? |
na.rm |
[logical] should rows containing missing values for the variables used in the linear mixed model be removed?
Not relevant when argument type is |
... |
not used. For compatibility with the generic method. |
Column "XXindexXX"
refers to the row of the original dataset (i.e. passed to argument data
when calling lmm
).
When adding rows relative to missing repetitions, since there is no row in the original dataset, a negative sign is used.
data("armd.wide", package = "nlmeU") e.lmH <- lmm(visual52 ~ lesion, structure = IND(~treat.f), data = armd.wide) model.frame(e.lmH) model.frame(e.lmH, add.index = TRUE) model.frame(e.lmH, type = "unique") data("gastricbypassL", package = "LMMstar") dfL.NNA <- na.omit(gastricbypassL) e.lmm <- lmm(glucagonAUC ~ time, repetition = ~visit|id, data = dfL.NNA, df = FALSE) model.frame(e.lmm, type = "unique") model.frame(e.lmm, type = c("unique","correlation")) model.frame(e.lmm, type = "add.NA", add.index = TRUE)
data("armd.wide", package = "nlmeU") e.lmH <- lmm(visual52 ~ lesion, structure = IND(~treat.f), data = armd.wide) model.frame(e.lmH) model.frame(e.lmH, add.index = TRUE) model.frame(e.lmH, type = "unique") data("gastricbypassL", package = "LMMstar") dfL.NNA <- na.omit(gastricbypassL) e.lmm <- lmm(glucagonAUC ~ time, repetition = ~visit|id, data = dfL.NNA, df = FALSE) model.frame(e.lmm, type = "unique") model.frame(e.lmm, type = c("unique","correlation")) model.frame(e.lmm, type = "add.NA", add.index = TRUE)
Extract or construct design matrices for a linear mixed model.
## S3 method for class 'lmm' model.matrix( object, newdata = NULL, effects = "mean", simplify = TRUE, drop.X = NULL, na.rm = TRUE, ... )
object |
an |
newdata |
[data.frame] dataset relative to which the design matrix should be constructed. |
effects |
[character] design matrix relative to the mean model ( |
simplify |
[logical] simplify the data format of the output (matrix instead of a list of matrix) when possible. |
drop.X |
[logical] when the design matrix does not have full rank, should columns be dropped? |
na.rm |
[logical] Should rows containing missing values for the variables used in the linear mixed model be removed? |
... |
Not used. For compatibility with the generic method. |
When simplify
is FALSE
, a list with the following elements:
mean
: design matrix for the mean model
Y
: vector of outcome values
vcov
: list of elements for the variance and correlation models.
index.cluster
: list containing, for each cluster, the location of the corresponding observations in the processed dataset.
index.clusterTime
: list containing, for each cluster, the repetition index of the corresponding observations.
index.clusterStrata
: list containing, for each cluster, the strata index of the corresponding observations.
param
: data.frame describing the model parameters.
drop.X
: logical value indicating whether columns in the design matrix should be dropped if it does not have full rank.
precompute.XX
, precompute.XY
: moments of X and Y
When simplify
is TRUE
, this list will be simplified into a list with three elements:
mean
: design matrix for the mean model
variance
: design matrix for the variance model
correlation
: design matrix for the correlation model
or a single design matrix.
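A minimal sketch of the two output formats, reusing eCS.lmm from the lmm() examples; depending on the effects argument, only a subset of the list elements described above may be returned:
X.mean <- model.matrix(eCS.lmm)                    ## simplify = TRUE: a single design matrix
head(X.mean)
X.full <- model.matrix(eCS.lmm, simplify = FALSE)  ## full list of design elements
names(X.full)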
Export estimated parameters with their uncertainty (standard errors, degrees-of-freedom, confidence intervals and p-values) from a linear mixed model or a table describing each parameter (type, associated sigma or k parameter, ...).
## S3 method for class 'lmm' model.tables( x, level = 0.95, effects = NULL, robust = FALSE, null = NULL, columns = NULL, df = NULL, type.information = NULL, transform.sigma = NULL, transform.k = NULL, transform.rho = NULL, transform.names = TRUE, backtransform = NULL, simplify = TRUE, ... )
x |
a |
level |
[numeric,0-1] the confidence level of the confidence intervals. |
effects |
[character] Should the CIs/p-values for all coefficients be output ( |
robust |
[logical] Should robust standard errors (aka sandwich estimator) be output instead of the model-based standard errors.
Can also be |
null |
[numeric vector] the value of the null hypothesis relative to each coefficient. |
columns |
[character vector] Columns to be output.
Can be any of |
df |
[logical] Should a Student's t-distribution be used to model the distribution of the coefficient. Otherwise a normal distribution is used. |
type.information , transform.sigma , transform.k , transform.rho , transform.names
|
are passed to the |
backtransform |
[logical] should the variance/covariance/correlation coefficient be backtransformed? |
simplify |
[logical] omit from the output the backtransform attribute.
Not relevant when the argument |
... |
Not used. For compatibility with the generic method. |
When effects
differs from "param"
, this function is a wrapper for confint
with different default value for the argument column
.
A data.frame
object.
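A minimal sketch, reusing eCS.lmm from the lmm() examples:
model.tables(eCS.lmm)                    ## mean parameters: estimate, CI, p-value
model.tables(eCS.lmm, effects = "all")   ## also variance and correlation parameters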
Combine estimated parameters with their uncertainty (standard errors, degrees-of-freedom, confidence intervals and p-values) from group-specific linear mixed models or a table describing each parameter (type, associated sigma or k parameter, ...).
## S3 method for class 'mlmm' model.tables( x, parm = NULL, level = 0.95, method = NULL, df = NULL, columns = NULL, backtransform = NULL, ordering = "parameter", ... )
x |
a |
method |
[character] type of adjustment for multiple comparisons, one of |
... |
arguments to be passed to |
effects |
[character] Should the CIs/p-values for all coefficients be output ( |
simplify |
[logical] omit from the output the backtransform attribute.
Not relevant when the argument |
When effects
is not "param"
, this function simply calls confint
with a specific value for the argument column
.
A data.frame
object.
Combine estimates, standard errors, degrees-of-freedom, confidence intervals (CIs) and p-values relative to linear contrasts of parameters from different linear mixed models.
## S3 method for class 'rbindWald_lmm' model.tables( x, effects = "Wald", level = 0.95, df = NULL, method = NULL, columns = NULL, ordering = NULL, backtransform = NULL, transform.names = TRUE, simplify = TRUE, ... )
x |
a |
effects |
[character] should the linear contrasts involved in the Wald test be output ( |
level |
[numeric, 0-1] nominal coverage of the confidence intervals. |
df |
[logical] Should a Student's t-distribution be used to model the distribution of the Wald statistic. Otherwise a normal distribution is used. |
method |
[character] Should pointwise confidence intervals be output ( |
columns |
[character vector] Columns to be output.
Can be any of |
ordering |
[character] should the output be ordered by name of the linear contrast ( |
backtransform |
[logical] should the estimates, standard errors, and confidence intervals be backtransformed? |
transform.names |
[logical] should the name of the coefficients be updated to reflect the transformation that has been used?
Only relevant when |
simplify |
[logical] with argument |
... |
Not used. For compatibility with the generic method. |
Export estimates, standard errors, degrees-of-freedom, confidence intervals (CIs) and p-values relative to linear contrasts of parameters from a linear mixed model.
## S3 method for class 'Wald_lmm' model.tables( x, effects = "Wald", level = 0.95, df = NULL, method = NULL, columns = NULL, backtransform = NULL, transform.names = TRUE, simplify = TRUE, ... )
x |
a |
effects |
[character] should the linear contrasts involved in the Wald test be output ( |
level |
[numeric, 0-1] nominal coverage of the confidence intervals. |
df |
[logical] Should a Student's t-distribution be used to model the distribution of the Wald statistic. Otherwise a normal distribution is used. |
method |
[character] Should pointwise confidence intervals be output ( |
columns |
[character vector] Columns to be output.
Can be any of |
backtransform |
[logical] should the estimates, standard errors, and confidence intervals be backtransformed? |
transform.names |
[logical] should the name of the coefficients be updated to reflect the transformation that has been used?
Only relevant when |
simplify |
[logical] with argument |
... |
Not used. For compatibility with the generic method. |
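A minimal sketch, assuming (as elsewhere on this page) that anova() applied to an lmm object returns the Wald_lmm object whose table is extracted here; eCS.lmm is taken from the lmm() examples:
e.wald <- anova(eCS.lmm, effects = "X11-X21=0")
model.tables(e.wald)   ## estimate, confidence interval and p-value of the contrast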
Perform multiple Student's t-tests via heteroscedastic linear regression and combine the results, possibly adjusted for multiplicity.
mt.test(formula, data, method = NULL, level = 0.95, trace = TRUE)
formula |
A formula like
|
data |
dataset in the wide format. Should inherit from data.frame. |
method |
[character] type of adjustment for multiple comparisons, one of |
level |
[numeric,0-1] the confidence level of the confidence intervals. |
trace |
[logical] should a message be displayed in the console when there are missing data. |
In the presence of missing values, an outcome-specific complete case analysis is performed.
A data.frame with the estimates, confidence intervals, and p-values relative to each outcome.
Depending on the argument method
confidence intervals and p-values may be adjusted for multiple comparisons.
The data.frame has an attribute mlmm
containing the underlying regression models.
data(calciumW, package = "LMMstar") t.test(bmd1 ~ grp, data = calciumW) mt.test(bmd1+bmd2+bmd3+bmd4+bmd5 ~ grp, data = calciumW) mt.test(bmd1+bmd2+bmd3+bmd4+bmd5 ~ grp|girl, data = calciumW) mt.test(bmd1+bmd2+bmd3+bmd4+bmd5 ~ grp|girl, data = calciumW, method = "none")
data(calciumW, package = "LMMstar") t.test(bmd1 ~ grp, data = calciumW) mt.test(bmd1+bmd2+bmd3+bmd4+bmd5 ~ grp, data = calciumW) mt.test(bmd1+bmd2+bmd3+bmd4+bmd5 ~ grp|girl, data = calciumW) mt.test(bmd1+bmd2+bmd3+bmd4+bmd5 ~ grp|girl, data = calciumW, method = "none")
Data from the National Cooperative Gallstone Study (NCGS), a randomized study where the level of serum cholesterol was measured at baseline and after intake of high-dose chenodiol (750 mg/day) or placebo. This dataset is in the long format (i.e. one line per measurement).
group
: treatment group (highdose or placebo).
id
: patient identifier.
visit
: visit index.
cholest
: cholesterol measurement.
time
: time after the start of the study at which the measurement has been done (in month). Treatment is given at 0+.
data(ncgsL)
Grundy SM, Lan SP, Lachin J. The effects of chenodiol on biliary lipids and their association with gallstone dissolution in the National Cooperative Gallstone Study (NCGS). J Clin Invest. 1984 Apr;73(4):1156-66. doi: 10.1172/JCI111301.
Data from the National Cooperative Gallstone Study (NCGS), a randomized study where the level of serum cholesterol was measured at baseline and after intake of high-dose chenodiol (750 mg/day) or placebo. This dataset is in the wide format (i.e. one line per patient).
group
: treatment group (highdose or placebo).
id
: patient identifier.
cholest1
: cholesterol measurement at baseline (before treatment).
cholest2
: cholesterol measurement at 6 months (after treatment).
cholest3
: cholesterol measurement at 12 months (after treatment).
cholest4
: cholesterol measurement at 20 months (after treatment).
cholest5
: cholesterol measurement at 24 months (after treatment).
data(ncgsW)
Grundy SM, Lan SP, Lachin J. The effects of chenodiol on biliary lipids and their association with gallstone dissolution in the National Cooperative Gallstone Study (NCGS). J Clin Invest. 1984 Apr;73(4):1156-66. doi: 10.1172/JCI111301.
Extract the number of observations from a Linear Mixed Model.
## S3 method for class 'lmm' nobs(object, ...)
object |
a |
... |
Not used. For compatibility with the generic method. |
A vector with 4 elements:
obs
: the number of repetitions with full data
cluster
: the number of clusters with at least one repetition with full data
missing.obs
: the number of repetitions with missing data
missing.cluster
: the number of clusters with only missing data
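A minimal sketch, reusing eCS.lmm from the lmm() examples:
nobs(eCS.lmm)   ## obs, cluster, missing.obs, missing.cluster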
Extract the number of observations from multiple linear mixed models.
## S3 method for class 'mlmm' nobs(object, ...)
object |
a |
... |
Not used. For compatibility with the generic method. |
A matrix with as many rows as models and 4 columns:
obs
: the number of repetitions with full data
cluster
: the number of clusters with at least one repetition with full data
missing.obs
: the number of repetitions with missing data
missing.cluster
: the number of clusters with only missing data
Data from the toenail onychomycosis study, a randomized double-blind comparative study comparing Terbinafine (250 mg/day) against Itraconazole (200 mg/day) over 7 weeks. This dataset is in the long format (i.e. one line per measurement).
id
: patient identifier.
group
: treatment arm to which the patient has been randomized ("itraconazole"
or "terbinafine"
).
treatment
: treatment received by the patient at a given timepoint ("none"
, "itraconazole"
, or "terbinafine"
).
visit
: index of time at which the measurement was taken (1
to 7
).
time
: scheduled time, in months, for each measurement (0
, 4
, 8
, 12
, 24
, 36
, 48
).
obstime
: time, in months, at which the measurement was taken (numeric between 0
and 18.5
).
response
: degree of onycholysis (separation of the nail plate from the nail-bed). Can be 0 for none or mild, 1 for moderate or severe.
data(onycholysisL)
De Backer et al. Twelve weeks of continuous oral therapy for toenail onychomycosis caused by dermatophytes: A double-blind comparative trial of terbinafine 250 mg/day versus itraconazole 200 mg/day. Journal of the American Academy of Dermatology (1998) 38:S57-S63. doi: 10.1016/s0190-9622(98)70486-4
Data from the toenail onychomycosis study, a randomized double-blind comparative study comparing Terbinafine (250 mg/day) against Itraconazole (200 mg/day) over 7 weeks. This dataset is in the wide format (i.e. one line per patient).
id
: patient identifier.
group
: treatment arm to which the patient has been randomized.
response1
,...,response7
: degree of onycholysis (separation of the nail plate from the nail-bed). Can be 0 for none or mild, 1 for moderate or severe.
time1
,...,time7
: time elapsed between baseline and the measurement (in months).
data(onycholysisW)
De Backer et al. Twelve weeks of continuous oral therapy for toenail onychomycosis caused by dermatophytes: A double-blind comparative trial of terbinafine 250 mg/day versus itraconazole 200 mg/day. Journal of the American Academy of Dermatology (1998) 38:S57-S63. doi: 10.1016/s0190-9622(98)70486-4
Estimate the partial correlation based on equation 19 of Lloyd et al 2008 (partialCor.lmm
) or explicitly modeling the correlation via a linear mixed model (partialCor.list
, partialCor.formula
).
The first option is numerically more efficient and exact with a single observation per cluster.
With multiple repetitions, what is being estimated with the first option may not be clear and the second option is therefore preferable.
partialCor(object, ...) ## S3 method for class 'list' partialCor( object, data, repetition = NULL, structure = NULL, by = NULL, effects = NULL, rhs = NULL, method = "none", df = NULL, transform.rho = NULL, name.short = c(TRUE, FALSE), ... ) ## S3 method for class 'formula' partialCor(object, repetition, ...) ## S3 method for class 'lmm' partialCor(object, level = 0.95, R2 = FALSE, se = TRUE, df = TRUE, ...)
object |
a formula with in the left hand side the variables for which the correlation should be computed and on the right hand side the adjustment set. Can also be a list of formula for outcome-specific adjustment set. |
... |
arguments passed to |
data |
[data.frame] dataset containing the variables. |
repetition |
[formula] Specify the structure of the data: the time/repetition variable and the grouping variable, e.g. ~ time|id. |
structure |
[character] Specify the residual variance-covariance structure.
Without repetitions, either |
by |
[character] variable used to stratify the correlation on. |
effects |
[character or matrix] type of contrast to be used for comparing the correlation parameters. One of |
rhs |
[numeric vector] right hand side for the comparison of correlation parameters. |
method |
[character] adjustment for multiple comparisons (e.g. |
df |
[logical] Should a Student's t-distribution be used to model the distribution of the coefficient. Otherwise a normal distribution is used. |
transform.rho |
[character] scale on which perform statistical inference (e.g. |
name.short |
[logical vector of length 2] use short names for the output coefficients (omit the name of the by variable, omit name of the correlation parameter) |
level |
[numeric,0-1] the confidence level of the confidence intervals. |
R2 |
[logical] Should the R2 (coefficient of determination) be computed? |
se |
[logical] Should the uncertainty about the partial correlation be evaluated? Only relevant for |
Fit a mixed model to estimate the partial correlation with the following variance-covariance pattern:
no repetition: unstructured or compound symmetry structure for M observations, M being the number of variables on the left hand side (i.e. outcomes).
repetition: structure for M*T observations where M is the number of variables (typically 2) and T the number of repetitions. Can be
"UN"
: unstructured (except the off-diagonal containing the correlation parameter which is constant).
"PEARSON"
: same as unstructured except it only uses a single variance parameter per variable, i.e. it assumes constant variance over repetitions.
"HLAG"
: toeplitz by block with variable and repetition specific variance.
"LAG"
: toeplitz by block, i.e. correlation depending on the gap between repetitions and specific to each variable. It assumes constant variance over repetitions.
"HCS"
: heteroscedastic compound symmetry by block, i.e. variable specific correlation constant over repetitions. A specific parameter is used for the off-diagonal crossing the variables at the same repetition (which is the marginal correlation parameter).
"CS"
: compound symmetry by block. It assumes constant variance and correlation over repetitions.
A data.frame with the estimated partial correlation (rho), standard error, degrees-of-freedom, confidence interval, and p-value (test of no correlation).
When structure="CS"
or structure="HCS"
is used with repeated measurements, a second correlation coefficient (r) is output where the between subject variance has been removed (similar to Bland et al. 1995).
Bland J M, Altman D G. Statistics notes: Calculating correlation coefficients with repeated observations: Part 1—correlation within subjects BMJ 1995; 310 :446 doi:10.1136/bmj.310.6977.446 Edwards, L.J., Muller, K.E., Wolfinger, R.D., Qaqish, B.F. and Schabenberger, O. (2008), An R2 statistic for fixed effects in the linear mixed model. Statist. Med., 27: 6137-6157. https://doi.org/10.1002/sim.3429
#### no repetition ####
## example from ppcor::pcor
y.data <- data.frame(
  hl = c(7,15,19,15,21,22,57,15,20,18),
  disp = c(0.000,0.964,0.000,0.000,0.921,0.000,0.000,1.006,0.000,1.011),
  deg = c(9,2,3,4,1,3,1,3,6,1),
  BC = c(1.78e-02,1.05e-06,1.37e-05,7.18e-03,0.00e+00,0.00e+00,0.00e+00,
         4.48e-03,2.10e-06,0.00e+00)
)
## ppcor::pcor(y.data)

## partial correlation based on a formula
partialCor(c(hl,disp)~BC+deg, data = y.data)
partialCor(hl + disp~BC+deg, data = y.data)
## partial correlation based on a list
partialCor(list(hl~BC+deg,disp~BC+deg), data = y.data)
## via an existing model
e.lm <- lmm(hl~disp+BC+deg, data = y.data)
partialCor(e.lm)
## using a different set of covariates for outcome
partialCor(list(hl~BC+deg, disp~BC), data = y.data)

## stratified correlation (using another dataset)
data(gastricbypassW, package = "LMMstar")
gastricbypassW$weight.bin <- gastricbypassW$weight1>=120
partialCor(glucagonAUC1+glucagonAUC2~1, data = gastricbypassW, by = "weight.bin")

## compare correlation between groups
partialCor(glucagonAUC1+glucagonAUC2~1, data = gastricbypassW,
           by = "weight.bin", effects = "Dunnett")

#### with repetitions ####
## Not run:
data(gastricbypassL, package = "LMMstar")

## via a mixed model
eUN.lmm <- lmm(weight ~ glucagonAUC+time, repetition =~time|id,
               data = gastricbypassL, structure = "UN")
partialCor(eUN.lmm)

## mean: variable and timepoint specific mean parameter (8)
## variance: variable and timepoint specific variance parameter (8)
## correlation: correlation parameter specific for each variable and time lag (10)
e.cor <- partialCor(weight+glucagonAUC~time, repetition =~time|id,
                    data = gastricbypassL, structure = "LAG")
e.cor
coef(attr(e.cor,"lmm"), effects = "correlation")
if(require(ggplot2)){
  autoplot(e.cor)
}

## same except for the mean structure: variable specific mean parameter (2)
e.cor2 <- partialCor(weight+glucagonAUC~time, repetition =~time|id,
                     data = gastricbypassL, structure = "LAG")

## mean: variable and timepoint specific mean parameter (8)
## variance: variable specific variance parameter (2)
## correlation: correlation parameter specific for each variable and some time lag (4)
e.cor3 <- partialCor(weight+glucagonAUC~time, repetition =~time|id,
                     data = gastricbypassL, structure = "CS")
e.cor3
coef(attr(e.cor3,"lmm"), effects = "correlation")
if(require(ggplot2)){
  autoplot(e.cor3)
}
## End(Not run)
Data from the potassium intake study, a randomized placebo-controlled crossover study where the effect of potassium supplement (90 mmol/day) on the renin-angiotensin-aldosterone system (RAAS) was assessed. This dataset is in the long format (i.e. one line per measurement) and contains measurements over 6 timepoints for each time period.
id
: patient identifier.
sequence
: treatment group to which the patient has been randomized.
period
: time period.
treatment
: treatment during the time period.
time
: time within each period.
aldo
: ??
data(potassiumRepeatedL)
Dreier et al. Effect of increased potassium intake on the renin-angiotensin-aldosterone system and subcutaneous resistance arteries: a randomized crossover study, Nephrol Dial Transplant (2020) 110. doi: 10.1093/ndt/gfaa114
Data from the potassium intake study, a randomized placebo-controlled crossover study where the effect of potassium supplement (90 mmol/day) on the renin-angiotensin-aldosterone system (RAAS) was assessed. This dataset is in the long format (i.e. one line per measurement).
id
: patient identifier.
sequence
: treatment group to which the patient has been randomized.
period
: time period.
treatment
: treatment during the time period.
auc
: area under the curve of ?? during the time period.
bsauc
: ??
aldo
: ??
data(potassiumSingleL)
Dreier et al. Effect of increased potassium intake on the renin-angiotensin-aldosterone system and subcutaneous resistance arteries: a randomized crossover study, Nephrol Dial Transplant (2020) 110. doi: 10.1093/ndt/gfaa114
Data from the potassium intake study, a randomized placebo-controlled crossover study where the effect of potassium supplement (90 mmol/day) on the renin-angiotensin-aldosterone system (RAAS) was assessed. This dataset is in the wide format (i.e. one line per patient).
id
: patient identifier.
sequence
: treatment group to which the patient has been randomized.
treatment1
: treatment during the first time period.
treatment2
: treatment during the second time period.
auc1
: area under the curve of ?? during the first time period.
auc2
: area under the curve of ?? during the second time period.
bsauc1
: ??
aldo1
: ??
aldo2
: ??
data(potassiumSingleW)
Dreier et al. Effect of increased potassium intake on the renin-angiotensin-aldosterone system and subcutaneous resistance arteries: a randomized crossover study, Nephrol Dial Transplant (2020) 110. doi: 10.1093/ndt/gfaa114
Estimate the expected outcome conditional on covariates and possibly on other outcomes based on a linear mixed model.
## S3 method for class 'lmm' predict( object, newdata, type = "static", p = NULL, se = NULL, robust = FALSE, df = NULL, level = 0.95, keep.data = NULL, format = "long", export.vcov = FALSE, simplify = TRUE, ... )
object |
a |
newdata |
[data.frame] a dataset containing the covariate values to condition on. For 'dynamic' predictions it should also contain cluster, timepoint, and outcome values. |
type |
[character] evaluate the expected outcome conditional on covariates only ( |
p |
[numeric vector] value of the model coefficients at which to evaluate the predictions. Only relevant if differs from the fitted values. |
se |
[logical] should the standard error and confidence intervals for the predictions be output?
It can also be a logical vector of length 2 to indicate the type of uncertainty to be accounted for: estimation and residual variance.
In particular |
robust |
[logical] Should robust standard errors (aka sandwich estimator) be output instead of the model-based standard errors.
Can also be |
df |
[logical] should a Student's t-distribution be used to model the distribution of the predicted mean. Otherwise a normal distribution is used. |
level |
[numeric,0-1] the confidence level of the confidence intervals. |
keep.data |
[logical] should the dataset relative to which the predicted means are evaluated be output along side the predicted values? Only possible in the long format. |
format |
[character] should the prediction be output
in a matrix format with clusters in row and timepoints in columns ( |
export.vcov |
[logical] should the variance-covariance matrix of the prediction error be output as an attribute (
simplify |
[logical] simplify the data format (vector instead of data.frame) and column names (no mention of the time variable) when possible. |
... |
Not used. For compatibility with the generic method. |
vcov |
[logical] should the variance-covariance matrix of the predictions be output as an attribute. |
Static predictions are made using the linear predictor while dynamic predictions use the conditional normal distribution of the missing outcome given the observed outcomes. So if outcome 1 is observed but not outcome 2, the prediction for outcome 2 is obtained as \mu_2 + \sigma_{21} \sigma_{11}^{-1} (Y_1 - \mu_1), where \mu denotes the (static) mean and \sigma the residual variance-covariance. In that case, the uncertainty is computed as the sum of the conditional variance \sigma_{22} - \sigma_{21} \sigma_{11}^{-1} \sigma_{12} plus the uncertainty about the estimated conditional mean (obtained via a delta method using numerical derivatives).
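A minimal numeric sketch of this conditional normal formula, reusing eUN.lmm and newd from the examples below; it assumes that sigma() returns the fitted 3x3 residual variance-covariance matrix and that predict() with keep.data = TRUE returns a data.frame with an estimate column:
Omega <- sigma(eUN.lmm)                                            ## fitted residual covariance
mu <- predict(eUN.lmm, newdata = newd, keep.data = TRUE)$estimate  ## static means at the 3 visits
y1 <- 6.61                                                         ## observed outcome at visit 1
mu[2] + Omega[2,1] / Omega[1,1] * (y1 - mu[1])                     ## E[Y2 | Y1 = y1]
Omega[2,2] - Omega[2,1]^2 / Omega[1,1]                             ## conditional variance of Y2 given Y1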
The model terms are computed similarly to stats::predict.lm
, by centering the design matrix around the mean value of the covariates used to fit the model.
Then the centered design matrix is multiplied by the mean coefficients and columns assigned to the same variable (e.g. three level factor variable) are summed together.
When format="long"
, a data.frame containing the following columns:
estimate
: predicted mean.
se
: uncertainty about the predicted mean.
df
: degrees-of-freedom
lower
: lower bound of the confidence interval of the predicted mean
upper
: upper bound of the confidence interval of the predicted mean
When format="wide"
, a matrix containing the predicted means (one line per cluster, one column per timepoint).
## simulate data in the long format
set.seed(10)
dL <- sampleRem(100, n.times = 3, format = "long")

## fit Linear Mixed Model
eUN.lmm <- lmm(Y ~ visit + X1 + X2 + X5,
               repetition = ~visit|id, structure = "UN", data = dL)

## prediction
newd <- data.frame(X1 = 1, X2 = 2, X5 = 3, visit = factor(1:3, levels = 1:3))
predict(eUN.lmm, newdata = newd)
predict(eUN.lmm, newdata = newd, keep.data = TRUE)
predict(eUN.lmm, newdata = newd, keep.data = TRUE, se = c(TRUE,TRUE))

## dynamic prediction
newd.d1 <- cbind(newd, Y = c(NA,NA,NA))
predict(eUN.lmm, newdata = newd.d1, keep.data = TRUE, type = "dynamic")
newd.d2 <- cbind(newd, Y = c(6.61,NA,NA))
predict(eUN.lmm, newdata = newd.d2, keep.data = TRUE, type = "dynamic")
newd.d3 <- cbind(newd, Y = c(1,NA,NA))
predict(eUN.lmm, newdata = newd.d3, keep.data = TRUE, type = "dynamic")
newd.d4 <- cbind(newd, Y = c(1,1,NA))
predict(eUN.lmm, newdata = newd.d4, keep.data = TRUE, type = "dynamic")
Estimate the expected outcome conditional on covariates and possibly on other outcomes based on group-specific linear mixed models.
## S3 method for class 'mlmm' predict( object, p = NULL, newdata = NULL, keep.data = FALSE, simplify = TRUE, ... )
Evaluate the (restricted) log-likelihood around the Maximum Likelihood Estimate (MLE) of a linear mixed model. One parameter is varied at a time while the other parameters are either kept constant or set to the value maximizing the constrained likelihood (profile likelihood). Since a locally quadratic log-likelihood with a Hessian equivariant in law implies normally distributed estimates (Geyer 2013), this can help assessing whether confidence intervals and p-values can be trusted in small samples with a non-normally distributed outcome.
## S3 method for class 'lmm' profile( fitted, effects = NULL, profile.likelihood = FALSE, maxpts = NULL, level = 0.95, trace = FALSE, transform.sigma = NULL, transform.k = NULL, transform.rho = NULL, transform.names = TRUE, ... )
fitted |
a |
effects |
[character vector] name of the parameters who will be constrained.
Alternatively can be the type of parameters, e.g. |
profile.likelihood |
[logical] should the other parameters be set to the value maximizing the constrained likelihood or stay at their MLE value? |
maxpts |
[integer, >0] number of points used to discretize the likelihood, |
level |
[numeric, 0-1] the confidence level of the confidence intervals used to decide about the range of values for each parameter. |
trace |
[logical] Show the progress of the execution of the function. |
transform.sigma |
[character] Transformation used on the variance coefficient for the reference level. One of |
transform.k |
[character] Transformation used on the variance coefficients relative to the other levels. One of |
transform.rho |
[character] Transformation used on the correlation coefficients. One of |
transform.names |
[logical] Should the name of the coefficients be updated to reflect the transformation that has been used? |
... |
Not used. For compatibility with the generic method. |
Each parameter defined by the argument effects
is treated separately:
the confidence interval of a parameter is discretized with maxpts
points,
this parameter is set to each discretization value.
the other parameters are either set to the (unconstrained) MLE (profile.likelihood=FALSE
)
or to the constrained MLE (profile.likelihood=TRUE
). The latter case is much more computationally intensive as it implies re-running the estimation procedure.
the (restricted) log-likelihood is evaluated.
A data.frame object containing the log-likelihood for various parameter values.
Geyer, C. J. (2013). Asymptotics of maximum likelihood without the lln or clt or sample size going to infinity. In Advances in Modern Statistical Theory and Applications: A Festschrift in honor of Morris L. Eaton, pages 1–24. Institute of Mathematical Statistics.
data(gastricbypassW, package = "LMMstar") e.lmm <- lmm(weight2 ~ weight1 + glucagonAUC1, data = gastricbypassW, control = list(optimizer = "FS")) ## profile logLiklihood ## Not run: e.pro <- profile(e.lmm, effects = "all", maxpts = 10, profile.likelihood = TRUE) head(e.pro) plot(e.pro) ## End(Not run) ## along a single parameter axis e.sliceNone <- profile(e.lmm, effects = "all", maxpts = 10, transform.sigma = "none") plot(e.sliceNone) e.sliceLog <- profile(e.lmm, effects = "all", maxpts = 10, transform.sigma = "log") plot(e.sliceLog)
Recover the random effects from the variance-covariance parameters of a linear mixed model.
## S3 method for class 'lmm' ranef( object, effects = "mean", scale = "absolute", se = FALSE, df = NULL, transform = (effects %in% c("std", "variance")), p = NULL, newdata = NULL, format = "long", simplify = TRUE, ... )
object |
a |
effects |
[character] should the estimated random effects ( |
scale |
[character] should the total variance, variance relative to each random effect, and residual variance be output ( |
se |
[logical] should standard error and confidence intervals be evaluated using a delta method? Will slow down the execution of the function. |
df |
[logical] Should degrees-of-freedom, computed using Satterthwaite approximation, be output. |
transform |
[logical] should confidence intervals for the variance estimates (resp. relative variance estimates) be evaluated using a log-transform (resp. atanh transformation)? |
p |
[numeric vector] value of the model coefficients to be used. Only relevant if differs from the fitted values. |
newdata |
[data.frame] dataset relative to which the random effects should be computed. Only relevant if differs from the dataset used to fit the model. |
format |
[character] should each type of random effect be output in a data.frame ( |
simplify |
[logical] when relevant will convert list with a single element to vectors and omit unessential output. |
... |
for internal use. |
Consider the following mixed model:
Y_i = X_i β + Z_i b_i + ε_i
where the variance of the random effects b_i is denoted Ω, the variance of the observations Y_i is denoted Σ_i = Z_i Ω Z_i' + σ² I, and the variance of the residual errors ε_i is σ² I with I the identity matrix.
The random effects are estimated according to:
E[b_i | Y_i] = Ω Z_i' Σ_i^{-1} (Y_i - X_i β)
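As an illustration of this formula, here is a self-contained sketch (plain matrix algebra, not the package's internal code) computing the predicted random intercept of a single cluster:

## random intercept model with m repetitions: Z_i = 1, Omega = tau,
## Sigma_i = tau * J + sigma2 * I and b_i = tau * 1' Sigma_i^{-1} (Y_i - X_i beta)
m <- 4; tau <- 2; sigma2 <- 1
set.seed(1)
Y.i <- 10 + rnorm(1, sd = sqrt(tau)) + rnorm(m, sd = sqrt(sigma2)) ## one cluster, mean 10
Sigma.i <- tau * matrix(1, m, m) + sigma2 * diag(m)                ## marginal variance of Y_i
b.hat <- tau * matrix(1, 1, m) %*% solve(Sigma.i) %*% (Y.i - 10)
b.hat              ## shrunken toward 0 compared to the raw cluster mean
mean(Y.i - 10)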
A data.frame or a list depending on the argument format
.
if(require(nlme)){
  data(gastricbypassL, package = "LMMstar")

  ## random intercept
  e.RI <- lmm(weight ~ time + (1|id), data = gastricbypassL)
  ranef(e.RI, effects = "mean")
  ranef(e.RI, effects = "mean", se = TRUE)
  ranef(e.RI, effects = "variance")
  ranef(e.RI, effects = "variance", format = "wide")
}
Combine linear hypothesis tests from possibly different linear mixed models.
## S3 method for class 'Wald_lmm' rbind( model, ..., effects = NULL, rhs = NULL, univariate = TRUE, multivariate = TRUE, name = NULL, name.short = TRUE, sep = ": " )
model |
a |
... |
possibly other |
effects |
[character or numeric matrix] how to combine the left-hand side of the hypotheses.
By default identity matrix but can also be |
rhs |
[numeric vector] the right hand side of the hypothesis. Should have the same length as the number of row of argument |
univariate |
[logical] Should an estimate, standard error, confidence interval, and p-value be output for each hypothesis? |
multivariate |
[logical] Should all hypotheses be simultaneously tested using a multivariate Wald test? |
name |
[character vector or NULL] character used to identify each model in the output. By default, use the name of the outcome of the model. |
name.short |
[logical] use short names for the output coefficients, e.g., omit the regression variable name when the same regression variable is used in all models. |
sep |
[character] character used to separate the name/outcome and the covariate when identifying the linear hypotheses. |
In presence of measurements from the same cluster across several models,
the influence function is used to estimate the covariance between the model parameters.
This is why the (robust) standard errors may not match the (model-based) standard errors from the linear mixed models.
Setting the argument robust
to FALSE
when calling anova.lmm
will rescale the (robust) standard errors to mimic the original model-based standard errors.
## simulate data
set.seed(10)
dL <- sampleRem(1e2, n.times = 3, format = "long")

## estimate mixed models
e.lmm1 <- lmm(Y ~ X1+X2+X3, repetition = ~visit|id, data = dL,
              structure = "CS", df = FALSE)
e.lmm2 <- lmm(Y ~ X1+X8+X9, repetition = ~visit|id, data = dL,
              structure = "CS", df = FALSE)

## combine null hypotheses
## - model-based standard errors
AAA <- anova(e.lmm1, effect = c("X1|X2,X3"="X1=0","X2|X1,X3"="X2=0"), simplify = FALSE)
BBB <- anova(e.lmm2, effect = c("X1|X8,X9"="X1=0"), simplify = FALSE)
ZZZ <- rbind(AAA,BBB)
summary(ZZZ) ## adjusted for multiple testing
rbind(model.tables(e.lmm1)[2:3,], model.tables(e.lmm2)[2,,drop=FALSE])

## select null hypotheses & combine (model-based like standard errors)
AA <- anova(e.lmm1, effect = c("X1|X2,X3"="X1=0","X2|X1,X3"="X2=0"),
            robust = TRUE, simplify = FALSE)
BB <- anova(e.lmm2, effect = c("X1|X8,X9"="X1=0"),
            robust = TRUE, simplify = FALSE)
ZZ <- rbind(AA,BB)
summary(ZZ) ## adjusted for multiple testing
rbind(model.tables(e.lmm1, robust = TRUE)[2:3,],
      model.tables(e.lmm2, robust = TRUE)[2,,drop=FALSE])
Variance-covariance structure parametrized via random effects. Can be stratified on a categorical variable.
RE(formula, var.cluster, var.time, ranef = NULL, add.time)
formula |
formula indicating on which variable to stratify the residual variance and correlation (left hand side) and variables influencing the residual variance and correlation (right hand side). |
var.cluster |
[character] cluster variable. |
var.time |
[character] time variable. |
ranef |
[list] characteristics of the random effects |
add.time |
not used. |
A typical formula would be ~1
, indicating a variance constant over time and the same correlation between all pairs of times.
An object of class CS
that can be passed to the argument structure
of the lmm
function.
RE(~1, var.cluster = "id", var.time = "time")
RE(~gender, var.cluster = "id", var.time = "time")
RE(gender~(1|id), var.time = "time")
Auxiliary function that can be used when specifying the argument columns
(e.g. calling confint.lmm
) to remove columns.
Not called remove to avoid confusion with base::remove
rem(...)
... |
[character vector] name of the columns to be removed to the default output. |
A character vector
set.seed(10)
dW <- sampleRem(25, n.times = 1, format = "long")
e.lmm <- lmm(Y~X1, data = dW)
confint(e.lmm, columns = rem("estimate"))
Create a vector containing, for each line of the dataset, the number of occurrences of the corresponding cluster up to the current line. Can stratify the number of occurrences on one or several variables.
repetition( formula, data, format = "factor", keep.time = TRUE, sep = c(":", ".") )
formula |
[formula] Specify the structure of the data: the time/repetition variable and the grouping variable, e.g. ~1|id, ~ time|id, or ~time+region|id. |
data |
[data.frame] dataset containing the observations. |
format |
[character] the type of the output: a numeric vector ( |
keep.time |
[logical] should the value of the time variable at the repetition be kept in the output (e.g. baseline.1, baseline.2, followUp.1 instead of 1,2,3).
Only relevant when argument |
sep |
[character vector of length 2] character strings used to combine time variables (first element) and the name of the time variable with its values (second element). |
A numeric/character/factor vector, depending on argument format
.
data(sleepL, package = "LMMstar")
sleepL$dday <- paste0("d",sleepL$day)
sleepL$rep <- repetition(~1|id, data = sleepL)
sleepL$repDay <- repetition(~dday|id, data = sleepL)
sleepL$repDay.num <- repetition(~dday|id, data = sleepL, format = "numeric")
head(sleepL,15)
Non-parametric bootstrap or permutation test for a linear mixed model.
Non-parametric bootstrap or permutation test for group-specific linear mixed models.
resample(object, type, ...) ## S3 method for class 'lmm' resample( object, type, effects, n.sample = 1000, studentized = TRUE, trace = TRUE, seed = NULL, cpus = 1, export.cpus = NULL, ... ) ## S3 method for class 'mlmm' resample( object, type, method = NULL, n.sample = 1000, studentized = TRUE, trace = TRUE, seed = NULL, cpus = 1, export.cpus = NULL, ... )
object |
a |
type |
[character] should permutation test ( |
... |
Not used. For compatibility with the generic method. |
effects |
[character vector] the variable(s) to be permuted or the effect(s) to be tested via non-parametric bootstrap. Can also be a function of the model parameters when performing non-parametric bootstrap. |
n.sample |
[integer] the number of samples used. |
studentized |
[logical] should a studentized bootstrap or permutation test be used? |
trace |
[logical] should the execution of the resampling be traced? |
seed |
[integer, >0] Random number generator (RNG) state used when starting resampling. |
cpus |
[integer, >0] number of child-processes for parallel evaluation.
If |
export.cpus |
[character vector] name of the variables to export to each cluster. |
method |
[character] type of adjustment for multiple comparisons, one of |
All approaches are carried out at the cluster level:
Bootstrap: sampling clusters with replacement. If a cluster is picked twice, a different cluster id is used for each pick.
Permutation: permuting covariate values between clusters (this only leads to the null hypothesis when the covariate values are constant within clusters) or assigning new outcome values by, under the null, permuting the normalized residuals, rescaling to residuals, and adding back the permuted fixed effect (any mean effect under H1 would be 0 because of the permutation if the variance-covariance structure is correct). The latter procedure originates from Lee and Braun (2012).
The studentized bootstrap keeps the original estimate and standard error but uses the samples to evaluate the quantiles of the distribution used to form the confidence intervals. The studentized permutation test approximates the distribution of the test statistic under the null (instead of the distribution of the estimate under the null).
P-values for the bootstrap are computed by centering the bootstrap distribution of the estimate or test statistic around 0 and evaluating the frequency at which it takes values more extreme than the observed estimate or test statistic.
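As an illustration of this p-value computation, here is a generic sketch with made-up numbers (the +1 correction is a common convention and not necessarily the one used internally):

set.seed(1)
estimate <- 0.4                                       ## observed estimate (made-up)
boot.estimate <- rnorm(1000, mean = 0.4, sd = 0.2)    ## bootstrap estimates (made-up)
boot.centered <- boot.estimate - mean(boot.estimate)  ## center the bootstrap distribution at 0
p.value <- (1 + sum(abs(boot.centered) >= abs(estimate))) / (1 + length(boot.centered))
p.value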
Oliver E. Lee and Thomas M. Braun (2012), Permutation Tests for Random Effects in Linear Mixed Models. Biometrics, Journal 68(2).
## Not run: 
#### univariate linear regression ####
data(gastricbypassW, package = "LMMstar")
## rescale to ease optimization
gastricbypassW$weight1 <- scale(gastricbypassW$weight1)
gastricbypassW$weight2 <- scale(gastricbypassW$weight2)
gastricbypassW$glucagonAUC1 <- scale(gastricbypassW$glucagonAUC1)
e.lm <- lmm(weight2~weight1+glucagonAUC1, data = gastricbypassW)
model.tables(e.lm)

## non-parametric bootstrap
resample(e.lm, type = "boot", effects = c("weight1","glucagonAUC1"), seed = 10)
## permutation test
resample(e.lm, type = "perm-var", effects = "weight1", seed = 10)
resample(e.lm, type = "perm-var", effects = "glucagonAUC1", seed = 10)
## using multiple cores
resample(e.lm, type = "boot", effects = c("weight1","glucagonAUC1"), cpus = 4)

#### random intercept model ####
data(gastricbypassL, package = "LMMstar")
gastricbypassL$weight <- scale(gastricbypassL$weight)
gastricbypassL$glucagonAUC <- scale(gastricbypassL$glucagonAUC)
gastricbypassL$gender <- as.numeric(gastricbypassL$id) %% 2
gastricbypassLR <- na.omit(gastricbypassL)
eCS.lmm <- lmm(weight~glucagonAUC+gender,
               data = gastricbypassLR, repetition = ~visit|id, structure = "CS")
model.tables(eCS.lmm)

## non-parametric bootstrap
resample(eCS.lmm, type = "boot", effects = c("glucagonAUC","gender"),
         seed = 10, trace = FALSE)
## permutation test
resample(eCS.lmm, type = "perm-var", effects = "gender", seed = 10)
resample(eCS.lmm, type = "perm-res", effects = "glucagonAUC", seed = 10)

## End(Not run)
Extract or compute the residuals of a linear mixed model.
## S3 method for class 'lmm' residuals( object, type = "response", variable = NULL, at = NULL, newdata = NULL, p = NULL, format = "long", keep.data = FALSE, fitted.ci = FALSE, simplify = TRUE, var, ... ) ## S3 method for class 'clmm' residuals(object, ...) ## S3 method for class 'mlmm' residuals( object, p = NULL, newdata = NULL, keep.data = FALSE, simplify = TRUE, ... )
object |
a |
type |
[character] type of residual to output such as raw residuals ( |
variable |
[character vector] name of the variable relative to which the partial residuals should be computed. |
at |
[data.frame] values for the covariates at which to evaluate the partial residuals. |
newdata |
[data.frame] dataset relative to which the residuals should be computed. Only relevant if differs from the dataset used to fit the model. |
p |
[numeric vector] value of the model coefficients at which to evaluate the residuals. Only relevant if differs from the fitted values. |
format |
[character] Should the residuals be output
in a matrix format with clusters in row and timepoints in columns ( |
keep.data |
[logical] Should the dataset relative to which the residuals are evaluated be output along side the residual values? Only possible in the long format. |
fitted.ci |
[logical] Should the confidence intervals relative to the fitted values be added to the output. Only relevant when argument |
simplify |
[logical] Simplify the data format (vector instead of data.frame) and column names (no mention of the time variable) when possible. Otherwise, information about the call and the reference values used for partial residuals will be added as an attribute. |
var |
[Deprecated] Not used anymore, replaced by argument variable. |
... |
Not used. For compatibility with the generic method. |
The argument type
defines how the residuals are computed:
"response": raw residuals, i.e. observed outcome minus fitted value, ε_ij = Y_ij - X_ij β.
"pearson": each raw residual is divided by its modeled standard deviation, ε_ij / sqrt(ω_ij).
"studentized": same as "pearson" but excluding the contribution of the cluster in the modeled standard deviation, ε_ij / sqrt(ω_ij - q_ij).
"normalized": raw residuals are multiplied, within clusters, by the inverse of the (upper) Cholesky factor of the modeled residual variance-covariance matrix, ε_i C^{-1}.
"normalized2": raw residuals are multiplied, within clusters, by the inverse of the modeled residual variance-covariance matrix, ε_i Ω^{-1}.
"scaled": scaled residuals (see PROC MIXED in SAS). Numerically identical to "normalized"
but computed by sequentially scaling and centering the residuals, to make them conditionally independent of previous residuals from the same cluster at previous repetitions.
"partial": partial residuals, Y - X_2 β_2, i.e. the contribution of the variable(s) of interest plus the raw residuals. A reference level can also be specified via the attribute
"reference"
to change the absolute level of the partial residuals.
"partial-center": partial residuals with centered continuous covariates, Y - X_2^c β_2, where X_2^c has been centered, i.e., has 0-mean.
where
X = (X_1, X_2) is the design matrix. For partial residuals, it is split according to the variable(s) in argument variable (X_1) and the rest (X_2).
Y is the outcome.
β = (β_1, β_2) are the estimated mean coefficients relative to X = (X_1, X_2).
Ω is the modeled variance-covariance of the residuals and ω its diagonal elements.
C is the upper Cholesky factor of Ω, i.e. the upper triangular matrix satisfying C' C = Ω.
Q_i is a cluster-specific correction factor, approximating the contribution of cluster i to Ω, with diagonal elements q_i.
D_i is the upper Cholesky factor of Ω - Q_i.
Setting argument fitted.ci
to TRUE
, simplify
to FALSE
, format
to "long"
returns an attribute "grad"
containing the first order partial derivatives of the residuals with respect to the model parameters.
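As a small consistency check of the definitions above (a sketch, not part of the official examples): for a model without repetitions the modeled variance-covariance matrix is 1x1, so Pearson residuals should simply be the raw residuals divided by the modeled standard deviation.

set.seed(10)
dW <- sampleRem(25, n.times = 1, format = "long")
e.lmm <- lmm(Y ~ X1, data = dW)
r.raw <- residuals(e.lmm, type = "response")
r.pearson <- residuals(e.lmm, type = "pearson")
range(r.pearson - r.raw / sqrt(as.numeric(sigma(e.lmm))))  ## should be numerically 0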
lmm: a vector or a data.frame when format="long"
(one line per observation, one column per type of residual),
a matrix when format="wide"
(one line per cluster, one column per timepoint).
#### simulate data in the long format ####
set.seed(10)
dL <- sampleRem(100, n.times = 3, format = "long")

#### Linear Model ####
e.lm <- lmm(Y ~ visit + X1 + X2 + X6, data = dL)

## partial residuals
pRes <- residuals(e.lm, type = "partial", variable = "X6")
range(residuals(e.lm) + dL$X6 * coef(e.lm)["X6"] - pRes)
e.reslm <- residuals(e.lm, type = "partial", variable = "X6",
                     keep.data = TRUE, simplify = FALSE)
plot(e.reslm)

## partial residuals with specific reference
residuals(e.lm, type = "partial", variable = "X1",
          at = data.frame(visit=factor(2,1:3),X2=0,X6=3))

## partial residuals with centered covariates
residuals(e.lm, type = "partial-center", variable = "X1")

#### Linear Mixed Model ####
eUN.lmm <- lmm(Y ~ visit + X1 + X2 + X5 + X6,
               repetition = ~visit|id, structure = "UN", data = dL)

## residuals
e.resL <- residuals(eUN.lmm, type = "normalized",
                    keep.data = TRUE, simplify = FALSE)
plot(e.resL, type = "qqplot")
plot(e.resL, type = "scatterplot", labeller = ggplot2::label_both)
e.resW <- residuals(eUN.lmm, format = "wide", type = "normalized", simplify = FALSE)
plot(e.resW, type = "correlation")

## residuals and predicted values
residuals(eUN.lmm, type = "all")
residuals(eUN.lmm, type = "all", keep.data = TRUE)

## partial residuals
residuals(eUN.lmm, type = "partial", variable = c("(Intercept)","visit","X6"))
Sample longitudinal data with covariates.
sampleRem( n, n.times, mu = 1:n.times, sigma = rep(1, n.times), lambda = rep(1, n.times), beta = c(2, 1, 0, 0, 0, 1, 1, 0, 0, 0), gamma = matrix(0, nrow = n.times, ncol = 10), format = "wide", latent = FALSE )
n |
[integer,>0] sample size |
n.times |
[integer,>0] number of visits (i.e. measurements per individual). |
mu |
[numeric vector] expected measurement value at each visit (when all covariates are fixed to 0). Must have length |
sigma |
[numeric vector,>0] standard error of the measurements at each visit (when all covariates are fixed to 0). Must have length |
lambda |
[numeric vector] covariance between the measurement at each visit and the individual latent variable. Must have length |
beta |
[numeric vector of length 10] regression coefficient between the covariates and the latent variable. |
gamma |
[numeric matrix with n.times rows and 10 columns] regression coefficient specific to each timepoint (i.e. interaction with time). |
format |
[character] Return the data in the wide format ( |
latent |
[logical] Should the latent variable be output? |
The generative model is a latent variable model where each outcome (Y_j) loads on the latent variable (η) with a coefficient lambda:
Y_j = μ_j + λ_j η + σ_j ε_j
The latent variable is related to the covariates (X_1, ..., X_10):
η = β_1 X_1 + ... + β_10 X_10 + ξ
ε_j and ξ are independent random variables with standard normal distribution.
a data.frame
#### generate data in the wide format ####
set.seed(10)
dW <- sampleRem(100, n.times = 3, format = "wide")
head(dW)

#### generate data in the long format ####
set.seed(10)
dL <- sampleRem(100, n.times = 3, format = "long")
head(dL)
Produce a matrix of plots for continuous variables: scatterplots, histograms, correlation and missing values.
Inspired by the ggpairs
function of the R package GGally.
scatterplot( data, formula, columns, format = NULL, group = NULL, transform = NULL, facet = "grid_both", alpha.point = 1, type.diag = "histogram", bins = NULL, position.bar = "identity", linewidth.density = NULL, alpha.area = NULL, method.cor = "pearson", name.cor = "r", size.cor = NULL, digits = c(3, 2), display.NA = NULL, color = NULL, xlim = NULL, ylim = NULL, size.axis = NULL, size.legend = NULL, size.facet = NULL )
data |
[data.frame] dataset containing the variables to be displayed. |
formula |
[formula] formula indicating the variables to be used (outcome~time|id). Long format only. |
columns |
[character vector] Columns whose numerical values are to be displayed. Wide format only. |
format |
[character] Is the dataset in the long ( |
group |
[character] optional group variable used to color the points, stratify the histogram/density and correlation. |
transform |
[character or function] optional transformation to be applied on the outcome. |
facet |
[character] whether to use |
alpha.point |
[numeric] the transparency level used to display the points in the scatterplot. |
type.diag |
[character] type of graphical display on the diagonal: |
bins |
[character or numeric vector] algorithm or values or number of values used to create the histogram cells.
When using |
position.bar |
[character] passed to |
linewidth.density |
[numeric,>0] width of the lines on the density plot. |
alpha.area |
[numeric, 0-1] the transparency level used to display the area under the density curve or histogram. |
method.cor |
[character] estimator of the correlation. Argument passed to |
name.cor |
[character] character used to represent the correlation. By default |
size.cor |
[numeric,>0] size of the font used to display the correlation or information about missing values. |
digits |
[numeric of length 2] number of digits used to display the correlation or round the percentage of missing values. |
display.NA |
[0:2 or "only"] Should the number of missing values be displayed. When taking value 2, will also display the percentage of missing values. |
color |
[character vector] color used to display the values for each group. |
xlim |
[numeric,>0 or "common"] range of the x-axis. |
ylim |
[numeric,>0 or "common"] range of the y-axis. |
size.axis |
[numeric,>0] size of the font used to display the tick labels. |
size.legend |
[numeric,>0] size of the font used to display the legend.
When argument |
size.facet |
[numeric,>0] size of the font used to display the facets (row and column names).
When argument |
In the long format, the outcome variable contains the numerical values to be displayed. The time variable will be used to split the outcome and display each split separately or jointly with another split. The identifier links the outcome values across time.
a list of ggplot objects (facet="grid"
) or a ggplot object (facet="grid2"
)
data(gastricbypassL, package = "LMMstar")
gastricbypassL$group <- as.numeric(gastricbypassL$id) %% 3
data(gastricbypassW, package = "LMMstar")

## single group (wide or long format)
scatterplot(gastricbypassL, formula = weight~time|id)
scatterplot(gastricbypassW, columns = paste0("weight",1:4))

## Not run: 
## change the number of bins
scatterplot(gastricbypassL, formula = weight~time|id, type.diag = "hist", bins = 15)
## remove variable name in facet
scatterplot(gastricbypassL, formula = weight~time|id, facet = "grid")
## use boxplot instead of histogram
scatterplot(gastricbypassL, formula = weight~time|id, type.diag = "boxplot")
## same scale
scatterplot(gastricbypassL, formula = weight~time|id, xlim = "common", ylim = "common")
## transform outcome
scatterplot(gastricbypassL, formula = weight~time|id, transform = "log")
## handling missing values
scatterplot(gastricbypassL, formula = glucagonAUC~time|id)
## coloring per group
scatterplot(gastricbypassL, formula = weight~time|id, group = "group")
## only display NAs
scatterplot(gastricbypassL, formula = glucagonAUC~time|id,
            display.NA = "only", group = "group")
## increase size of the legend (text, symbol, width)
scatterplot(gastricbypassL, formula = glucagonAUC~time|id,
            display.NA = "only", group = "group", size.legend = c(20,3,0.5))
## End(Not run)
Simulated data with a nested structure: Student/Class/School and one outcome.
school
: school identifier.
class
: class identifier (nested within school).
student
: student identifier (nested within class).
outcome
: simulated outcome value.
data(schoolL)
Extract or compute the first derivative of the log-likelihood of a linear mixed model.
## S3 method for class 'lmm' score( x, effects = "mean", indiv = FALSE, newdata = NULL, p = NULL, transform.sigma = NULL, transform.k = NULL, transform.rho = NULL, transform.names = TRUE, ... )
x |
a |
effects |
[character] Should the score relative to all coefficients be output, or only coefficients relative to the mean ( |
indiv |
[logical] Should the contribution of each cluster to the score be output? Otherwise output the sum of all clusters of the derivatives. |
newdata |
[data.frame] dataset relative to which the score should be computed. Only relevant if differs from the dataset used to fit the model. |
p |
[numeric vector] value of the model coefficients at which to evaluate the score. Only relevant if differs from the fitted values. |
transform.sigma |
[character] Transformation used on the variance coefficient for the reference level. One of |
transform.k |
[character] Transformation used on the variance coefficients relative to the other levels. One of |
transform.rho |
[character] Transformation used on the correlation coefficients. One of |
transform.names |
[logical] Should the name of the coefficients be updated to reflect the transformation that has been used? |
... |
Not used. For compatibility with the generic method. |
For details about the arguments transform.sigma, transform.k, transform.rho, see the documentation of the coef.lmm function.
When argument indiv is FALSE
, a vector with the value of the score relative to each coefficient.
When argument indiv is TRUE
, a matrix with the value of the score relative to each coefficient (in columns) and each cluster (in rows).
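A minimal usage sketch (not part of the official examples); at the (RE)ML estimate the total score should be numerically close to zero:

set.seed(10)
dL <- sampleRem(100, n.times = 3, format = "long")
e.lmm <- lmm(Y ~ X1 + X2, repetition = ~visit|id, data = dL)
score(e.lmm)                         ## approximately 0 at the estimate
colSums(score(e.lmm, indiv = TRUE))  ## sum of the cluster-wise contributions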
Extract or compute the first derivative of the log-likelihood of each linear mixed model.
## S3 method for class 'mlmm' score( x, effects = "contrast", indiv = FALSE, p = NULL, newdata = NULL, ordering = "by", simplify = TRUE, ... )
x |
a |
effects |
[character] By default will output the estimates relative to the hypotheses being tested ( |
indiv |
[logical] Should the contribution of each cluster to the score be output? Otherwise output the sum of all clusters of the derivatives. |
p |
[list of numeric vector] list of model coefficients to be used. Only relevant if differs from the fitted values. |
newdata |
[NULL] Not used. For compatibility with the generic method. |
ordering |
[character] should the output be ordered by type of parameter ( |
simplify |
[logical] should the score be combined across models into a single vector ( |
... |
passed to |
When argument indiv is FALSE
, a vector with the value of the score relative to each coefficient.
When argument indiv is TRUE
, a matrix with the value of the score relative to each coefficient (in columns) and each cluster (in rows).
When effects
differs from "contrast"
and simplify=FALSE
, it will store the score in a list with an element relative to each parameter or model (argument ordering
).
Extract the unique set of residual variance-covariance matrices or the ones relative to specific clusters.
## S3 method for class 'lmm' sigma( object, cluster = NULL, p = NULL, chol = FALSE, inverse = FALSE, simplify = TRUE, ... )
object |
a |
cluster |
[character, data.frame, NULL] identifier of the cluster(s) for which to extract the residual variance-covariance matrix.
For new clusters, a dataset containing the information (cluster, time, strata, ...) to be used to generate the residual variance-covariance matrices.
When |
p |
[numeric vector] value of the model coefficients at which to evaluate the residual variance-covariance matrix. Only relevant if differs from the fitted values. |
chol |
[logical] Output the cholesky factorization of the variance-covariance matrix. |
inverse |
[logical] Output the matrix inverse of the variance-covariance matrix. |
simplify |
[logical] When there is only one variance-covariance matrix, output a matrix instead of a list of matrices. |
... |
Not used. For compatibility with the generic method. |
A list where each element contains a residual variance-covariance matrix.
Can also be directly a matrix when argument simplify=TRUE
and there is a single residual variance-covariance matrix.
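A small self-contained sketch illustrating the chol and inverse arguments (the triangular convention of the returned Cholesky factor is not asserted here, hence both reconstructions are checked):

set.seed(10)
dL <- sampleRem(100, n.times = 3, format = "long")
e.lmm <- lmm(Y ~ X1, repetition = ~visit|id, structure = "UN", data = dL, df = FALSE)
Sigma <- sigma(e.lmm)
U <- sigma(e.lmm, chol = TRUE)
range(crossprod(U) - Sigma)   ## one of these two reconstructions
range(tcrossprod(U) - Sigma)  ## should be numerically 0
range(sigma(e.lmm, inverse = TRUE) %*% Sigma - diag(NROW(Sigma)))  ## inverse times Sigma = identity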
## simulate data in the long format
set.seed(10)
dL <- sampleRem(100, n.times = 3, format = "long")
dL$id.fac <- paste0("id",dL$id)

## fit Linear Mixed Model
eUN.lmm <- lmm(Y ~ X1 + X2 + X5, repetition = ~visit|id.fac,
               structure = "UN", data = dL, df = FALSE)

## extract residuals variance covariance matrix
sigma(eUN.lmm)                                ## unique patterns
sigma(eUN.lmm, cluster = c("id1","id5"))      ## existing clusters
sigma(eUN.lmm, cluster = dL[1:7,,drop=FALSE]) ## new clusters
Compute summary statistics for multiple variables and/or multiple groups and save them in a data frame.
summarize( formula, data, repetition = NULL, columns = NULL, FUN = NULL, na.action = stats::na.pass, na.rm = FALSE, level = 0.95, skip.reference = FALSE, digits = NULL, filter = NULL, ... )
formula |
[formula] on the left hand side the outcome(s) and on the right hand side the grouping variables. E.g. Y1+Y2 ~ Gender + Gene will compute for each gender and gene the summary statistics for Y1 and for Y2. |
data |
[data.frame] dataset containing the observations. |
repetition |
[formula] Specify the structure of the data: the time/repetition variable and the grouping variable, e.g. ~ time|id. Used in the long format to count the number of missing values (i.e. add the number of missing rows) and evaluate the correlation. |
columns |
[character vector] name of the summary statistics to be kept in the output. Can be any of, or a combination of:
|
FUN |
[function] user-defined function for computing summary statistics. It should take a vector as an argument and output a named single value or a named vector. |
na.action |
[function] a function which indicates what should happen when the data contain 'NA' values.
Passed to the |
na.rm |
[logical] Should the summary statistics be computed by omitting the missing values. |
level |
[numeric,0-1] the confidence level of the confidence intervals. |
skip.reference |
[logical] should the summary statistics for the reference level of categorical variables be omitted? |
digits |
[integer, >=0] the minimum number of significant digits to be used to display the results. Passed to |
filter |
[character] a regular expression passed to |
... |
additional arguments passed to argument |
This function is essentially an interface to the stats::aggregate
function.
WARNING: it has the same name as a function from the dplyr package. If you have already loaded dplyr, you should call it with the package prefix, i.e. use LMMstar:::summarize.
Confidence intervals (CI) and prediction intervals (PI) for the mean are computed via stats::t.test
.
Confidence intervals (CI) for the standard deviation are computed using a chi-squared approximation.
Confidence intervals (CI) for the median are computed via asht::medianTest
.
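For reference, a sketch of the chi-squared approximation for the standard deviation (the textbook formula, shown for illustration rather than extracted from the package internals):

## 95% CI for a standard deviation: (n-1) s^2 / sigma^2 follows a chi-squared distribution with n-1 df
set.seed(10)
x <- rnorm(50, mean = 0, sd = 2)
n <- length(x); s <- sd(x)
c(lower = s * sqrt((n-1) / qchisq(0.975, df = n-1)),
  estimate = s,
  upper = s * sqrt((n-1) / qchisq(0.025, df = n-1)))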
A data frame containing summary statistics (in columns) for each outcome and value of the grouping variables (rows). It has an attribute "correlation"
when it was possible to compute the correlation matrix for each outcome with respect to the grouping variable.
correlate
for correlation matrix.
#### simulate data (wide format) ####
set.seed(10)
d <- sampleRem(1e2, n.times = 3)
d$treat <- sample(LETTERS[1:3], NROW(d), replace=TRUE, prob=c(0.3, 0.3, 0.4) )

## add a missing value
d2 <- d
d2[1,"Y2"] <- NA

#### summarize (wide format) ####

## summary statistic (single variable)
summarize(Y1 ~ 1, data = d)
## stratified summary statistic (single variable)
summarize(Y1 ~ X1, data = d2)
## stratified summary statistic (multiple variables)
summarize(Y1+Y2 ~ X1, data = d)

## categorical variable
summarize(treat ~ 1, data = d)
summarize(treat ~ 1, skip.reference = TRUE, data = d)

## aggregate data
summarize( ~ X1 + treat, data = d)

## user defined summary statistic
summarize(Y1 ~ 1, data = d, FUN = quantile)
summarize(Y1 ~ 1, data = d, FUN = quantile, p = c(0.25,0.75))

## complete case summary statistic
summarize(Y1+Y2 ~ X1, data = d2, na.rm = TRUE)

## shortcut to consider all outcomes with common naming
summarize(. ~ treat, data = d2, na.rm = TRUE, filter = "Y")

#### summarize (long format) ####
dL <- reshape(d2, idvar = "id", direction = "long",
              v.names = "Y", varying = c("Y1","Y2","Y3"))

summarize(Y ~ time + X1, data = dL, na.rm = TRUE)

## user defined summary statistic (outlier)
summarize(Y ~ time + X1, data = dL, FUN = function(x){
   c(outlier.down = sum(x<mean(x,na.rm=TRUE)-2*sd(x,na.rm=TRUE), na.rm=TRUE),
     outlier.up = sum(x>mean(x,na.rm=TRUE)+2*sd(x,na.rm=TRUE), na.rm=TRUE))
}, na.rm = TRUE)

## user defined summary statistic (auc)
myAUC <- function(Y,time){approxAUC(x = time, y = Y, from = 1, to = 3)}
myAUC(Y = dL[dL$id==1,"Y"], time = dL[dL$id==1,"time"])
summarize(Y ~ id, data = dL, FUN = myAUC, na.rm = TRUE)

## add correlation (see correlate function)
e.S <- summarize(Y ~ time + X1, data = dL, repetition = ~time|id,
                 na.rm = TRUE, columns = add("correlation"))
e.S

#### summarize (long format, missing lines) ####
## use repetition argument to count missing lines in the number of missing values
dL.NNA <- dL[rowSums(is.na(dL))==0,]
summarize(Y ~ time + X1, data = dL.NNA, repetition =~time|id, na.rm = TRUE)
Summarize missing data patterns.
summarizeNA( data, formula, repetition = NULL, sep = "", newnames = c("variable", "frequency", "missing.pattern", "n.missing"), filter = NULL, keep.data = TRUE )
data |
[data.frame] dataset containing the observations. |
formula |
[formula] On the left hand side the variable(s) for which the missing data patterns should be evaluated and on the right hand side the grouping variables. E.g. Y1 ~ Gender will compute missing data pattern w.r.t Y1 for each gender. |
repetition |
[formula] Specify the structure of the data when in the long format: the time/repetition variable and the grouping variable, e.g. ~ time|id. When specified the missing data pattern is specific to each variable not present in the formula. |
sep |
[character] character used to separate the missing data indicator (0/1) when naming the missing data patterns. |
newnames |
[character vector of length 4] additional column containing the variable name (only when argument |
filter |
[character] a regular expression passed to |
keep.data |
[logical] should the indicator of missing data per variable in the original dataset per pattern be output. |
a data frame
autoplot.summarizeNA
for a graphical display.
#### display missing data pattern (wide format) ####
data(gastricbypassW, package = "LMMstar")
e.SNA <- summarizeNA(gastricbypassW)
e.SNA
plot(e.SNA)

## only focus on some variables
eG.SNA <- summarizeNA(gastricbypassW, filter = "glucagon")
eG.SNA
plot(eG.SNA)
summarizeNA(weight3+glucagonAUC3 ~ 1, data = gastricbypassW)

#### display missing data pattern (long format) ####
## example 1 (single group)
data(gastricbypassL, package = "LMMstar")
e.SNAL <- summarizeNA(gastricbypassL, repetition = ~time|id)
e.SNAL
plot(e.SNAL, variable = "glucagonAUC")

## example 2 (two groups)
data(calciumL, package = "LMMstar")

## over both groups
mp <- summarizeNA(calciumL, repetition = ~visit|girl)
plot(mp, variable = "bmd")
plot(mp, variable = "bmd", order.pattern = "frequency")
plot(mp, variable = "bmd", order.pattern = 5:1)

## per group
mp2 <- summarizeNA(bmd ~ grp, data = calciumL, repetition = ~visit|girl)
mp2
plot(mp2)

## artificially create different patterns in each group
calciumL2 <- calciumL[order(calciumL$girl),]
calciumL2[calciumL2$girl == 101,"bmd"] <- c(NA,NA,1,1,1)
calciumL2[calciumL2$girl == 104,"bmd"] <- c(NA,1,NA,1,NA)
mp3 <- summarizeNA(bmd ~ grp, data = calciumL2, repetition = ~visit|girl)
mp3
plot(mp3)
plot(mp3, order.pattern = "n.missing")
plot(mp3, order.pattern = "frequency")
Summary output for a linear mixed model fitted with lmm
.
## S3 method for class 'lmm' summary( object, level = 0.95, robust = FALSE, df = NULL, print = TRUE, columns = NULL, digits = 3, digits.df = 1, digits.p.value = 3, hide.data = FALSE, hide.fit = FALSE, hide.cor = NULL, type.cor = NULL, hide.var = NULL, hide.sd = NULL, hide.re = NULL, hide.mean = FALSE, ... )
object |
[lmm] output of the |
level |
[numeric,0-1] confidence level for the confidence intervals. |
robust |
[logical] Should robust standard errors (aka sandwich estimator) be output instead of the model-based standard errors.
Can also be |
df |
[logical] Should a Student's t-distribution be used to model the distribution of the coefficient. Otherwise a normal distribution is used. |
print |
[logical] should the output be printed in the console. |
columns |
[character vector] Columns to be output for the fixed effects.
Can be any of |
digits |
[integer, >0] number of digits used to display estimates. |
digits.df |
[integer, >0] number of digits used to display degrees-of-freedom. |
digits.p.value |
[integer, >0] number of digits used to display p-values. |
hide.data |
[logical] should information about the dataset not be printed. |
hide.fit |
[logical] should information about the model fit not be printed. |
hide.cor |
[logical] should information about the correlation structure not be printed. |
type.cor |
[character] should the correlation matrix be displayed ( |
hide.var |
[logical] should information about the variance not be printed. |
hide.sd |
[logical] should information about the standard deviation not be printed. |
hide.re |
[logical] should information about the random effect not be printed. |
hide.mean |
[logical] should information about the mean structure not be printed. |
... |
not used. For compatibility with the generic function. |
A list containing elements displayed in the summary:
correlation
: the correlation structure.
variance
: the variance structure.
sd
: the variance structure expressed in term of standard deviations.
mean
: the mean structure.
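A minimal usage sketch (not part of the official examples), using the hide arguments documented above to focus the output on the mean structure:

set.seed(10)
dL <- sampleRem(100, n.times = 3, format = "long")
e.lmm <- lmm(Y ~ visit + X1, repetition = ~visit|id, structure = "CS", data = dL)
summary(e.lmm)                                                    ## full output
summary(e.lmm, hide.cor = TRUE, hide.var = TRUE, hide.sd = TRUE)  ## mean structure only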
Estimates, p-values, and confidence intervals for multiple linear mixed models.
## S3 method for class 'mlmm' summary( object, digits = 3, method = NULL, print = NULL, hide.data = FALSE, hide.fit = FALSE, ... )
object |
an |
digits |
[integer,>0] number of digits used to display numeric values. |
method |
[character] type of adjustment for multiple comparisons, one of |
print |
[logical] should the output be printed in the console. Can be a vector of length 2 where the first element refer to the global tests and the second to the individual tests. |
hide.data |
[logical] should information about the dataset not be printed. |
hide.fit |
[logical] should information about the model fit not be printed. |
... |
other arguments are passed to |
Display estimated partial correlation and associated p-values and confidence intervals.
## S3 method for class 'partialCor' summary(object, digits = 3, detail = TRUE, ...)
object |
a |
digits |
[integer,>0] number of digits used to display numeric values. |
detail |
[integer,>0] passed to |
... |
other arguments are passed to |
Estimates, p-values, and confidence intervals for linear hypothesis testing, possibly adjusted for multiple comparisons.
## S3 method for class 'Wald_lmm' summary( object, print = TRUE, seed = NULL, columns = NULL, legend = TRUE, digits = 3, digits.df = 1, digits.p.value = 3, sep = ": ", ... )
object |
an |
print |
[logical] should the output be printed in the console. Can be a vector of length 2 where the first element refer to the global tests and the second to the individual tests. |
seed |
[integer] value that will be set before adjustment for multiple comparisons to ensure reproducible results.
Can also be |
columns |
[character vector] Columns to be displayed for each null hypothesis.
Can be any of |
legend |
[logical] should explanations about the content of the table be displayed. |
digits |
[integer, >0] number of digits used to display estimates. |
digits.df |
[integer, >0] number of digits used to display degrees-of-freedom. |
digits.p.value |
[integer, >0] number of digits used to display p-values. |
sep |
[character] character string used to separate the type of test (e.g. mean, variance) and the name of the test. |
... |
arguments |
By default a single-step max-test adjustment is performed in presence of multiple comparisons.
It is carried out either using the multcomp package (equal degrees-of-freedom, method="single-step"
)
or using the copula package (unequal degrees-of-freedom, method="single-step2"
).
See the argument method
of confint.Wald_lmm
for other adjustments for multiple comparisons.
When considering multiple multivariate Wald tests, adjustment for multiple comparisons for the univariate Wald tests is performed within each multivariate Wald test.
The number of tests adjusted for equals the first degree-of-freedom of the multivariate Wald statistic.
Adding the value "type" in argument "columns" ensures that the type of parameter being tested (mean, variance, correlation) is output.
NULL
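A minimal usage sketch (not part of the official examples); a Wald_lmm object is typically obtained by calling anova on a fitted lmm, here testing two mean coefficients simultaneously (the column names passed to columns are assumptions):

set.seed(10)
dL <- sampleRem(100, n.times = 3, format = "long")
e.lmm <- lmm(Y ~ X1 + X2, repetition = ~visit|id, data = dL)
e.wald <- anova(e.lmm, effects = c("X1=0","X2=0"))
summary(e.wald)                                     ## multivariate test + adjusted univariate tests
summary(e.wald, columns = c("estimate","p.value"))  ## subset of columns (names assumed)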
Data from the swabs study, where the pneumococcus was studied in 18 families with different space available for the household. This dataset is in the long format (i.e. one line per measurement).
crowding
: space available in the household.
family
: family serial number
name
: type of family member.
swabs
: number of times the swab measurement was positive.
data(swabsL)
TODO
Data from the swabs study, where the pneumococcus was studied in 18 families with different space available for the household. This dataset is in the wide format (i.e. one line per patient).
crowding
: space available in the household.
family
: family serial number
mother
: number of times the swab measurement was positive for the mother.
father
: number of times the swab measurement was positive for the father.
child1
: number of times the swab measurement was positive for the first child.
child2
: number of times the swab measurement was positive for the second child.
child3
: number of times the swab measurement was positive for the third child.
data(swabsW)
Grundy SM, Lan SP, Lachin J. The effects of chenodiol on biliary lipids and their association with gallstone dissolution in the National Cooperative Gallstone Study (SWABS). J Clin Invest. 1984 Apr;73(4):1156-66. doi: 10.1172/JCI111301.
Model terms for linear mixed models. Used by multcomp::glht
.
## S3 method for class 'lmm' terms(x, ...)
x |
a |
... |
not used, for compatibility with the generic method. |
An object of class terms
giving a symbolic representation of the mean structure.
Variance-covariance structure where the correlation depends on time elapsed between two repetitions. Can be stratified on a categorical variable.
TOEPLITZ(formula, var.cluster, var.time, type = "LAG", add.time)
formula |
formula indicating on which variable to stratify the residual variance and correlation (left hand side) and variables influencing the residual variance and correlation (right hand side). |
var.cluster |
[character] cluster variable. |
var.time |
[character] time variable. |
type |
[character] degree of flexibility of the correlation structure within covariate ( |
add.time |
Should the default formula (i.e. when |
formula: there can only be at most one covariate for the correlation structure.
A typical formula would be ~1
, indicating a variance constant over time and a correlation specific to each gap time.
type: for a binary covariate the correlation matrix can be decomposed into four blocks: A, B, B, C. A corresponds to the correlation within level 0 of the covariate, C within level 1, and B between levels 0 and 1. Different correlation structures can be specified:
"UN"
: unstructured matrix except for the diagonal elements of C which are constrained to be equal.
"LAG"
: Toeplitz structure within A, B, and C, i.e. correlation specific to each time lag and covariate level.
"CS"
: block-specific value except for C which has a different value for its diagonal elements.
An object of class TOEPLITZ
that can be passed to the argument structure
of the lmm
function.
## no covariate TOEPLITZ(~time, var.cluster = "id", var.time = "time") TOEPLITZ(gender~time, var.cluster = "id", var.time = "time") TOEPLITZ(list(~time,~time), var.cluster = "id", var.time = "time") TOEPLITZ(list(gender~time,gender~time), var.cluster = "id", var.time = "time") ## with covariates TOEPLITZ(~side, var.cluster = "id", type = "UN", var.time = "time", add.time = TRUE) TOEPLITZ(~side, var.cluster = "id", type = "LAG", var.time = "time", add.time = TRUE) TOEPLITZ(~side, var.cluster = "id", type = "CS", var.time = "time", add.time = TRUE) TOEPLITZ(gender~side, var.cluster = "id", type = "CS", var.time = "time", add.time = TRUE)
## no covariate TOEPLITZ(~time, var.cluster = "id", var.time = "time") TOEPLITZ(gender~time, var.cluster = "id", var.time = "time") TOEPLITZ(list(~time,~time), var.cluster = "id", var.time = "time") TOEPLITZ(list(gender~time,gender~time), var.cluster = "id", var.time = "time") ## with covariates TOEPLITZ(~side, var.cluster = "id", type = "UN", var.time = "time", add.time = TRUE) TOEPLITZ(~side, var.cluster = "id", type = "LAG", var.time = "time", add.time = TRUE) TOEPLITZ(~side, var.cluster = "id", type = "CS", var.time = "time", add.time = TRUE) TOEPLITZ(gender~side, var.cluster = "id", type = "CS", var.time = "time", add.time = TRUE)
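As a hedged sketch, the resulting structure object can then be supplied to lmm; the dataset dL and its columns Y, time, and id are hypothetical:
e.lmm <- lmm(Y ~ time, repetition = ~time|id, data = dL,
             structure = TOEPLITZ(~time, var.cluster = "id", var.time = "time"))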
Variance-covariance structure where the residuals have time-specific variance and correlation. Can be stratified on a categorical variable.
UN(formula, var.cluster, var.time, add.time)
UN(formula, var.cluster, var.time, add.time)
formula |
formula indicating on which variable to stratify the covariance structure. |
var.cluster |
[character] cluster variable. |
var.time |
[character] time variable. |
add.time |
Should the default formula (i.e. when |
A typical formula would be ~1
, indicating a time-specific variance parameter and a correlation parameter specific to each pair of times.
An object of class UN
that can be passed to the argument structure
of the lmm
function.
UN(NULL, var.cluster = "id", var.time = "time", add.time = TRUE) UN(~gender, var.cluster = "id", var.time = "time", add.time = TRUE) UN(gender ~ 1, var.cluster = "id", var.time = "time", add.time = TRUE) UN(list(~gender,~1), var.cluster = "id", var.time = "time", add.time = TRUE) UN(list(gender~age,gender~1), var.cluster = "id", var.time = "time", add.time = TRUE)
UN(NULL, var.cluster = "id", var.time = "time", add.time = TRUE) UN(~gender, var.cluster = "id", var.time = "time", add.time = TRUE) UN(gender ~ 1, var.cluster = "id", var.time = "time", add.time = TRUE) UN(list(~gender,~1), var.cluster = "id", var.time = "time", add.time = TRUE) UN(list(gender~age,gender~1), var.cluster = "id", var.time = "time", add.time = TRUE)
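As a hedged sketch, the resulting structure object can then be supplied to lmm; the dataset dL and its columns Y, gender, time, and id are hypothetical:
e.lmm <- lmm(Y ~ gender*time, repetition = ~time|id, data = dL,
             structure = UN(~gender, var.cluster = "id", var.time = "time", add.time = TRUE))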
Extract the variables used by the linear mixed model.
## S3 method for class 'lmm' variable.names(object, effects = "all", original = TRUE, simplify = TRUE, ...)
## S3 method for class 'lmm' variable.names(object, effects = "all", original = TRUE, simplify = TRUE, ...)
object |
a lmm object. |
effects |
[character] Should all variables be output ( |
original |
[logical] Should only the variables present in the original data be output?
When |
simplify |
[logical] Should the list be converted into a vector if a single |
... |
not used. For compatibility with the generic function |
A list of character vectors or a character vector.
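For illustration, a minimal sketch (data simulated with sampleRem; the model specification is an assumption for the example):
set.seed(10)
dL <- sampleRem(100, n.times = 3, format = "long")
e.lmm <- lmm(Y ~ X1 + X2, repetition = ~visit|id, structure = "CS", data = dL)
variable.names(e.lmm)   ## variables used by the fitted model (outcome, covariates, cluster, time)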
Extract the variables used to obtain multiple linear mixed models.
## S3 method for class 'mlmm' variable.names(object, effects = "all", original = TRUE, simplify = TRUE, ...)
## S3 method for class 'mlmm' variable.names(object, effects = "all", original = TRUE, simplify = TRUE, ...)
object |
an mlmm object. |
effects |
[character] Should all variables be output ( |
original |
[logical] Should only the variables present in the original data be output?
When |
simplify |
[logical] Should the list be converted into a vector if a single |
... |
not used. For compatibility with the generic function |
A list of character vectors or a character vector.
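For illustration, a minimal sketch mirroring the mlmm example used elsewhere in this documentation:
set.seed(10)
dL <- sampleRem(250, n.times = 3, format = "long")
e.mlmm <- mlmm(Y ~ X1 + X2 + X6, repetition = ~visit|id, data = dL, by = "X4", effects = "X1=0", structure = "CS")
variable.names(e.mlmm)   ## variables used across the group-specific fits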
Data from the VAS Study, a randomized controlled clinical trial assessing the healing effect of topical zinc sulfate on epidermal wounds. The study includes 30 healthy volunteers with induced wounds on each buttock, which were subsequently treated with a different treatment for each wound. The VAS-score (pain sensation on a 0-100 mm visual analogue scale) was then assessed after each treatment application and summarized by the area under the curve. This dataset is in the long format (i.e. one line per measurement).
id
: patient identifier.
group
: treatment group to which the patient has been randomized.
treat.num
:
vas
: VAS-score relative to the wound.
treatment
: Treatment used on the wound.
A: active treatment (zinc shower gel),
B: placebo treatment (shower gel without zinc),
C: control treatment (demineralized water).
data(vasscoresL)
data(vasscoresL)
TODO
Data from the VAS Study, a randomized controlled clinical trial assessing the healing effect of topical zinc sulfate on epidermal wounds. The study includes 30 healthy volunteers with induced wounds on each buttock, which were subsequently treated with a different treatment for each wound. The VAS-score (pain sensation on a 0-100 mm visual analogue scale) was then assessed after each treatment application and summarized by the area under the curve. This dataset is in the wide format (i.e. one line per patient).
id
: patient identifier.
group
: treatment group to which the patient has been randomized.
vasA
: VAS-score when using a zinc shower gel.
vasB
: VAS-score when using a placebo treatment (shower gel without zinc).
vasC
: VAS-score when using a control treatment with demineralized water.
data(vasscoresW)
data(vasscoresW)
TODO
Extract the variance-covariance matrix of the model coefficients of a linear mixed model.
## S3 method for class 'lmm' vcov( object, effects = NULL, robust = FALSE, df = FALSE, strata = NULL, newdata = NULL, p = NULL, type.information = NULL, transform.sigma = NULL, transform.k = NULL, transform.rho = NULL, transform.names = TRUE, ... )
## S3 method for class 'lmm' vcov( object, effects = NULL, robust = FALSE, df = FALSE, strata = NULL, newdata = NULL, p = NULL, type.information = NULL, transform.sigma = NULL, transform.k = NULL, transform.rho = NULL, transform.names = TRUE, ... )
object |
a lmm object. |
effects |
[character vector] Should the variance-covariance matrix for all coefficients be output ( |
robust |
[logical] Should robust standard errors (aka sandwich estimator) be output instead of the model-based standard errors?
Can also be |
df |
[logical] Should degrees-of-freedom, computed using Satterthwaite approximation, for the model parameters be output. |
strata |
[character vector] When not |
newdata |
[data.frame] dataset relative to which the information should be computed. Only relevant if different from the dataset used to fit the model. |
p |
[numeric vector] value of the model coefficients at which to evaluate the variance-covariance matrix. Only relevant if different from the fitted values. |
type.information |
[character] Should the expected information be used (i.e. minus the expected second derivative) or the observed information (i.e. minus the second derivative). |
transform.sigma |
[character] Transformation used on the variance coefficient for the reference level. One of |
transform.k |
[character] Transformation used on the variance coefficients relative to the other levels. One of |
transform.rho |
[character] Transformation used on the correlation coefficients. One of |
transform.names |
[logical] Should the name of the coefficients be updated to reflect the transformation that has been used? |
... |
Not used. For compatibility with the generic method. |
For details about the arguments transform.sigma, transform.k, transform.rho, see the documentation of the coef.lmm function.
A matrix with one row and one column per parameter.
df=TRUE
: with an attribute "df"
containing a numeric vector with one element per parameter.
effects
includes "gradient"
: with an attribute "gradient"
containing a 3-dimensional array where each dimension equals the number of parameters.
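For illustration, a minimal sketch (data simulated with sampleRem; the comments describe the intended defaults and are assumptions, not guarantees):
set.seed(10)
dL <- sampleRem(100, n.times = 3, format = "long")
e.lmm <- lmm(Y ~ X1 + X2, repetition = ~visit|id, structure = "CS", data = dL)
vcov(e.lmm)                     ## model-based variance-covariance of the mean coefficients
vcov(e.lmm, effects = "all")    ## including variance and correlation coefficients
vcov(e.lmm, robust = TRUE)      ## sandwich (robust) estimator
vcov(e.lmm, df = TRUE)          ## adds a "df" attribute (Satterthwaite approximation)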
Extract the variance-covariance matrix of the model coefficients from multiple linear mixed models.
## S3 method for class 'mlmm' vcov( object, effects = "contrast", p = NULL, newdata = NULL, ordering = "by", robust = object$args$robust, type.information = object$object$type.information, transform.sigma = NULL, transform.k = NULL, transform.rho = NULL, transform.names = TRUE, simplify = TRUE, ... )
## S3 method for class 'mlmm' vcov( object, effects = "contrast", p = NULL, newdata = NULL, ordering = "by", robust = object$args$robust, type.information = object$object$type.information, transform.sigma = NULL, transform.k = NULL, transform.rho = NULL, transform.names = TRUE, simplify = TRUE, ... )
object |
an mlmm object. |
effects |
[character] By default will output the estimates relative to the hypotheses being tested ( |
p |
[list of numeric vector] list of model coefficients to be used. Only relevant if different from the fitted values. |
newdata |
[NULL] Not used. For compatibility with the generic method. |
ordering |
[character] should the output be ordered by type of parameter ( |
robust |
[logical] Should robust standard errors (aka sandwich estimator) be output instead of the model-based standard errors?
Can also be |
type.information |
[character] Should the expected information be used (i.e. minus the expected second derivative) or the observed information (i.e. minus the second derivative). |
transform.sigma |
[character] Transformation used on the variance coefficient for the reference level. One of |
transform.k |
[character] Transformation used on the variance coefficients relative to the other levels. One of |
transform.rho |
[character] Transformation used on the correlation coefficients. One of |
transform.names |
[logical] Should the name of the coefficients be updated to reflect the transformation that has been used? |
simplify |
[logical] Should the column names contain the level of the by variable?
Not relevant when |
... |
Not used. For compatibility with the generic method. |
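For illustration, a minimal sketch mirroring the mlmm example used elsewhere in this documentation:
set.seed(10)
dL <- sampleRem(250, n.times = 3, format = "long")
e.mlmm <- mlmm(Y ~ X1 + X2 + X6, repetition = ~visit|id, data = dL, by = "X4", effects = "X1=0", structure = "CS")
vcov(e.mlmm)                  ## variance-covariance of the tested contrasts (default effects = "contrast")
vcov(e.mlmm, robust = TRUE)   ## sandwich (robust) estimator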
Extract the variance-covariance matrix from Wald tests applied to linear mixed models.
## S3 method for class 'rbindWald_lmm' vcov( object, effects = "Wald", method = "none", df = FALSE, ordering = NULL, transform.names = TRUE, simplify = TRUE, ... )
## S3 method for class 'rbindWald_lmm' vcov( object, effects = "Wald", method = "none", df = FALSE, ordering = NULL, transform.names = TRUE, simplify = TRUE, ... )
object |
an rbindWald_lmm object. |
effects |
[character] should the linear contrasts involved in the Wald test be output ( |
method |
[character vector] type of adjustment for multiple comparisons across the linear contrasts (one of |
df |
[logical] Should degrees-of-freedom, computed using Satterthwaite approximation, for the model parameters be output? |
ordering |
[character] should the output be ordered by name of the linear contrast ( |
transform.names |
[logical] Should the name of the coefficients be updated to reflect the transformation that has been used?
Only relevant when |
simplify |
[logical] should the output be a vector or a list with one element specific to each possible ordering (i.e. contrast or model). |
... |
Not used. For compatibility with the generic method. |
A matrix with one row and one column per parameter.
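A hypothetical sketch, assuming e.rbind is an rbindWald_lmm object obtained by combining Wald tests from several lmm fits (the construction shown in the comment is indicative only):
## e.g. e.rbind <- rbind(anova(e.lmm1, effects = "X1=0"), anova(e.lmm2, effects = "X1=0"))
vcov(e.rbind)              ## variance-covariance of the linear contrasts
vcov(e.rbind, df = TRUE)   ## adds a "df" attribute (Satterthwaite approximation)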
Extract the variance-covariance matrix of the linear contrasts involved in the Wald test.
## S3 method for class 'Wald_lmm' vcov(object, effects = "Wald", df = FALSE, transform.names = TRUE, ...)
## S3 method for class 'Wald_lmm' vcov(object, effects = "Wald", df = FALSE, transform.names = TRUE, ...)
object |
a Wald_lmm object. |
effects |
[character vector] should the variance-covariance of the linear contrasts involved in the Wald test be output ( |
df |
[logical] Should degrees-of-freedom, computed using Satterthwaite approximation, for the model parameters be output? |
transform.names |
[logical] Should the name of the coefficients be updated to reflect the transformation that has been used? |
... |
Not used. For compatibility with the generic method. |
A matrix with one row and one column per parameter.
df=TRUE
: with an attribute "df"
containing a numeric vector with one element per parameter.
effects
includes "gradient"
: with an attribute "gradient"
containing a 3-dimensional array where each dimension equals the number of parameters.
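For illustration, a minimal sketch (data simulated with sampleRem; the tested hypothesis is an assumption for the example):
set.seed(10)
dL <- sampleRem(100, n.times = 3, format = "long")
e.lmm <- lmm(Y ~ X1 + X2, repetition = ~visit|id, structure = "CS", data = dL)
e.wald <- anova(e.lmm, effects = "X1=0")   ## Wald test of a linear hypothesis on the mean structure
vcov(e.wald)                               ## variance-covariance of the linear contrast
vcov(e.wald, df = TRUE)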
Data from the vitamin Study, a randomized study where the growth of guinea pigs was monitored before and after intake of vitamin E or placebo. The weight of each guinea pig was recorded at the end of weeks 1, 3, 4, 5, 6, and 7. Vitamin E or placebo was given at the beginning of week 5. This dataset is in the long format (i.e. one line per measurement).
group
: treatment group: vitamin or placebo.
animal
: identifier.
weight1
: weight (in g) of the pig at the end of week 1 (before treatment).
weight3
: weight (in g) of the pig at the end of week 3 (before treatment).
weight4
: weight (in g) of the pig at the end of week 4 (before treatment).
weight5
: weight (in g) of the pig at the end of week 5 (after treatment).
weight6
: weight (in g) of the pig at the end of week 6 (after treatment).
weight7
: weight (in g) of the pig at the end of week 7 (after treatment).
data(vitaminL)
data(vitaminL)
Crowder and Hand (1990, p. 27) Analysis of Repeated Measures.
Data from the vitamin Study, a randomized study where the growth of guinea pigs was monitored before and after intake of vitamin E or placebo. The weight of each guinea pig was recorded at the end of weeks 1, 3, 4, 5, 6, and 7. Vitamin E or placebo was given at the beginning of week 5. This dataset is in the wide format (i.e. one line per animal).
group
: treatment group: vitamin or placebo.
animal
: identifier.
weight1
: weight (in g) of the pig at the end of week 1 (before treatment).
weight3
: weight (in g) of the pig at the end of week 3 (before treatment).
weight4
: weight (in g) of the pig at the end of week 4 (before treatment).
weight5
: weight (in g) of the pig at the end of week 5 (after treatment).
weight6
: weight (in g) of the pig at the end of week 6 (after treatment).
weight7
: weight (in g) of the pig at the end of week 7 (after treatment).
data(vitaminW)
data(vitaminW)
TODO
Extract weights used to pool estimates.
## S3 method for class 'rbindWald_lmm' weights(object, method, effects = "Wald", transform.names = TRUE, ...)
## S3 method for class 'rbindWald_lmm' weights(object, method, effects = "Wald", transform.names = TRUE, ...)
object |
an rbindWald_lmm object. |
method |
[character] method for combining the estimates, one of |
effects |
[character] should the weights relative to the Wald test be output ( |
transform.names |
[logical] should the name of the coefficients be updated to reflect the transformation that has been used? |
... |
Not used. For compatibility with the generic method. |
A numeric vector whose elements sum to 1.
set.seed(10) dL <- sampleRem(250, n.times = 3, format = "long") e.mlmm <- mlmm(Y~X1+X2+X6, repetition = ~visit|id, data = dL, by = "X4", effects = "X1=0", structure = "CS") weights(e.mlmm, method = c("average","pool.se","pool.gls"))
set.seed(10) dL <- sampleRem(250, n.times = 3, format = "long") e.mlmm <- mlmm(Y~X1+X2+X6, repetition = ~visit|id, data = dL, by = "X4", effects = "X1=0", structure = "CS") weights(e.mlmm, method = c("average","pool.se","pool.gls"))