Software for Bayesian Statistical Analysis

So far, simple Bayesian models with conjugate priors have been considered. As explained in previous practicals, when the posterior distribution is not available in closed form, MCMC algorithms such as Metropolis-Hastings or Gibbs sampling can be used to obtain samples from it.

In general, posterior distributions are seldom available in closed form and implementing MCMC algorithms for complex models can be technically difficult and very time-consuming.

For this reason, in this Practical we start by looking at a number of R packages to fit Bayesian statistical models. These packages will equip us with tools which can be used to deal with more complex models efficiently, without us having to do a lot of extra coding ourselves. Fitting Bayesian models in R will then be much like fitting non-Bayesian models, using model-fitting functions at the command line, and using standard syntax for model specification.

BayesX and INLA

In particular, the following two software packages will be considered:

  • BayesX

  • INLA

BayesX (http://www.bayesx.org/) implements MCMC methods to obtain samples from the joint posterior and is conveniently accessed from R via the package R2BayesX.

INLA (https://www.r-inla.org/) is based on producing (accurate) approximations to the marginal posterior distributions of the model parameters. Marginal inference is often all that is required, although joint (multivariate) inference with INLA can be difficult or impossible. When marginals suffice, INLA can fit some classes of models in a fraction of the time taken by MCMC.

Both R2BayesX and INLA have a very simple interface to define models using a formula (in the same way as with glm() and gam() functions).

While R2BayesX can be installed from CRAN, INLA is not on CRAN and needs to be installed from a specific repository.

Other Bayesian Software

  • Package MCMCpack in R contains functions such as MCMClogit(), MCMCpoisson() and MCMCprobit() for fitting specific kinds of models; a minimal usage sketch is given after this list.
  • A classic MCMC program is BUGS (Bayesian inference Using Gibbs Sampling), described in Lunn et al. (2000): http://www.mrc-bsu.cam.ac.uk/bugs/winbugs/contents.shtml. BUGS can be used through the graphical interfaces WinBUGS and OpenBUGS, both of which can be called from within R using the packages R2WinBUGS and R2OpenBUGS.
  • JAGS ("Just Another Gibbs Sampler") can also be called from R, using the package R2jags.
  • The NIMBLE package extends the BUGS language and implements MCMC and other methods for Bayesian inference. It is available from https://r-nimble.org and is best run directly from R.
  • The Stan software implements Hamiltonian Monte Carlo and other methods for fitting hierarchical Bayesian models. It is available from https://mc-stan.org.
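
As a brief illustration of the first of these, here is a minimal sketch of fitting a Bayesian logistic regression with MCMCpack; the names y, x1, x2 and data.set are placeholders rather than objects used in this practical:

library(MCMCpack)
# MCMClogit() draws posterior samples and returns them as a coda 'mcmc' object
mcmc.fit <- MCMClogit(y ~ x1 + x2, data = data.set,
                      burnin = 1000, mcmc = 10000)
summary(mcmc.fit)   # posterior summaries of the regression coefficients
plot(mcmc.fit)      # trace and density plots of the samples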

Bayesian Logistic Regression

Model Formulation

To summarise the model formulation presented in the lecture, given a response variable \(Y_i\) representing the number of successes out of \(n_i\) trials, each with success probability \(\theta_i\), we have

  • \((Y_i \mid \theta_i) \sim \mbox{Bi}(n_i, \theta_i)\), independently for \(i=1, \ldots, m\), with \[\begin{align*} \mbox{logit}(\theta_i) & =\eta_i \nonumber\\ \eta_{i} & =\beta_0+\beta_1 x_{i1}+\ldots+\beta_p x_{ip}=\boldsymbol x_i\boldsymbol \beta\nonumber \end{align*}\] assuming the logit link function and with linear predictor \(\eta_{i}\); the logit link is illustrated numerically below.
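
As a quick numerical check of what the logit link does, the base R functions qlogis() and plogis() compute the logit and its inverse (a small standalone sketch, not part of the model-fitting code):

# Inverse logit: converts a linear predictor eta into a probability theta
eta <- c(-2, 0, 1.5)
theta <- plogis(eta)    # exp(eta) / (1 + exp(eta))
theta
# Logit: recovers the linear predictor from the probability
qlogis(theta)           # log(theta / (1 - theta)), giving eta back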

Example: Fake News

The fake_news data set in the bayesrules package in R contains information about 150 news articles, some real news and some fake news.

In this example, we will try to predict whether or not a news article is fake, given three explanatory variables.

We can use the following code to extract the variables we want from the data set:

library(bayesrules)
fakenews <- fake_news[,c("type","title_has_excl","title_words","negative")]

The response variable type takes values fake or real, which should be self-explanatory. The three explanatory variables are:

  • title_has_excl, whether or not the article title contains an exclamation mark (values TRUE or FALSE);

  • title_words, the number of words in the title (a positive integer); and

  • negative, a sentiment rating, recorded on a continuous scale.

In the exercise to follow, we will examine whether the chance of an article being fake news is related to the three covariates here.

Fitting Bayesian Logistic Regression Models

BayesX performs inference via MCMC and is run from R through the R2BayesX package which, as noted above, makes the syntax for model fitting very similar to that for fitting non-Bayesian models using glm() in R. If you do not yet have it installed, you can install it in the usual way from CRAN.
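
For example:

install.packages("R2BayesX")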

The package must be loaded into R:

library(R2BayesX)
#> Loading required package: BayesXsrc
#> Loading required package: colorspace
#> Loading required package: mgcv
#> Loading required package: nlme
#> This is mgcv 1.8-42. For overview type 'help("mgcv-package")'.

The syntax for fitting a Bayesian Logistic Regression Model with one response variable and three explanatory variables is as follows:

model1 <- bayesx(formula = y ~ x1 + x2 + x3,
                 data = data.set,
                 family = "binomial")

Alternatively, we can obtain a Bayesian model fit without MCMC by using the INLA software, implemented in R via the INLA package. This package is not on CRAN, so if you do not have it installed already you will need to install it via

install.packages("INLA",repos=c(getOption("repos"),INLA="https://inla.r-inla-download.org/R/stable"), dep=TRUE)

After this, the package can be loaded into R:

library(INLA)
#> Loading required package: Matrix
#> Loading required package: foreach
#> Loading required package: parallel
#> Loading required package: sp
#> The legacy packages maptools, rgdal, and rgeos, underpinning the sp package,
#> which was just loaded, will retire in October 2023.
#> Please refer to R-spatial evolution reports for details, especially
#> https://r-spatial.org/r/2023/05/15/evolution4.html.
#> It may be desirable to make the sf package available;
#> package maintainers should consider adding sf to Suggests:.
#> The sp package is now running under evolution status 2
#>      (status 2 uses the sf package in place of rgdal)
#> This is INLA_23.04.24 built 2023-04-24 19:29:22 UTC.
#>  - See www.r-inla.org/contact-us for how to get help.
#>  - To enable PARDISO sparse library; see inla.pardiso()
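
The syntax for fitting the corresponding model with inla() closely mirrors that of bayesx(); a sketch using the same placeholder names (y, x1, x2, x3, data.set) as above, with model2 simply a placeholder for the fitted object:

model2 <- inla(formula = y ~ x1 + x2 + x3,
               data = data.set,
               family = "binomial")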

Model Fitting

Note that the variable title_has_excl will need to be converted to a factor (either in place or as a new variable), for example

fakenews$titlehasexcl <- as.factor(fakenews$title_has_excl)

The functions summary() and confint() produce a summary (including parameter estimates, etc.) and interval estimates for the parameters (interpreted as credible intervals for the Bayesian fits), respectively.
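
For instance, with a fitted object such as model1 from the illustrative call above (a sketch only; model1 is a placeholder rather than a model fitted in this practical):

summary(model1)   # parameter estimates: posterior means, sds and quantiles
confint(model1)   # 2.5% and 97.5% limits for each parameter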

In order to be able to obtain output plots from BayesX, it seems that we need to create a new version of the response variable of type logical:

fakenews$typeFAKE <- fakenews$type == "fake"

Exercises

  • Perform an exploratory assessment of the fake news data set, in particular looking at the possible relationships between the explanatory variables and the fake/real response variable typeFAKE. You may wish to use the R function boxplot() here.

    Solution

    # Is there a link between the fakeness and whether the title has an exclamation mark?
    table(fakenews$title_has_excl, fakenews$typeFAKE)
    #>        
    #>         FALSE TRUE
    #>   FALSE    88   44
    #>   TRUE      2   16
    # For the quantitative variables, look at boxplots on fake vs real
    boxplot(fakenews$title_words ~ fakenews$typeFAKE)

    boxplot(fakenews$negative ~ fakenews$typeFAKE)

  • Fit a Bayesian model in BayesX using the fake news typeFAKE variable as response and the others as covariates. Examine the output; does the model fit well, and is there any evidence that any of the explanatory variables are associated with changes in probability of an article being fake or not?

    Solution

    # Produce the BayesX output
    bayesx.output <- bayesx(formula = typeFAKE ~ titlehasexcl + title_words + negative,
      data = fakenews,
      family = "binomial",
      method = "MCMC",
      iter = 15000,
      burnin = 5000)
    summary(bayesx.output)
    #> Call:
    #> bayesx(formula = typeFAKE ~ titlehasexcl + title_words + negative, 
    #>     data = fakenews, family = "binomial", method = "MCMC", iter = 15000, 
    #>     burnin = 5000)
    #>  
    #> Fixed effects estimation results:
    #> 
    #> Parametric coefficients:
    #>                     Mean      Sd    2.5%     50%   97.5%
    #> (Intercept)      -2.9970  0.7826 -4.5170 -2.9783 -1.4787
    #> titlehasexclTRUE  2.7250  0.8611  1.2617  2.6259  4.5855
    #> title_words       0.1132  0.0602 -0.0026  0.1133  0.2315
    #> negative          0.3283  0.1592  0.0249  0.3238  0.6496
    #>  
    #> N = 150  burnin = 5000  method = MCMC  family = binomial  
    #> iterations = 15000  step = 10
    confint(bayesx.output)
    #>                          2.5%      97.5%
    #> (Intercept)      -4.506552500 -1.4788143
    #> titlehasexclTRUE  1.261901000  4.5792860
    #> title_words      -0.002144448  0.2312184
    #> negative          0.025499198  0.6494650
  • Produce plots of the MCMC sample traces and the estimated posterior distributions for the model parameters. Does it seem like convergence has been achieved?

    Solution

    # Traces can be obtained separately
    plot(bayesx.output,which = "coef-samples")

    # And the density plots one-by-one
    par(mfrow=c(2,2))
    plot(density(samples(bayesx.output)[,"titlehasexclTRUE"]),main="Title Has Excl")
    plot(density(samples(bayesx.output)[,"title_words"]),main="Title Words")
    plot(density(samples(bayesx.output)[,"negative"]),main="Negative")

  • Fit the Bayesian model without MCMC using INLA; note that the summary output provides credible intervals for each parameter to help us make inference. Also, in INLA a Binomial response needs to be entered as type integer, so we need another conversion:

    fakenews$typeFAKE.int <- as.integer(fakenews$typeFAKE)

    Solution

    # Fit model - note similarity with bayesx syntax
    inla.output <- inla(formula = typeFAKE.int ~ titlehasexcl + title_words + negative,
      data = fakenews,
      family = "binomial")
    # Summarise output
    summary(inla.output)
    #> 
    #> Call:
    #>    c("inla.core(formula = formula, family = family, contrasts = contrasts, 
    #>    ", " data = data, quantiles = quantiles, E = E, offset = offset, ", " 
    #>    scale = scale, weights = weights, Ntrials = Ntrials, strata = strata, 
    #>    ", " lp.scale = lp.scale, link.covariates = link.covariates, verbose = 
    #>    verbose, ", " lincomb = lincomb, selection = selection, control.compute 
    #>    = control.compute, ", " control.predictor = control.predictor, 
    #>    control.family = control.family, ", " control.inla = control.inla, 
    #>    control.fixed = control.fixed, ", " control.mode = control.mode, 
    #>    control.expert = control.expert, ", " control.hazard = control.hazard, 
    #>    control.lincomb = control.lincomb, ", " control.update = 
    #>    control.update, control.lp.scale = control.lp.scale, ", " 
    #>    control.pardiso = control.pardiso, only.hyperparam = only.hyperparam, 
    #>    ", " inla.call = inla.call, inla.arg = inla.arg, num.threads = 
    #>    num.threads, ", " blas.num.threads = blas.num.threads, keep = keep, 
    #>    working.directory = working.directory, ", " silent = silent, inla.mode 
    #>    = inla.mode, safe = FALSE, debug = debug, ", " .parent.frame = 
    #>    .parent.frame)") 
    #> Time used:
    #>     Pre = 4.19, Running = 7.94, Post = 0.0258, Total = 12.2 
    #> Fixed effects:
    #>                    mean    sd 0.025quant 0.5quant 0.975quant   mode kld
    #> (Intercept)      -3.016 0.761     -4.507   -3.016     -1.525 -3.016   0
    #> titlehasexclTRUE  2.680 0.791      1.131    2.680      4.230  2.680   0
    #> title_words       0.116 0.058      0.002    0.116      0.230  0.116   0
    #> negative          0.328 0.154      0.027    0.328      0.629  0.328   0
    #> 
    #> Marginal log-Likelihood:  -100.78 
    #>  is computed 
    #> Posterior summaries for the linear predictor and the fitted values are computed
    #> (Posterior marginals needs also 'control.compute=list(return.marginals.predictor=TRUE)')
  • Fit a non-Bayesian model using glm() for comparison. How do the model fits compare?

    Solution

    # Fit model - note similarity with bayesx syntax
    glm.output <- glm(formula = typeFAKE ~ titlehasexcl + title_words + negative,
      data = fakenews,
      family = "binomial")
    # Summarise output
    summary(glm.output)
    #> 
    #> Call:
    #> glm(formula = typeFAKE ~ titlehasexcl + title_words + negative, 
    #>     family = "binomial", data = fakenews)
    #> 
    #> Deviance Residuals: 
    #>     Min       1Q   Median       3Q      Max  
    #> -2.6169  -0.8659  -0.6566   1.1050   2.2124  
    #> 
    #> Coefficients:
    #>                  Estimate Std. Error z value Pr(>|z|)    
    #> (Intercept)      -2.91516    0.76096  -3.831 0.000128 ***
    #> titlehasexclTRUE  2.44156    0.79103   3.087 0.002025 ** 
    #> title_words       0.11164    0.05801   1.925 0.054278 .  
    #> negative          0.31527    0.15371   2.051 0.040266 *  
    #> ---
    #> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
    #> 
    #> (Dispersion parameter for binomial family taken to be 1)
    #> 
    #>     Null deviance: 201.90  on 149  degrees of freedom
    #> Residual deviance: 169.36  on 146  degrees of freedom
    #> AIC: 177.36
    #> 
    #> Number of Fisher Scoring iterations: 4
    # Perform ANOVA on each variable in turn
    drop1(glm.output,test="Chisq")
    #> Single term deletions
    #> 
    #> Model:
    #> typeFAKE ~ titlehasexcl + title_words + negative
    #>              Df Deviance    AIC     LRT  Pr(>Chi)    
    #> <none>            169.36 177.36                      
    #> titlehasexcl  1   183.51 189.51 14.1519 0.0001686 ***
    #> title_words   1   173.17 179.17  3.8099 0.0509518 .  
    #> negative      1   173.79 179.79  4.4298 0.0353162 *  
    #> ---
    #> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Bayesian Poisson Regression

Model Formulation

To summarise the model formulation presented in the lecture, given a response variable \(Y_i\) representing a count arising from a process with mean parameter \(\lambda_i\), we have

  • \((Y_i \mid \lambda_i) \sim \mbox{Po}(\lambda_i)\), independently for \(i=1, \ldots, n\), with \[\begin{align*} \mbox{log}(\lambda_i) & =\eta_i \nonumber\\ \eta_{i} & =\beta_0+\beta_1 x_{i1}+\ldots+\beta_p x_{ip}=\boldsymbol x_i\boldsymbol \beta\nonumber \end{align*}\] assuming the log link function and with linear predictor \(\eta_{i}\).

Example: Emergency Room Complaints

For this example we will use the esdcomp data set, which is available in the faraway package. This data set records complaints about emergency room doctors. In particular, data was recorded on 44 doctors working in an emergency service at a hospital to study the factors affecting the number of complaints received.

The response variable that we will use is complaints, an integer count of the number of complaints received. It is expected that the number of complaints will scale with the number of visits (contained in the visits column), so we model the rate of complaints per visit; thus we will need to include a new variable log.visits as an offset.
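
To see why the offset enters on the log scale, note that the expected number of complaints is the rate of complaints per visit multiplied by the number of visits: \[ \lambda_i = \mbox{visits}_i \times \mbox{rate}_i \quad \Rightarrow \quad \log(\lambda_i) = \log(\mbox{visits}_i) + \log(\mbox{rate}_i) = \log(\mbox{visits}_i) + \eta_i, \] so \(\log(\mbox{visits}_i)\) is added to the linear predictor with its coefficient fixed at 1, which is precisely the role of an offset term.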

The three explanatory variables we will use in the analysis are:

  • residency, whether or not the doctor is still in residency training (values N or Y);

  • gender, the gender of the doctor (values F or M); and

  • revenue, dollars per hour earned by the doctor, recorded as an integer.

Our simple aim here is to assess whether the seniority, gender or income of the doctor is linked with the rate of complaints against that doctor.

We can use the following code to extract the data we want without having to load the whole package:

esdcomp <- faraway::esdcomp    

Fitting Bayesian Poisson Regression Models

Again we can use BayesX and INLA to fit this form of Bayesian generalised linear model.

If not loaded already, the packages must be loaded into R:
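
library(R2BayesX)
library(INLA)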

In BayesX, the syntax for fitting a Bayesian Poisson Regression Model with one response variable, three explanatory variables and an offset is as follows:

model1 <- bayesx(formula = y~x1+x2+x3+offset(w),
                 data = data.set,
                 family="poisson")

As noted above we need to include an offset in this analysis; since for a Poisson GLM we will be using a log() link function by default, we must compute the log of the number of visits and include that in the data set esdcomp:

esdcomp$log.visits <- log(esdcomp$visits)    

The offset term in the model is then written

offset(log.visits)

in the call to bayesx.

Exercises

  • Perform an exploratory assessment of the emergency room complaints data set, particularly how the response variable complaints varies with the proposed explanatory variables relative to the number of visits. To do this, create another variable which is the ratio of complaints to visits.

    Solution

    # Compute the ratio
    esdcomp$ratio <- esdcomp$complaints / esdcomp$visits
    # Plot the link with revenue
    plot(esdcomp$revenue,esdcomp$ratio)

    # Use boxplots against residency and gender
    boxplot(esdcomp$ratio ~ esdcomp$residency)

    boxplot(esdcomp$ratio ~ esdcomp$gender)

  • Fit a Bayesian model in BayesX using the complaints variable as Poisson response and the others as covariates. Examine the output; does the model fit well, and is there any evidence that any of the explanatory variables are associated with the rate of complaints?

    Solution

    # Fit model with bayesx - note similarity with glm() syntax
    esdcomp$log.visits <- log(esdcomp$visits)
    bayesx.output <- bayesx(formula = complaints ~ offset(log.visits) + residency + gender + revenue,
      data = esdcomp,
      family = "poisson")
    # Summarise output
    summary(bayesx.output)
    #> Call:
    #> bayesx(formula = complaints ~ offset(log.visits) + residency + 
    #>     gender + revenue, data = esdcomp, family = "poisson")
    #>  
    #> Fixed effects estimation results:
    #> 
    #> Parametric coefficients:
    #>                Mean      Sd    2.5%     50%   97.5%
    #> (Intercept) -0.4030  0.6855 -1.7370 -0.3962  0.9549
    #> residencyY  -0.5626  0.1890 -0.9346 -0.5606 -0.1851
    #> genderM      0.1427  0.2121 -0.2665  0.1388  0.5748
    #> revenue      0.0065  0.0027  0.0011  0.0065  0.0119
    #>  
    #> N = 44  burnin = 2000  method = MCMC  family = poisson  
    #> iterations = 12000  step = 10
  • Produce plots of the MCMC sample traces and the estimated posterior distributions for the model parameters. Does it seem like convergence has been achieved?

    Solution

    # An overall plot of sample traces and density estimates
      #  plot(samples(bayesx.output))
    # Traces can be obtained separately
    plot(bayesx.output,which = "coef-samples")

    # And the density plots one-by-one
    par(mfrow =c(2, 2))
    plot(density(samples(bayesx.output)[, "residencyY"]), main = "Residency")
    plot(density(samples(bayesx.output)[, "genderM"]), main = "Gender")
    plot(density(samples(bayesx.output)[, "revenue"]), main = "Revenue")

  • Fit the Bayesian model without MCMC using INLA; note that the summary output provides credible intervals for each parameter to help us make inference.

    Solution

    # Fit model - note similarity with bayesx syntax
    inla.output <- inla(formula = complaints ~ offset(log.visits) + residency + gender + revenue,
      data = esdcomp,
      family = "poisson")
    # Summarise output
    summary(inla.output)
    #> 
    #> Call:
    #>    c("inla.core(formula = formula, family = family, contrasts = contrasts, 
    #>    ", " data = data, quantiles = quantiles, E = E, offset = offset, ", " 
    #>    scale = scale, weights = weights, Ntrials = Ntrials, strata = strata, 
    #>    ", " lp.scale = lp.scale, link.covariates = link.covariates, verbose = 
    #>    verbose, ", " lincomb = lincomb, selection = selection, control.compute 
    #>    = control.compute, ", " control.predictor = control.predictor, 
    #>    control.family = control.family, ", " control.inla = control.inla, 
    #>    control.fixed = control.fixed, ", " control.mode = control.mode, 
    #>    control.expert = control.expert, ", " control.hazard = control.hazard, 
    #>    control.lincomb = control.lincomb, ", " control.update = 
    #>    control.update, control.lp.scale = control.lp.scale, ", " 
    #>    control.pardiso = control.pardiso, only.hyperparam = only.hyperparam, 
    #>    ", " inla.call = inla.call, inla.arg = inla.arg, num.threads = 
    #>    num.threads, ", " blas.num.threads = blas.num.threads, keep = keep, 
    #>    working.directory = working.directory, ", " silent = silent, inla.mode 
    #>    = inla.mode, safe = FALSE, debug = debug, ", " .parent.frame = 
    #>    .parent.frame)") 
    #> Time used:
    #>     Pre = 3.94, Running = 0.568, Post = 0.0168, Total = 4.52 
    #> Fixed effects:
    #>               mean    sd 0.025quant 0.5quant 0.975quant   mode kld
    #> (Intercept) -7.171 0.688     -8.520   -7.171     -5.822 -7.171   0
    #> residencyY  -0.354 0.191     -0.728   -0.354      0.021 -0.354   0
    #> genderM      0.139 0.214     -0.281    0.139      0.559  0.139   0
    #> revenue      0.002 0.003     -0.003    0.002      0.008  0.002   0
    #> 
    #> Marginal log-Likelihood:  -111.90 
    #>  is computed 
    #> Posterior summaries for the linear predictor and the fitted values are computed
    #> (Posterior marginals needs also 'control.compute=list(return.marginals.predictor=TRUE)')
  • Fit a non-Bayesian model using glm() for comparison. How do the model fits compare?

    Solution

    # Fit model - note similarity with bayesx syntax
    esdcomp$log.visits <- log(esdcomp$visits)
    glm.output <- glm(formula = complaints ~ offset(log.visits) + residency + gender + revenue,
      data = esdcomp,
      family = "poisson")
    # Summarise output
    summary(glm.output)
    #> 
    #> Call:
    #> glm(formula = complaints ~ offset(log.visits) + residency + gender + 
    #>     revenue, family = "poisson", data = esdcomp)
    #> 
    #> Deviance Residuals: 
    #>     Min       1Q   Median       3Q      Max  
    #> -2.3356  -0.9403  -0.4460   0.8726   2.0306  
    #> 
    #> Coefficients:
    #>              Estimate Std. Error z value Pr(>|z|)    
    #> (Intercept) -7.157087   0.688148 -10.401   <2e-16 ***
    #> residencyY  -0.350610   0.191077  -1.835   0.0665 .  
    #> genderM      0.128995   0.214323   0.602   0.5473    
    #> revenue      0.002362   0.002798   0.844   0.3986    
    #> ---
    #> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
    #> 
    #> (Dispersion parameter for poisson family taken to be 1)
    #> 
    #>     Null deviance: 63.435  on 43  degrees of freedom
    #> Residual deviance: 58.698  on 40  degrees of freedom
    #> AIC: 189.48
    #> 
    #> Number of Fisher Scoring iterations: 5
    # Perform ANOVA on each variable in turn
    drop1(glm.output, test = "Chisq")
    #> Single term deletions
    #> 
    #> Model:
    #> complaints ~ offset(log.visits) + residency + gender + revenue
    #>           Df Deviance    AIC    LRT Pr(>Chi)  
    #> <none>         58.698 189.48                  
    #> residency  1   62.128 190.91 3.4303  0.06401 .
    #> gender     1   59.067 187.85 0.3689  0.54361  
    #> revenue    1   59.407 188.19 0.7093  0.39969  
    #> ---
    #> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1