
Equine-Assisted Activities and Therapies and Autism Ranking: Limited positive evidence


There are a number of limitations to all of the research studies on equine-assisted activities and therapies published to date. 

Type of study

Some studies used relatively weak methodologies or did not adequately describe the methodologies they used. For example:

  • Eight of the studies used single-case designs, that is, they did not have a control group of participants who did not receive the intervention.
  • Some of the single-case design studies used especially weak methodologies. For example, Keino et al, 2009 used a retrospective case series and Taylor et al, 2009 used a simple ABA design.
  • Five of the controlled group studies (García-Gómez et al, 2014; Gabriels et al, 2012; Harris and Williams, 2017; Lanning et al, 2014; Jenkins et al, 2013) were non-randomised.
  • Some of the randomised controlled studies (such as Steiner and Kertesz, 2015) did not describe the randomisation process. This means we do not know whether the participants in these studies were actually randomised or whether the randomisation was carried out in an adequate manner.

Participants

Some studies had limited numbers of participants, selected participants in ways that could have biased the outcomes, or did not adequately describe the participants or how they were selected. For example:

  • Five of the single-case design studies (Ajzenman et al, 2013; Holm et al, 2014; Keino et al, 2009; Memishevikj and Hodzhikj, 2010; Taylor et al, 2009) included six or fewer participants. 
  • Two of the controlled trials (García-Gómez et al, 2014; Jenkins et al, 2013) included fewer than ten participants.
  • One of the studies (Bass et al, 2009) had a very high dropout rate (nine of the 34 participants dropped out).
  • Some of the studies included a wide range of participants. For example, Anderson and Meints, 2016 included participants aged from five to sixteen years.
  • Half of the studies did not independently verify the diagnosis of autism using established tools like the ADOS or ICD-10.  
  • Some of the studies did not provide enough details about the participants (such as the ratio of males to females, their intellectual and verbal abilities, their ethnicity, or whether they had any co-occurring conditions that could have affected the outcomes).

Intervention/s

Some of the researchers delivered the interventions in ways that could have biased the outcomes or did not adequately describe how the interventions were delivered. For example:

  • Some of the studies (such as Ajzenman et al, 2013) did not appear to use a recognised protocol for how the intervention should be delivered and, if they did use one, did not provide any information on how well the researchers followed it.
  • Some of the studies did not provide enough information about the intervention and how it was delivered for other researchers to understand what was being delivered and how. For example, Keino et al, 2009 did not provide any information about the frequency and length of the horse riding sessions, and Memishevikj and Hodzhikj, 2010 did not state where the horse riding sessions took place.
  • One of the studies (Steiner and Kertesz, 2015) ran for a relatively short period of time (one month) and it is not clear how many sessions actually took place during that month. 
  • Many of the studies included participants who received one or more other interventions (such as speech and language therapy or a medication) at the same time as they received the equine-assisted activities and therapies. The researchers did not always provide details of what those other interventions were.

Comparators

Some group studies did not compare like with like or did not provide sufficient detail about any differences in the experimental and control groups and the interventions each received. For example:

  • Some of the studies did not provide enough details about the participants in the control group (such as the ratio of males to females, their intellectual and verbal abilities, their ethnicity, or whether they had any co-occurring conditions that could have affected the outcomes).
  • Some of the studies (such as Borgi et al, 2016) included participants in the experimental group who were very different in some respects to the participants in the control group when the study began.
  • Some of the studies did not provide enough details about the intervention received by the control group. For example, García-Gómez et al, 2014 did not state what the control group actually received while Steiner and Kertesz, 2015 stated that the “control group had special pedagogical sessions for autistic children” but did not provide any details about what this involved.
  • Five of the group studies used a wait-list control group (who received nothing) rather than an active control group (who would have received an active intervention of some kind).

Outcomes and Measures

Some researchers did not use the most appropriate outcome measures or did not adequately describe those outcome measures and any limitations they might have had. For example:

  • Four of the studies used only one outcome measure and three of the studies used only two outcome measures.
  • Some of the studies used non-standard, less robust outcome measures. For example, Keino et al, 2009 used the HEIM (Human–Equine Interaction on Mental activity) scale and Taylor et al, 2009 used the Pediatric Volitional Questionnaire.
  • None of the studies appeared to use any kind of adverse effect measure.
  • Only two studies (García-Gómez et al, 2014; Lanning et al, 2014) appeared to use a quality of life measure.
  • Only three of the studies (Borgi et al, 2016; Gabriels et al, 2015; Kern et al, 2011) used blinded assessors, that is, assessors who did not know which participants received which treatment.
  • The blinded assessors in the study by Gabriels et al, 2015 did not assess the participants against the primary outcome measures (the ABC and VABS).

Data Analysis

Some researchers did not use the most appropriate statistical tools and techniques to analyse the data from their studies or did not adequately describe those tools and techniques and any limitations these may have had. For example:

  • Some of the studies did not appear to use any kind of statistical tools or techniques or, if they did, they used very simplistic ones. For example, Memishevikj and Hodzhikj (2010) simply compared the percentage difference between the baseline and endpoint scores on each item on the ATEC for each participant but without any kind of robust analysis.
  • One of the studies (Steiner and Kertesz, 2015) reported using a range of statistical tools and techniques but failed to adequately explain how these were actually used.
  • Some of the studies did not provide any statistical analysis of the outcomes (such as statistical significance) and almost none of the studies provided any data on effect sizes.
  • Most of the studies did not record whether EAAT had any beneficial or harmful effects in the medium term (three to six months) or longer term (six months or more).

Other flaws

Some research studies included other flaws that could have biased the outcomes. For example:

  • None of the studies appeared to involve autistic people or their parents and carers in the design, development and evaluation of those studies.
  • One of the researchers (Gabriels) has received funding from a range of sponsors (including MARS/WALTHAM and the Human–Animal Bond Research Institute Foundation). This means that she may have been biased towards the intervention, however unconsciously.

For a comprehensive list of potential flaws in research studies, please see Why some autism research studies are flawed.

Updated: 17 Jun 2022
Last Review: 01 Dec 2018
Next Review: 01 Sep 2024