
Why Some Intervention Research Studies are Flawed

This page explains some of the flaws found in research studies that are designed to report on the effectiveness of interventions for autistic people.


The Flawless Study

There is no such thing as a flawless research study, but the best research studies are designed to reduce any flaws to a minimum.
 
Good research studies are based on the scientific method: the use of observable, empirical, measurable evidence. Good research studies also use methodologies that are designed to reduce any errors in how that evidence is generated, evaluated and reported.

Reducing the Flaws

There are several things that researchers can do to reduce the number of flaws in a study. These include:
  • Seeking ethical approval for the study from an appropriate body, usually a university or similar organisation. This ensures that someone else has checked what they are planning to do before they do it.
  • Registering the study with an appropriate organisation, such as ClinicalTrials.gov, before the study commences. This helps to ensure that the researchers stick to the original study protocol rather than manipulating the study to their own advantage once it has begun or once it has completed.
  • Following established guidance on how to run and report on specific types of research study. This helps to ensure that they follow established best practice for that type of study.
  • Publishing information about their study in an authoritative and trusted peer-reviewed journal. This helps to ensure that they have followed best practice, reported everything that they should have reported and that this has been checked by someone outside of their own organisation.
Doing all of this may help to remove some of the worst flaws but even this is no guarantee that some flaws will not slip through.



Researchers

The researchers who carry out the study may not be completely independent of the intervention.

  • Some researchers may have invented or developed the intervention, investing considerable time and effort in making it work. This means that they may be less willing to believe or report any evidence that suggests it does not work.
  • Some researchers may stand to gain financially if they claim that an intervention is successful. This may be because they manufacture or supply the intervention or because they are funded by the manufacturer / supplier of the intervention.

The ethics board which gives approval for the study to run may not be completely independent of the intervention being studied. This means that the members of that board may have a vested interest in seeing the study take place and seeing it report successful results.

If only one researcher or one group of researchers has investigated an intervention the results are likely to be less credible than if the same intervention has been investigated by other research groups based elsewhere. This is because the first researchers may have unduly influenced the results of the study, sometimes without even realising it.



Hypothesis

The hypothesis is the idea behind the use of the intervention, that is, the explanation of why the intervention is supposed to help autistic people. The way that some researchers formulate or report their hypothesis can sometimes lead to flaws.

  • Some studies do not set out a clear and verifiable hypothesis, that is, a testable explanation of what the intervention is supposed to achieve and why. If the hypothesis is not clear and verifiable it may be difficult to prove or disprove.
  • Some studies do not set out a socially valid hypothesis, that is, a hypothesis of how the intervention is likely to make a practical difference to autistic people in the real world.
  • Some researchers may have a biased hypothesis. This means that they are setting out to prove something they already think is right rather than trying to objectively determine if the intervention works or not.
  • Most researchers set out to demonstrate the validity of their hypothesis by citing other studies which examine that hypothesis. However, some researchers cite only those studies which support their hypothesis or downplay those studies which do not support it.



Study Types and Bias

There are many different kinds of study and each type has its own limitations. For example,

  • An observational study (one in which the researchers simply observe any effects of an intervention) is more likely to be flawed than an experimental study (one in which the researchers manipulate or control the intervention to see what effects it produces).
  • A retrospective study (one in which the researchers decide on the outcome measures and/or take the measures after the intervention happens) is more likely to be flawed than a prospective study (one in which the researchers decide these things beforehand).

Single-case designs

Single-case designs are studies in which there is a single group of participants rather than two or more groups of participants. The advantage of single-case designs is that they are relatively simple and straightforward to run and also much cheaper to run than group studies. The disadvantage of single-case designs is that they all use relatively weak methodologies.

  • Weak single-case designs include withdrawal or reversal designs (such as A-B-A, A-B-A-B, A-B-A-C-A-D), multiple baseline designs, alternating treatment designs and changing criterion designs.
  • Very weak single-case designs include two-phase designs (such as A-B), one-phase designs (such as B-phase training designs), pre-post designs and case descriptions.

Group studies

Group studies compare two or more groups of participants with each other. The advantage of group studies is that they allow the researchers to compare the effect on the group which received the intervention (the experimental group) with the effect on the group that did not (the control group). The disadvantage of group studies is that they are relatively complex to run and also much more expensive to run than single-case designs.

Randomised, blinded, controlled studies are an especially strong type of group study.

  • Randomisation: the process by which the participants are randomly allocated to the experimental and control groups.
  • Blinding: the process by which participants and/or the researchers are kept blind (unaware) of which participants are in which group.
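As an illustration (not drawn from any particular study), simple randomisation can be sketched in a few lines of Python; the participant list and function name here are hypothetical, and real trials normally use dedicated trial-management software rather than ad hoc code:

```python
import random

def randomise(participants, seed=None):
    """Randomly allocate participants into two equal groups.

    A minimal sketch of simple randomisation. Real trials often use
    stratified or block randomisation instead, but the principle is
    the same: chance, not the researchers, decides the allocation.
    """
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # experimental, control

# 20 hypothetical participant IDs
experimental, control = randomise(range(20), seed=42)
```

Because neither the researchers nor the participants choose the allocation, selection bias is reduced; blinding then hides which group each participant ended up in.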

Bias

There are a number of ways in which a study can become biased.

  • Selection bias may occur in studies when the researchers (or sometimes the participants themselves) decide which participants should go into which groups or into which treatment periods or blocks. This means that the participants and/or the interventions are not truly randomised, which could affect the outcomes.
  • Detection bias may occur in studies when the researchers and/or the participants are able to work out which participants are in which group or in which treatment periods or blocks. This could affect how the researchers or the participants behave, which could affect the outcomes.
  • Performance bias may occur in studies when the way those studies are run introduces differences between the groups or between the treatment periods or blocks that have nothing to do with the actual intervention. For example, if one group of participants gets more attention from the research team, they may behave differently, which could affect the outcomes.
  • Attrition bias may occur in studies where there is an unequal loss of participants from the different groups within the study or if the participants do not complete all of the treatment periods or blocks. This could affect the statistical interpretation of the outcomes.



Participants

Some researchers select participants in ways that could influence the outcomes of their studies or do not adequately describe how the participants were selected. For example,

Numbers of participants

  • Some researchers do not explain how they decided on the number of participants to include and/or how those participants were recruited to the study. This means that there might be some hidden bias in the participants. For example, they might all have previously told the researchers that they believe the intervention is likely to work.
  • Some studies have a small number of participants. The smaller the number of participants, the less statistically significant the result is likely to be.
  • Some studies may have a large number of participants but only have a small number of autistic participants. This means that the results for the autistic participants are less likely to be statistically significant.
  • Some studies have high dropout rates, that is, a large number of participants who did not complete the intervention for some reason. A high dropout rate (30% or more) means that the results are less likely to be credible.
  • A high dropout rate could also mean that some of the participants dropped out as a direct result of the intervention. This could mean that the intervention was not effective for those participants.
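To illustrate why small samples are a problem (a sketch, not taken from this page), the simulation below draws many samples of different sizes from the same population: the means of small samples vary far more, so a genuine effect is harder to distinguish from chance. All names and numbers here are hypothetical:

```python
import random
import statistics

def spread_of_sample_means(n, trials=2000, seed=1):
    """Draw `trials` samples of size n from a standard normal
    population and return the standard deviation of their means.
    The wider this spread, the noisier any single study of size n."""
    rng = random.Random(seed)
    means = [statistics.fmean(rng.gauss(0, 1) for _ in range(n))
             for _ in range(trials)]
    return statistics.stdev(means)

small_study = spread_of_sample_means(5)    # e.g. 5 participants
large_study = spread_of_sample_means(50)   # e.g. 50 participants
```

The spread for five participants comes out noticeably larger than for fifty, which is the statistical reason reviewers distrust very small studies.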

Characteristics of participants

  • Some studies do not independently verify the diagnosis of autism using established diagnostic tools like the ADOS or ICD-10. This means that some of the participants may not actually be autistic.
  • Some studies are limited to a very specific group of autistic participants (such as children, males, or individuals with a high IQ). This means that we do not know if the intervention works for individuals outside that group.
  • Some studies do not provide enough details about the participants in the study (such as whether any participants have other conditions like ADHD). This means that we do not know if the intervention is likely to work for other individuals who share those specific characteristics.
  • Some studies do not provide enough details about anyone who was excluded from the study (such as people with specific medical conditions like epilepsy). This means that we do not know if any outcomes from the study would not apply to individuals with those specific characteristics.



Interventions

Some researchers deliver interventions in ways that could influence the outcomes of the study or do not adequately describe how the interventions were delivered. For example,

Protocols

  • Some studies do not state whether they followed the agreed protocol (set out in a manual or via written instructions) for the intervention and/or how well they followed it. This means we cannot be sure if the intervention was delivered in the way that it was intended and in the same way as in any other studies of the same intervention.
  • For some interventions, such as weighted items or vitamin supplements, there is no universally agreed protocol. This means we cannot be sure if the intervention is being delivered in the same way as in other studies, which means it can be difficult to compare them.

Administration

  • Some studies do not provide enough details about the intervention and how it is delivered. This means it can be more difficult for other researchers to understand what is actually being delivered, to replicate the study and to check the accuracy of the findings.
  • Some studies do not provide enough information about the people who provided the intervention, such as how many of them there were, how often each of them provided the intervention, whether they were qualified to provide the intervention etc. This means we cannot be sure if anything about those people is important to how the intervention works or does not work.
  • Some studies do not provide enough information about the setting (the location) where the intervention was delivered. This means we cannot be sure if anything about the setting is important to how the intervention works or does not work.
  • Some studies use a limited amount of the intervention, such as a single dose of medication or a single training session. This limited amount of the intervention may not be enough to demonstrate any effects.
  • Some studies deliver interventions in ways that may be difficult or inconvenient to reproduce in real-world settings. For example, immunoglobulin is sometimes delivered intravenously via a pump over a period of several hours in a hospital and at considerable cost.

Timing

  • Some studies do not provide enough information about the length, frequency, intensity or dosage of the intervention. This means we cannot be sure if any of these things are important to how the intervention works.
  • Some studies last for relatively short periods, such as a few hours, days or weeks. This means that there may not have been time for any effects of the intervention to appear.
  • Some studies last for relatively long periods, such as a year. This means that any effects could have been caused by a range of other factors, including the normal growth and development of the participants.
  • Some studies do not provide a run-in period, a short amount of time to allow the participants to get used to the intervention before evaluating any effects it may have. This means that any effects may be due to the novelty of the intervention rather than to the intervention itself.
  • Some crossover design studies (where the same group of participants switches from one intervention to another during the study) do not include a washout period (a period in which no intervention is administered) or do not include a washout period that is long enough. This means that the participants who receive the second intervention could still be showing the effects of the first intervention.

Other Activities

  • Some studies may not take account of things happening at the same time as the intervention which are not considered to be part of the intervention but which could affect the outcomes. For example, in dolphin therapy, any outcomes could be due to a range of things other than the dolphins (such as being in the water, swimming outdoors, interacting with therapists, being on holiday with the family).
  • Some studies may not record information about other interventions that the participants may be receiving that could affect the result. For example, the participants could be taking one or more medications or could be receiving speech and language therapy.
  • Some studies do not provide enough details about the things that are excluded as part of the intervention. For example, some special diets require other foodstuffs to be removed from the diet, which could affect the result, but these other foodstuffs may not be described.



Comparators

Some group studies do not compare like with like, or do not provide sufficient detail about any differences between the experimental and control groups and the interventions each is receiving. For example,

  • Some group studies have unequal numbers of participants in the control group and the experimental group. This means that any differences in outcomes between the groups may be due to the difference in numbers. It can also mean that any reported outcomes may not be statistically valid.
  • Some studies may have participants with different characteristics within the different groups or may not provide enough details about the participants in the control group (such as their age, gender, IQ etc.). This means that any differences in outcomes between the groups may not be due to the interventions they received but to the differences between the groups.
  • Some studies do not provide detailed information about the intervention received by the control group. This means it is more difficult to compare the effects of the intervention on the experimental group with the intervention received by the control group.



Outcomes and Outcome Measures

Some researchers use outcome measures in ways that could influence the outcomes of the study or do not adequately describe those outcome measures. For example,

  • Some studies may use a very limited number of outcome measures or use none at all. This means that we can't really be sure what happened in the study because we do not have a full picture of all of the outcomes from the study.
  • Some studies may use outcome measures that are not widely recognised or accepted by other researchers. This means that we cannot be sure how robust (valid and reliable) the reported outcomes actually are.
  • Some studies may use highly subjective outcome measures, such as feedback from parents. This means that we may not be able to make an objective assessment of any reported outcomes.
  • Some studies may use tools which are not really designed to produce meaningful outcome data, such as some diagnostic tools. This means that we cannot be sure how meaningful the reported outcomes actually are.
  • Some studies provide information on physiological changes, such as levels of glucose in the blood, but do not provide information on behavioural outcomes. This means it is difficult to know if the intervention was effective in changing behaviours.
  • Some studies provide information on behavioural outcomes but do not provide information on physiological changes, such as levels of glucose in the blood. This may be important information for interventions which are believed to work by bringing about those physiological changes.
  • Some studies do not use outcome measures which record any adverse or harmful effects. This means we cannot be sure if any adverse or harmful effects took place or the nature and extent of those effects.
  • Some studies do not measure outcomes in the medium term (three to six months) or long term (longer than six months). This means that we can't judge the effect of the intervention in the medium term or in the long term.
  • Some studies do not use blinded assessors, that is, assessors who are unaware of which intervention or which treatment period or block a participant received. This means that those assessors may unconsciously rate them differently.
  • Some studies do not ask the participants in the study (and/or their carers) what they think about the intervention being studied and its effect on them. This means that we may not have a full picture of the intervention and its effect on the people most affected by it.



Data Analysis and Presentation

Some researchers do not use the most appropriate statistical tools and techniques to analyse the data from their studies, do not adequately describe those tools and techniques or do not present that data in the most useful way. For example,

  • Some researchers do not use the most appropriate statistical tools to analyse the data from their studies or do not explain why they have used these particular statistical tools. This means that the reported data may not be statistically valid.
  • Some researchers do not explain any limitations to the statistical tools they use and do not take appropriate measures to counteract those limitations. This means that the reported data may not be statistically valid.
  • Some researchers may not take account of any statistical anomalies that occur in their studies, such as ceiling and floor effects, regression to the mean, or Type I and Type II errors. This means that the reported data may not be statistically valid.
  • Some researchers do not provide an adequate statistical analysis of the data from their studies. For example, they may fail to provide the significance levels and the confidence intervals of any data that they report. This means that the reported data may not be statistically robust.
  • Some researchers do not provide details of the effect size of the interventions in their studies and how this was calculated. This means that we cannot tell what the effect size was or whether it is statistically valid.
  • Some researchers misunderstand or misrepresent the data from their own studies. For example, sometimes researchers will wrongly state that the results of their study were positive when, in fact, the results were limited or mixed.
  • Some studies do not present all of the data from all of the outcome measures or only present the data that suits their hypothesis. This means that we cannot really be sure what happened in the study because we do not have a full picture.
  • Some studies do not present the data from the outcome measures in the most useful way. For example, some studies provide a narrative description of the data but do not provide the data in tables or graphs so that other people can easily interpret them by comparing one set of data against another.
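As an illustration of what reporting an effect size involves (a sketch with made-up data, not figures from any real study), Cohen's d, one widely used effect size measure, divides the difference between the group means by their pooled standard deviation:

```python
import statistics
from math import sqrt

def cohens_d(group_a, group_b):
    """Cohen's d: the difference in group means divided by the
    pooled standard deviation of the two groups."""
    na, nb = len(group_a), len(group_b)
    mean_diff = statistics.mean(group_a) - statistics.mean(group_b)
    pooled_var = ((na - 1) * statistics.variance(group_a) +
                  (nb - 1) * statistics.variance(group_b)) / (na + nb - 2)
    return mean_diff / sqrt(pooled_var)

# Hypothetical outcome scores for two groups of six participants
experimental = [14, 15, 13, 16, 15, 14]
control = [12, 11, 13, 12, 11, 12]
d = cohens_d(experimental, control)  # roughly 2.9 for this made-up data
```

A study that reports only "a significant difference" without a figure like this leaves readers unable to judge whether the difference is large enough to matter in practice.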



Other flaws

There are a number of other flaws that can occur in research studies that could influence the outcomes. For example,

  • Some studies do not appear to involve autistic people and/or their parents or carers in the design, implementation and evaluation of the study. This means that the studies may not have addressed concerns that matter to these groups.
  • Some researchers do not provide details of any limitations within that study. If researchers do not do this, the study is less credible because it suggests that the researchers have not analysed and understood their own study properly or are not prepared to be transparent about any limitations.
  • Some researchers do not provide details of any further research that is required. If researchers do not do this, the study is less credible because it suggests that the researchers have not analysed and understood their own study properly.



Notes

  • A number of organisations (such as the CONSORT Group) have published guidance on best practice in research, which enables you to identify potential flaws in individual research studies. For more information please see the separate page on Publications About Research.
  • Some researchers claim that some types of single-case design studies are experimental in nature because the researcher manipulates or controls the intervention. However, other researchers claim that only randomised, controlled trials really count as proper experimental studies and that non-randomised controlled studies only count as quasi-experimental studies. 




Quick link:
https://www.informationautism.org/research-study-flaws
Updated
16 Jun 2022