Research results: why do they sometimes seem contradictory?

Health and medical research results can often appear to contradict each other. Different findings can emerge, depending on the design of the study, the selection of participants, and the fundamental questions the research project is trying to answer. We look at some of the reasons here:

  • Some projects that take a "big picture" public health perspective may state that a particular treatment is not effective, because they are considering it in terms of its overall cost and benefit across a large population group. However, the same therapy may have produced very beneficial outcomes for some individual participants.
  • Taking a broad perspective can also lead to misleading results. For example, a review of research on treatments for "urinary incontinence" is too broad to be useful, because the different types of incontinence are distinct conditions, with different symptoms, patient groups and treatments.
  • Research results don't necessarily reflect the behaviour of real people. For example, a particular therapy might be so successful that the participants who consider themselves "cured" drop out before the project is complete. When project results are based on a comparison between the remaining participants (therapy group and control group) at the end of the study period, the therapy can be assessed as relatively unsuccessful, because the "successes" are no longer included.
  • When discussing urinary incontinence, a discussion of research results can be hampered by disagreement about what constitutes "cure" or "improvement". Some researchers accept a patient's own reports of improvement, whereas others regard this as too subjective and only accept results based on objective measures, such as ultrasound reports, EMG muscle tests, or incontinence pads weighed to determine urine leakage. Differences of this sort can create what look like differing results for similar research, when in fact the methodologies are so different that the results are not really comparable at all.
  • In any research project, researchers can bring with them their own priorities. Health research often relates to particular areas of practice expertise, and it's only natural that practitioners will tend to promote research that demonstrates the importance of their own area of practice. Using an example from the world of finance, you would not expect a bank manager to co-author a report suggesting that customers would be better off using credit unions rather than banks.
  • In recent years, the move towards evidence-based health practice has led to a focus on research reviews, which often receive wide media coverage when they are published. Systematic reviews examine large numbers of research papers on a theme to determine whether firm conclusions can be drawn from the whole body of research. Although this sounds like a very scientific approach, the standards for inclusion in a research review can be set very rigidly, and many smaller research projects are excluded, even though their results can offer valuable insight into the issues.
    Large-scale funded research that meets the criteria for inclusion in review papers tends to focus on what health practitioners deliver to patients, with home-based therapies, self-help therapies, and alternative therapies largely excluded. Review conclusions can therefore be compromised by what is excluded.

    Some researchers have been driven to question the whole value of systematic reviews in the area of stress incontinence, saying "systematic reviews of conservative treatments of SUI are not always suitable to generate robust recommendations for practice as they are weak in methodological quality or lack power to produce reliable results". (Latthe PM et al.)

    Finally, it is difficult to report health research accurately. Whatever the medium, there are pressures to make a story "stand out". These pressures can lead to sensationalism, with writers of both media releases and stories suggesting more conflict and disagreement than the story actually merits. Even the most respected media platforms can fall into this trap at times in the effort to grab headlines.

     Whatever the media-grabbing headline, it is always worth keeping an open mind, reading and consulting widely, and making an informed decision for yourself.  

    PEDro is the Physiotherapy Evidence Database.

    Trials and research papers are rated according to the PEDro scale and are subject to inclusion criteria.

    You can calculate confidence intervals for a mean, the difference between two means, a proportion or odds, comparisons of two proportions (the absolute risk reduction, number needed to treat, relative risk reduction and odds ratio), sensitivity, specificity and two-level likelihood ratios using this Excel spreadsheet from PEDro.
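
    As a rough illustration of how a few of these figures fit together, below is a minimal Python sketch. It is not the PEDro spreadsheet itself, and the counts are made up for the example; it computes the absolute risk reduction, number needed to treat, relative risk reduction and odds ratio from a simple two-group trial, plus an approximate 95% confidence interval for the absolute risk reduction using the normal approximation.

    import math

    def two_by_two_stats(events_treated, n_treated, events_control, n_control, z=1.96):
        # Event risks in each group (an "event" here is the unwanted outcome,
        # e.g. still experiencing leakage at follow-up).
        risk_t = events_treated / n_treated
        risk_c = events_control / n_control

        # Absolute risk reduction and an approximate 95% confidence interval
        # (normal approximation; z = 1.96 for 95%).
        arr = risk_c - risk_t
        se_arr = math.sqrt(risk_t * (1 - risk_t) / n_treated
                           + risk_c * (1 - risk_c) / n_control)
        arr_ci = (arr - z * se_arr, arr + z * se_arr)

        # Number needed to treat, relative risk reduction and odds ratio.
        nnt = 1 / arr if arr != 0 else float("inf")
        rrr = arr / risk_c if risk_c != 0 else float("nan")
        odds_ratio = ((events_treated / (n_treated - events_treated))
                      / (events_control / (n_control - events_control)))

        return {"ARR": arr, "ARR 95% CI": arr_ci, "NNT": nnt,
                "RRR": rrr, "odds ratio": odds_ratio}

    # Hypothetical example: 12 of 100 treated patients still have the problem
    # at follow-up, compared with 20 of 100 in the control group.
    for name, value in two_by_two_stats(12, 100, 20, 100).items():
        print(name, value)

    For sensitivity, specificity and likelihood ratios the arithmetic is similar but starts from a two-by-two table of test results against a reference standard; the PEDro spreadsheet covers those cases as well.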