Systematic Reviews

Apologies for the delay in getting this post out. Having focused in previous posts on primary research, this post will focus more on secondary research. I aim to introduce what systematic reviews are, why you would do one, how to do one and, most importantly, how to read one. As always, comments, critique and discussion are welcome!

What is a Systematic Review?

A systematic review (SR) is a scientific tool which can be used to summarise, appraise and communicate the results and implications of otherwise unmanageable quantities of research. An SR differs from a traditional review: a review is often narrative, aiming to synthesise the results of some publications on a topic, whilst an SR attempts to comprehensively identify all literature on a given topic. Traditional reviews are subject to criticism because of their high risk of bias due to this freedom of selection, and thus the validity of their findings is often questioned.

Why conduct a Systematic Review?

An SR allows for a comprehensive analysis of the literature on a given topic that takes quality into consideration. In doing so, an SR provides more precise conclusions by synthesising the results of a number of smaller studies; note that SRs provide more precision, not more power, as power relates to primary research. This increased precision in turn allows for enhanced confidence in the effectiveness/reliability/validity of the item under investigation. Notice how I didn’t just say effectiveness? SRs were historically only undertaken with regard to the effectiveness of an intervention; however, it is now recognised that SRs can be performed for almost any primary research design, such as reliability, validity, adherence and risk factors, to name but a few.

Advantages of SR (Greenhalgh 2010)

  •       Explicit methods used limit bias in study selection.
  •       Large amount of information identified and synthesised.
  •       Therefore, conclusions are more reliable & accurate.
  •       Different studies formally compared to establish generalisability and consistency

OR

  •       Reasons for inconsistency identified and new hypothesis formulated.
  •       With meta-analysis, precision of the overall result is increased.

It is clear that there are very good reasons why SRs should be conducted and some strong advantages. For clinicians, they should be our first port of call, and the reason why should be quite obvious – they’ve done all the hard work for you! The literature has been searched, appraised and summarised, and the findings have been synthesised, so you don’t have to!

Not only do SRs answer questions, they also highlight gaps in current knowledge and help formulate new hypotheses. When proposing a new trial, an SR is initially conducted to demonstrate the current limitations of practice or understanding and in turn justify the need for the trial. They are therefore useful not only for clinical purposes but also serve a research role.

SRs sit quite smugly at the top of all hierarchies of evidence and, when done well, quite rightly so. I’ll come on to how to determine whether or not an SR is done well in a short while, but I’ll first highlight a few limitations.

Limitations of SR

  •       Publication Bias (Easterbrook et al. 1991)
  •       Poor quality of primary studies limits the review (Liberati 2001).
  •       Heterogeneity of Primary Studies
  •       Combining statistics is complex in the absence of homogeneity.

Publication bias is a bias with regard to what is likely to be published, among what is available to be published. Studies that demonstrated statistically significant results were more than twice as likely to be published as those finding no difference between the study groups; furthermore, those with statistically significant results were more likely to be published in journals with a high citation impact factor. Impact factor relates to the average number of citations to recent articles published in that journal; the higher the impact factor, the more important that journal is regarded.

Physiotherapy Journal 1.49, BMJ 17.21 – I’ll leave you to make your own conclusions from that…

It has been shown that trials of higher quality are less likely to demonstrate positive outcomes whilst trials of poorer quality are more likely to do so; therefore, if the quality of the primary studies included in the review is poor, this reduces confidence in the results and produces weak conclusions even when the results are consistent.

Now, the authors of the SR should do this for you and demonstrate their appraisal of the quality of the included trials and what impact this has on the strength of the evidence. You may have seen in the past terms like conflicting, limited or moderate evidence. More recent systematic reviews are using the GRADE approach, whereby the authors try to define the risk of bias and in turn take this into consideration when summarising the findings of the review.

Methodology

There are 10 stages to conducting a good systematic review:

  1. Starting out
  2. The protocol
  3. Source identification
  4. Search strategy
  5. Screening the literature
  6. Data extraction
  7. Quality assessment
  8. Synthesis
  9. Results
  10. Summary

1 – Starting Out

As with any research, an SR starts with an idea. A survey of the literature is then required to check that the idea is both necessary and feasible – checking whether it has been done before and whether there are enough studies that can be included in the review.

From this a well-defined research question needs to be created using a PICO/PIOS framework.

–          Population or Problem

–          Intervention/Comparator

–          Outcomes

–          Study Design

2 – The Protocol

As with any research, the method of data collection needs to be determined prior to commencing the research, for both ethical and methodological reasons. If the review protocol isn’t established a priori and in turn followed, bias can be introduced into the review, limiting its conclusions. The protocol should describe the following:

  •       Sources of Literature
  •       Search Strategy
  •       Inclusion/Exclusion Criteria (PIOS)
  •       Screening Strategy
  •       Quality Assessment Tool (Predetermined criteria to be used which should then be operationalised)
  •       Data extraction method
  •       Data synthesis method (Qualitative or Quantitative Synthesis? i.e. Narrative or Meta-Analysis)

With regard to operationalising the method, this should be conducted for the quality assessment of the included primary studies and the data extraction method, as well as for the outcomes, to allow for comparison. To operationalise means to define a variable in a way that makes it measurable. For example, if looking at function in low back pain (LBP), primary studies may have used the Oswestry Disability Index (scored out of 100) or the Roland-Morris Disability Questionnaire (scored out of 24). A predetermined way to operationalise the outcomes could be, for example, to multiply all RMDQ scores by 3.6 to allow comparison between studies.
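To make the idea concrete, here is a minimal sketch of this kind of operationalisation, assuming a naive linear rescaling of each tool onto a common 0–100 scale (the scores and the assumption of linearity are illustrative only, not a validated crosswalk between the two questionnaires):

```python
# Minimal sketch: naively rescaling two disability questionnaires onto a
# common 0-100 scale so their scores can be compared across studies.
# The linear mapping is an illustrative assumption, not a validated
# crosswalk between the tools.

def to_percent(score: float, max_score: float) -> float:
    """Linearly rescale a raw score onto a 0-100 scale."""
    return 100.0 * score / max_score

odi = 42.0    # Oswestry Disability Index, already scored out of 100
rmdq = 10.0   # Roland-Morris Disability Questionnaire, scored out of 24

print(to_percent(odi, 100))  # 42.0
print(to_percent(rmdq, 24))  # ~41.7 - now on the same scale as the ODI
```

Whatever rule is chosen, the point is that it is written down in the protocol before the data are extracted.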

3 – Source Identification

Pragmatically, not all electronic databases need to be searched, only those most pivotal to the review topic. Medline produces the most hits when searching and I therefore recommend its inclusion in most search strategies. The Physiotherapy Evidence Database (PEDro) offers the advantage of studies already being rated for you; however, it only includes intervention studies. If you are undertaking an SR on the reliability of a classification system, clearly PEDro doesn’t need to be searched.

You may read search strategies that have limited the timespan of the review. This shouldn’t really occur unless the intervention was only introduced in a certain year, e.g. if you were reviewing cognitive functional therapy you wouldn’t receive any hits from the 1990s as it is a relatively contemporary management approach. It isn’t good practice to limit the data being included in the review unless there is a technical reason, otherwise it is purely arbitrary; an end date for the search should however be given, e.g. papers until November 2013.

There is some debate with regard to SRs as to whether you can limit the timespan of a review to the year when the last SR was undertaken. An SR looking at the effectiveness of PRP for long bone healing was conducted in 2012; to update this review in 2014, the authors could limit their search to papers published after their original SR. I would argue that they should run the full search again and include all available literature, in keeping with the definition and purpose of the review.

Snowballing, the process whereby the reference lists of included papers are searched, should be undertaken, especially for the more recent literature and other similar reviews; hand searching of specialist journals also helps create a comprehensive search strategy. Should grey literature be sought? The inclusion of grey literature creates a more complete data set; however, it does introduce some limitations to the review methodology, in the sense that the search isn’t reproducible and thus introduces bias, and the material isn’t peer reviewed and may in turn be of poor quality.

Registered trials could also be searched using sources such as Current Controlled Trials. This data may not have been published; however, there will usually be a lead author identified who can be contacted and may provide the data. If they don’t release this information to you, this can still be mentioned in the write-up.

4 – Search Strategy

I’ve written about literature searching before here. Search terms should be devised to identify the literature that allows the research question to be answered; using the PICO/PIOS frameworks can aid this. Incorporating Boolean logic into the search strategy is essential to increase the sensitivity and efficiency of the search.

OR is used between synonyms

AND is used to combine other terms.

The search strategy can be piloted or ‘test driven’ to determine whether it is comprehensive enough and in turn allow it to be revised if appropriate, for example if you know that some existing papers on the topic have been missed. Despite even the most robust search strategy, you will inevitably receive some hits that have no relevance to your topic whatsoever! This can be limited by using the ‘filters’ on the database, such as study design, human, timespan (if relevant) etc.

At this stage in the review process, liaising with the university librarians can be a godsend!

5 – Screening the Literature

Pre-defined inclusion/exclusion criteria should be used to filter the literature returned from your search; the process for filtering your search results should be described clearly in the protocol.

Abstracts and titles are usually checked first to see whether the studies are relevant, potentially relevant, irrelevant or unclear due to insufficient detail. Those papers not excluded at this stage are then obtained in full text before having the full inclusion/exclusion criteria applied to them.

Whilst abstract-only publications and conference proceedings are legitimate reasons for exclusion due to their lack of detail, ‘English language only’ shouldn’t be an exclusion criterion. Globalisation makes it possible for research groups in one country to liaise with groups in other countries so that papers can be translated as necessary, in turn limiting language bias.

This process can be summarised as such:

Check Abstracts

Exclude those that are completely inappropriate; keep those that look appropriate, as well as studies that you aren’t sure about or that have insufficient detail to make a decision, and then obtain the full text.

Review Full Text against the Inclusion/Exclusion Criteria

Exclude inappropriate articles and keep those that are relevant.

This stage (as well as data extraction and quality assessment) is usually conducted by two reviewers, as misunderstandings and misinterpretations do occur when extracting data or making a judgement on the quality of included studies. Some reviews even include a kappa value to show reliability between those involved, which adds to the transparency of the data collection and review process, improving the trustworthiness of the review.
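As a rough illustration, this is how that kappa value might be calculated for two reviewers’ include/exclude decisions; the decision lists are invented, and scikit-learn’s cohen_kappa_score is just one convenient implementation:

```python
# Minimal sketch: chance-corrected agreement between two reviewers'
# screening decisions (1 = include, 0 = exclude), summarised as Cohen's
# kappa. The decision lists are invented for illustration.
from sklearn.metrics import cohen_kappa_score

reviewer_a = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
reviewer_b = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0]

kappa = cohen_kappa_score(reviewer_a, reviewer_b)
print(f"kappa = {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance level
```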

6 – Data Extraction

Develop a data extraction form, pilot it and then make necessary changes; standardised forms are available which make this process easier.

The information to be extracted can include the following:

  •       Study Design
  •       Sample Size per trial arm
  •       Participant Characteristics
  •       Description of Intervention
  •       Setting
  •       Outcome Measures
  •       Follow Up
  •       Attrition and Missing Data
  •       Results – Binary and Continuous Data for all outcomes
  •       Author conclusions

This process, as mentioned above, should be carried out by two reviewers independently; usually one reviewer undertakes the process with the second reviewer then checking.

7 – Quality Assessment

Pre-defined quality criteria exist for the majority of study designs and it is therefore usual for these published critical appraisal checklists, such as the PEDro scale or the Cochrane Risk of Bias tool, to be utilised for determining the quality of included studies.

When conducting a review it may be appropriate to operationalise the checklist so it is clear how to interpret the criteria. For example, “is there long term follow up?” could be defined to give a specific time period. This in turn should allow a standardised assessment of quality. If possible, get your interpretation checked by a fellow researcher; this process can be enhanced by calculating a kappa coefficient as detailed previously.

8 – Synthesis

Pre-defined approaches can also be used to synthesise the results of studies, such as the Levels of Evidence system. However, a decision needs to be made as to whether to undertake a quantitative or qualitative synthesis. The Levels of Evidence approach and the GRADE approach are regarded as narrative or qualitative syntheses.

Levels of Evidence

Strong – consistent findings among multiple high-quality RCTs

Moderate – consistent findings among multiple low-quality RCTs and/or controlled clinical trials (CCT) and/or one high quality RCT.

Limited – One low-quality RCT and/or CCT

Conflicting – Inconsistent finding among trials (RCTs or CCTs)

No evidence – no trials.

This approach can also be operationalised, e.g. what constitutes ‘multiple’? The wording could also be changed if you are systematically reviewing the reliability or diagnostic validity of an orthopaedic test, as opposed to an intervention, where you’d be unlikely to find RCTs/CCTs. A ‘GRADE’ approach is more commonly used in the current literature in an attempt to define the risk of bias as opposed to the level of quality (a higher risk of bias is associated with a lower quality paper and vice versa); such an approach makes it easier for the quality of the included studies to be taken into account when summarising the results.

A meta-analysis increases the precision of estimates of a treatment effect; it is a statistical analysis of data from a number of studies to synthesise their results. A meta-analysis can be regarded as a quantitative synthesis as a single numerical statistic is created. There are however criteria that need to be satisfied; if these criteria are not fulfilled and a meta-analysis is not indicated, a qualitative or narrative synthesis should be conducted.

Criteria:

  1. Are interventions, populations receiving interventions and outcomes sufficiently similar across studies?
  2. Can results be expressed as a single measure of effect as a numerical value?
  3. Does it make clinical sense to combine results into a single estimate?

The process of a meta-analysis is the same as for a systematic review; however, the outcome data is presented differently. Individual results are presented using a forest plot as mean effect sizes with 95% confidence intervals, positioned either side of a line of ‘no effect’. The line of no effect separates the ‘favours treatment’ side of the plot from the ‘favours control’ side.
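To show what sits behind each line of a forest plot, here is a minimal sketch of fixed-effect (inverse-variance) pooling for three hypothetical trials; the effect sizes and standard errors are invented, and real meta-analyses would also consider random-effects models and heterogeneity statistics:

```python
# Minimal sketch: fixed-effect (inverse-variance) pooling of mean
# differences from three hypothetical trials. Each study is weighted by
# the inverse of its variance, so more precise studies count for more.
import math

effects = [2.1, 3.4, 1.8]   # mean difference reported by each study
ses     = [1.2, 0.9, 1.5]   # standard error of each mean difference

weights   = [1 / se**2 for se in ses]
pooled    = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect = {pooled:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```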

9 – Presentation of Results

The main features of each included study should be presented, such as patient demographics, sample size, control group and outcomes; the outcomes reported depend on the study and the review question, e.g. 95% CI, p-value, kappa coefficient, sensitivity/specificity etc. There is usually a text commentary on the quality of the included studies and the main outcomes, as well as a summary table. Tables are usually essential to convey the sizeable amount of data in an understandable and clear format. Tables may portray the following:

  •       Search strategies
  •       Search terms/databases
  •       Yield from search strategy
  •       Total publications
  •       Relevant publications
  •       Main features of included studies
  •       Quality of included studies
  •       Outcomes

10 – Summary

The discussion or summary paragraphs should present the major conclusions of the review with regard to answering the initial research question. The sources of the evidence should be portrayed in relation to the quality/strength of the evidence, to allow you to determine the confidence you can have in the conclusions, i.e. the summary needs to be linked to the quality of the primary work. Limitations of the review should be considered, such as any biases in the primary data (e.g. publication bias), missing data, the search strategy etc. The review should then finish with a summary of the clinical implications of the main review findings before stating the implications for further research.

How to read and appraise a systematic review.

Understanding the methodology and what needs to be done is a good starting point for appraising a review, so being familiar with the process detailed above and checking whether it has been followed is crucial. Until relatively recently, not much consideration was given to the appraisal of systematic reviews. However, there are now criteria available to critique these types of studies rather than just blindly accepting them; criteria such as AMSTAR (Shea et al. 2007) and CASP can be used to help you critique the paper. I will look at the salient points in a little more detail using points adapted from Greenhalgh (2010) and Littlewood and May (2013). The appraisal of SRs has been made easier by the PRISMA statement – a standard, structured format for writing up and presenting SRs. For an example of a poor systematic review see here.

  1. Is the clinical question addressed by the review clear and was the method given a priori?
  2. Was a thorough search of the appropriate databases done and were other potentially important sources explored (e.g. to avoid language bias)?
  3. Was methodological quality assessed and the trials weighted accordingly?
  4. Were at least two reviewers involved in the study selection/data extraction process?

You will all have written essays and know that when you are trying to synthesise a multitude of papers you can go down routes you never intended to. When undertaking an SR, the research question needs to be clearly defined and obvious, to allow the appropriate papers to be included or excluded; otherwise the review will not answer the question it set out to answer.

A thorough search needs to be clearly visible; this may include hand searching, snowballing reference lists and, if a meta-analysis is to be conducted, sometimes contacting the lead author for data that may not have been included in the original write-up.

As detailed above, aspects of the methodology (study selection/quality assessment/data extraction) need to be conducted by more than one reviewer. Misunderstandings and misinterpretations do occur when extracting data or making a judgement on the quality of included studies; a kappa value may be utilised, as previously discussed.

Pure Dynamics Upper Limb Symposium: The Shoulder October 7th

Hey Guys,

I’m about 50% through my next post looking at Systematic Reviews; what makes a good one, what makes a bad one and how do you tell? This should hopefully be up in the next month or so. As usual, I will post the link on twitter, so if you aren’t following my site, you can find the link there.

I want to take this opportunity to promote a conference/symposium that I’m due to be presenting at being hosted by Pure Dynamics; I believe it is their inaugural event and if you know Amanda and Catrin or follow them on twitter, then you know it will be a fun day! 

This symposium is aimed towards the clinician and will therefore be packed with relevant information that you can transfer directly into clinical practice, with particular focus on the shoulder (I do believe that they will eventually be making their way around the body?)

More information about the conference can be found here, however some of the highlights include:

  • How to transfer Evidence/Research into clinical practice – I’ll be delivering this talk.
  • A review of current tendinopathy research and its relevance to clinical practice.
  • Evidence Based Assessment and Treatment of the Rotator Cuff – a talk delivered by Mr Adam Meakins and therefore guaranteed to be enjoyable!
  • Clinical implications of surgery and post-operative care of the shoulder.

For an information rich, enjoyable CPD event that will positively influence your practice, I encourage you to get involved!

I’ll get the post up on Systematic Reviews soon!

A


Sources of Bias in Clinical Trials

Sorry for the delay in getting this post up; I’ve been writing it on and off for a couple of weeks now. Having successfully moved house, started a new job, finished some MSc exams and finalised the paperwork for a second job, I’ve now had the time to sit down and finish this post!

As well as working clinically, I’m excited to say that I will now be actively involved in research, having been offered a Research Physiotherapist post at the University of Warwick starting next month. I hope that this will both inspire a new series of posts for Applying Criticality, and also give me a different perspective to my writing – inside out as well as outside in!

This post is going to introduce some basic sources of bias to look out for within RCTs. To recapitulate, RCTs are the gold standard for determining the efficacy of an intervention; however, not all RCTs are of the same quality and in turn they won’t all provide the same strength of evidence, i.e. some trials are better than others. This post aims to act as a reference piece for when you are reading papers or reviews.

I’d like to thank the University of York and Sheffield Hallam University for some of the material included in this post.

Some forms of bias are well known, others less so. Below is a list of potential forms of bias that I will be mentioning throughout the post. This list is not exhaustive and I won’t cover everything associated with each form (primarily because I don’t know everything associated with each, and secondly, I’d be writing a book!). For more information on critical appraisal, and subsequently bias, have a look at this from Cochrane.

  • Selection Bias
  • Subversion Bias
  • Technical Bias
  • Attrition Bias
  • Consent Bias
  • Ascertainment Bias
  • Dilution Bias
  • Recruitment Bias
  • Resentful Demoralisation
  • Delay Bias
  • Chance Bias
  • Hawthorne Effect
  • Analytical Bias

Forms of bias from the above list are often categorised into those which can occur pre-randomisation and those that can occur post-randomisation. I personally find this to be quite a useless categorisation as, generally, only Selection Bias can occur before the patients are randomised!

Selection Bias limits the internal validity of a trial (I’ve briefly written about Internal Validity here; how well can the results of the trial be trusted?).

It describes a systematic difference in characteristics between those selected for inclusion in a study and those who aren’t. It occurs when the study sample does not represent the target population for whom the intervention is being researched, and may be due to participants being selected for inclusion on the basis of a variable that is associated with the outcome of the intervention.

This can be minimised (or even abolished) by implementing methods such as randomisation, matching pairs (see here), clear inclusion/exclusion criteria (with justification) and a strong recruitment strategy.

Ever read a paper and thought that the equivalence at baseline was too perfect, especially if the randomisation was simple? Allocation may have been subverted.

Subversion Bias always conjures up fantastic mental pictures for me. It occurs when the researcher manipulates the recruitment of the participants such that the groups aren’t equivalent at baseline. An old qualitative paper by Schulz demonstrated this to be quite widespread in healthcare research, with some accounts of researchers x-raying envelopes in order to obtain the randomisation code or even breaking open locked cabinets to subvert allocation!

Quantitative evidence also exists, from this paper: 250 RCTs were reviewed and classified into ‘Adequate Concealment’ (difficult to subvert), ‘Unclear’ or ‘Inadequate Concealment’ (subversion was able to take place). The findings demonstrated that badly concealed allocation produced larger effect sizes.

In theory, larger trials should be better at detecting true effects as they have greater power, while smaller, less well-powered trials should not systematically show larger effects. It shouldn’t be the other way round, theoretically speaking. In practice, however, smaller trials often do report the larger effect sizes, and this paper shows that when they do, it is often due to poor concealment of allocation in small trials; when trials were grouped by their methods of allocation, adequate concealment reduced effect sizes by 51%.

Secure allocation can prevent subversion occurring, and it need not be too expensive, however, it is essential to prevent researchers manipulating recruitment and skewing the outcomes of trials. Examples of which include telephone allocation from a dedicated unit, or utilising an independent researcher/person to undertake allocation.

Technical Bias in physiotherapy is quite rare; I can’t think of an example, but I would love to hear of one if any of you reading this know of any! It occurs when the allocation system fails, often, although not exclusively, due to computer error.

A commonly cited example occurred during the COMET I trial, which was investigating the effects of different types of epidural anaesthesia for women during labour. The trial was using ‘minimisation’ (a method of allocation used to overcome the issues surrounding blocked randomisation) through computer software. The groups were minimised on the basis of the mother’s age and ethnicity; however, the programme had a fault.

1000 women were recruited: 3% were allocated to one intervention arm, 53% to another, and 52% to the third. Subsequently, the trial had to be restarted (the birth of COMET II) with 1000 new women recruited and randomised. If you are conducting research and using a computer programme to allocate your participants, check the balance of your groups as you go along!
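For those curious what minimisation actually does, here is a minimal sketch: each new participant is allocated to whichever arm would leave the smallest overall imbalance on the chosen factors. Real schemes usually add a weighted random element, which is omitted here, and the factors and participants are invented:

```python
# Minimal sketch of minimisation: allocate each new participant to the
# arm that would leave the smallest total imbalance across the
# stratification factors. Deterministic for simplicity; real schemes
# usually allocate with a weighted random element.
from collections import defaultdict

arms = ["A", "B", "C"]
# counts[arm][(factor, level)] -> participants already allocated
counts = {arm: defaultdict(int) for arm in arms}

def allocate(participant: dict) -> str:
    def imbalance_if(arm):
        # total spread across arms if this participant joined `arm`
        total = 0
        for factor, level in participant.items():
            hypothetical = [counts[a][(factor, level)] + (a == arm) for a in arms]
            total += max(hypothetical) - min(hypothetical)
        return total
    best = min(arms, key=imbalance_if)
    for factor, level in participant.items():
        counts[best][(factor, level)] += 1
    return best

print(allocate({"age_band": "under_30", "ethnicity": "white"}))  # first goes to A
print(allocate({"age_band": "under_30", "ethnicity": "white"}))  # second balances to B
```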

One of the most common types of bias seen in published trials is that of Attrition Bias (I assume you have all read the Antibiotics and CLBP trial; their failure to undertake an Intention-to-Treat analysis meant that their results were affected by Attrition Bias).

If a treatment has side effects, e.g. gastrointestinal ones, drop-outs may be higher amongst the less well participants, which can make a treatment appear to be effective when it is not.

As previously alluded to, Attrition Bias can be minimised by conducting an Intention-to-Treat analysis; this is when as many patients as possible are kept in the study and analysed in their original groups, even if they aren’t receiving any intervention.

Alternatively, the analysis of trial results can be subjected to a sensitivity analysis. This is where those participants who dropped out of one arm are assumed to have had the worst possible outcome, whilst those in the other arm are assumed to have had the best possible outcome. If the findings are the same, then you can be reasonably confident that the results aren’t subject to Attrition Bias.
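A minimal sketch of that best/worst-case logic, assuming a simple binary ‘improved’ outcome and invented numbers:

```python
# Minimal sketch of a best/worst-case sensitivity analysis for dropouts,
# assuming a binary "improved" outcome. Dropouts in the intervention arm
# are imputed as failures and dropouts in the control arm as successes -
# the scenario least favourable to the intervention. Numbers invented.

def success_rate(successes, completers, dropouts, impute_success):
    imputed = successes + (dropouts if impute_success else 0)
    return imputed / (completers + dropouts)

# intervention: 30 of 40 completers improved, 10 dropped out
# control:      18 of 40 completers improved, 8 dropped out
worst_intervention = success_rate(30, 40, 10, impute_success=False)
best_control       = success_rate(18, 40, 8, impute_success=True)

print(f"intervention (worst case): {worst_intervention:.0%}")  # 60%
print(f"control (best case):       {best_control:.0%}")        # ~54%
# If the intervention still wins under this harsh assumption, attrition
# bias is unlikely to explain the result.
```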

Another less well known source of bias is Consent Bias, which most frequently occurs when consent to take part in a trial is gained after randomisation, often only seen in cluster trials. Whilst a physiotherapy example evades me, a good one comes from Graham et al. 2002. In this trial, schools were randomised to a teaching package for emergency contraception; however, because consent was gained post-randomisation, more children participated in the intervention arm than in the control arm.

This can induce a volunteer effect whereby both the internal and external validity of the trial become limited.

Ascertainment Bias occurs when the person reporting the outcome can be biased. I’ve been looking forward to writing this paragraph as I’m able to cite homeopathy as my example. Those of you who know me will know that I have little time for the water salesmen/quacks, as not only do I believe their approach is implausible, unethical and complete voodoo – they also manipulate evidence to market their claims. Stepping off my soap box, Ascertainment Bias occurred in the following situation.

A homeopathic dilution of histamine was shown in an RCT of cell cultures to have significant effects upon cell motility (motion); the measurement of cell motility, however, was not blinded. When the study was repeated with the assessors blind to which petri dish had been treated with distilled water and which had been treated with distilled water, sorry, I mean the homeopathic dilution of histamine, the significant effect was nowhere to be seen.

Having just spoken about homeopathy, it’s ironic that the next source of bias I’m going to mention is Dilution Bias. This is when either the intervention or the control group gets the opposite treatment; this source of bias is present in all trials where non-adherence to the intervention has occurred.

A hypothetical example: in a trial investigating the effects of glucosamine and chondroitin for the management of knee OA, 6% of the control group are receiving the intervention as they’ve bought supplements themselves over the counter, while 46% of the intervention group have stopped taking their treatment. Thus, any apparent treatment effect will be diluted. How can this be controlled for? Dilution Bias will always be a problem for any intervention the control group can seek out themselves (e.g. low level aerobic exercise); it can be partially prevented by refusing access to the experimental treatment for the control group.

I put a little teaser out on Twitter today to see if anyone could define the next source of bias, Resentful Demoralisation. A rarely used term, but again one that fills my mind with fantastic images. This source of bias even baffled the incredibly brainy @neuromanter (99% of his comments are intelligent, insightful and analytical. Unfortunately, it’s the 1% that make it into blog posts!). @PDPhysios2 was closest with her attempt, however; I know @neuromanter is waiting for me to write about it on here! Resentful Demoralisation can occur when patients are randomised to the trial arm that they don’t want; in response they may report outcomes inaccurately, almost in revenge! The effects can be removed by utilising a patient preference design prior to recruitment/randomisation; only those patients who are indifferent to the treatment they receive are subsequently allocated. Next time you write a critical summary of an article, throw in the RD; the person marking your work will clearly see that you know your stuff!

The Hawthorne Effect is a commonly known source of bias; it is when an effect occurs simply from being part of the study rather than from the intervention itself. Often, if an intervention requires more contact time than the control, an effect is more likely to be seen – could this be due to the therapeutic relationship? Placebo?

This effect is usually countered by the placebo effect in the control group, and should be considered when designing the control intervention.

As you can probably tell, we are moving on to the more obscure, less well known sources of bias! Delay Bias is one that I hadn’t heard of before starting to research this post. It occurs when there is a delay between randomisation and the participants receiving the intervention. It is said that this will dilute any treatment effects the intervention may have (interesting when we consider waiting lists for some interventions in clinical practice...). Delay Bias can be accounted for by beginning the analysis for both the intervention and control arms of the trial from the time when treatment is received.

Chance Bias – yes, we are going that obscure! As the name suggests, the groups can be unequal at baseline for certain characteristics simply due to chance. This can be minimised by using a post-hoc ANCOVA analysis or by using stratification – it is probably better to use ANCOVA, as stratification could potentially introduce Subversion or Technical Bias, remember those?

The sources of bias I have written about so far generally occur, or at least have their influence, before data is collected. However, once a trial is completed and the data collected, it is still possible for the wrong conclusions to be drawn by analysing the data incorrectly.

With regard to Analytical Bias, it is most important to ensure that an Intention-to-Treat analysis is undertaken, but be wary of inappropriate sub-group analyses. CLBP anyone...

Data must be analysed by groups as randomised; when a per-protocol or active-treatment analysis is conducted, bias can be introduced which may inflate the effects seen. Those patients who do not receive the full treatment, i.e. drop-outs, are usually quite different from those who do; restricting the analysis is therefore a large source of bias.

When the main analysis is completed, it is very tempting for researchers to investigate whether the effects differ by group – particularly if the main analysis hasn’t shown the effects that the researchers wanted! CLBP anyone...

Examples include: is treatment more or less effective for men? Is it better or worse amongst younger people with that condition?

These are of course legitimate questions to ask and, as clinicians, questions that we want answered! However, from a scientific point of view, the more subgroups the researchers investigate, the higher the chance that a false or spurious effect is found; the sample size calculation, subsequent recruitment strategy and statistical tests in the study proposal are usually based on one comparison only.
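A quick back-of-the-envelope calculation shows why, assuming each subgroup test is independent, run at the usual 5% significance level, with no true effects anywhere:

```python
# Minimal sketch: the chance of at least one spurious "significant"
# subgroup finding grows quickly with the number of tests, assuming
# independent tests at alpha = 0.05 and no true effects.
alpha = 0.05
for k in (1, 5, 10, 20):
    p_false_positive = 1 - (1 - alpha) ** k
    print(f"{k:>2} subgroup tests: {p_false_positive:.0%} chance of a false positive")
# 1 -> 5%, 5 -> 23%, 10 -> 40%, 20 -> 64%
```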

A rather puerile example of this can be seen in a paper in the Lancet: a large RCT investigated the use of aspirin for heart attacks. Subgroup analysis revealed that aspirin was ineffective for people with the star signs of Gemini or Libra. This highlights quite nicely the dangers of subgroup analysis.

Some other examples demonstrating the dangers of subgroup analysis:

  • Tamoxifen was ineffective in women younger than 50 years old – erroneous.
  • 6 hours after a heart attack, streptokinase was ineffective – erroneous.
  • Aspirin is ineffective in preventing secondary heart attacks amongst women – erroneous.
  • Antihypertensive medication is an ineffective primary prevention for women – erroneous.
  • Beta-blockers are ineffective in older people – erroneous.

In summary, if subgroup analyses are to be used, to avoid false, erroneous or spurious findings they should be based on a sound hypothesis and stated prior to commencing the trial. This is important because the more you manipulate data or go looking for evidence of an effect, the more likely you are to eventually find one.

So, what do I hope you have gained from reading this post? If you are involved in conducting research, then care must be taken to avoid the aforementioned biases affecting the quality, internal/external validity and strength of your trial. If you are a clinician, I hope you have gained awareness and understanding of some new, and some old, terms. RCTs are the gold standard for intervention research; however, this does not mean that all RCTs are reliable or perfect!

As always, please comment below or feel free to contact me on twitter!

A

Physiotherapy E-Petition

Hi Guys,

My next post on ‘Bias in Clinical Trials’ will be up next week – busy with MSc viva examinations at present!

Those of you who follow @AdamMeakins will know that he has started a government petition in order to help protect our profession. Please can all who read this site sign this petition here.

Recently, there have been cases of people calling themselves a Physio or Physiotherapist etc. when they are not HCPC registered.

These are, of course, titles protected by law; however, a loophole currently exists whereby they can be used as long as you have put a disclaimer somewhere/someplace! The issue of course is – who actually reads the small print!

Thanks for this and I hope you are all looking forward to my next post!

A

Quantitative Experimental and Quasi-Experimental Research

Having set the context and laid the foundations of Evidence Based Practice (EBP) in the previous posts, I will now focus attention on various types of study design, what they can tell you and how to appraise them. Please see my previous post for the outline of future posts to come!

In a previous post, I referred to the definition of EBP from Sackett et al. 2000 who mentioned the importance of conscientious and judicious use of the best research evidence; this would in turn imply that as clinicians, before we use or apply evidence, we need to make a decision about the quality of that evidence (you can see where I’m going here, can’t you..)

This is where the notion of ‘Critical Appraisal’ enters the party; a very good blog post from @AdamMeakins on his website about critical thinking can be found here and deserves a read!

I think Adam makes a couple of very pertinent points – Critical Appraisal does not have to be negative and it isn’t a criticism; Critical Appraisal is an assessment of the methodological quality (Greenhalgh 2010).

There are numerous research designs as I am sure you are very much aware! The table below (click to enlarge) shows the majority of primary research designs in a digestible format and this will be how they will be considered in future posts.

[Table: primary research designs – click to enlarge]

(Thank you to @NickySnowdon, Senior Lecturer at Sheffield Hallam University and Physiotherapy Research Society committee member for allowing me to kindly use her table!)

For the remainder of this post, I will be focusing on the Experimental/Quasi-Experimental study designs.

When reading a paper, it is useful to approach the article with a systematic method in order to ensure that you understand what the paper is about, and what the authors were trying to show. The suggested method that I use frequently (I think it may have been taught to me by Nicky in fact!) is as follows:

  1. Is this an original study?
  2. What was the research question and have the authors used the right study design?
  3. Who were the participants and how were they acquired?
  4. What type of methods were used, are they appropriate for the design?
  5. What type of analysis was used?
  6. Were the results interpreted logically from the method described?

Applying these questions, or keeping these questions in mind when reading the paper will help you to produce a logical critical appraisal that allows you to determine the quality of the paper you are reading.

The methods used by the researchers indicate how good the study is; don’t be fooled by the results! The critical appraisal process is essentially the means by which you establish what the researchers found and, in turn, how sure you can be that what they found is true.

Experimental research seeks to determine the efficacy of an intervention; the gold standard design for this is the Randomised Controlled Trial (RCT). The process should look like this:

[Diagram: RCT design process]

Within Group Analysis – This is where the baseline data from the participants is compared to their outcome data to determine whether any change within each respective group is statistically significant.

Between Group Analysis – This is where the data from the participants in the intervention arm is compared with the data from the participants in the control group, to see if there is any statistical significance and in turn judge the efficacy of the intervention.
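As a rough illustration of the two comparisons, here is a minimal sketch using invented scores and scipy’s paired and independent t-tests (the choice of test is purely for illustration; the appropriate analysis depends on the design and data):

```python
# Minimal sketch: within-group vs between-group comparisons on invented
# pain scores (lower = better) for an intervention arm and a control arm.
from scipy.stats import ttest_ind, ttest_rel

intervention_baseline = [52, 48, 55, 60, 47, 51]
intervention_outcome  = [40, 41, 43, 50, 39, 42]
control_outcome       = [45, 44, 47, 52, 41, 46]

# Within-group: did the intervention arm change from its own baseline?
print(ttest_rel(intervention_baseline, intervention_outcome))

# Between-group: does the intervention arm differ from the control arm
# at follow-up? This is the comparison that actually speaks to efficacy.
print(ttest_ind(intervention_outcome, control_outcome))
```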

Be wary of statements in papers such as this:

“There were no statistically significant differences between groups however, within group analysis of the intervention arm demonstrated a statistically significant improvement”.

I’ll break this sentence down to demonstrate how some authors attempt to ‘spin’ their results:

No statistically significant differences between groups – No difference at the end of the trial between the intervention group and the control group, the intervention can therefore be deemed ineffective.

Within group analysis of the intervention arm demonstrated a statistically significant improvement – this is essentially a quasi-experiment (see below) and therefore would only indicate that the intervention MIGHT have caused the changes seen. The presence of the control group, however, indicates that this is not the case.

If this doesn’t make sense as of yet, carry on reading and I assure you it will!

This leads me on nicely to quasi-experimental study designs.

Some study designs do not include a control group – these can be seen in the second column of the table and are known as Quasi-Experimental study designs; this will naturally alter the design process somewhat:

[Diagram: quasi-experimental (pretest-posttest) design process]

This process is typical of a pretest-posttest study: a sample of participants are measured at baseline, then all receive the intervention being investigated before being re-measured. Whilst this study design can show us that any change MIGHT be due to the intervention, we cannot conclude that it IS the intervention that has brought about the change – a control group is needed for that!

  • Have the participants gone away and practiced the assessment procedures?
  • Have the patients ‘regressed to the mean’?

This study design is deemed ‘quasi’ as it isn’t a true experiment; whilst it is not possible to conclude that any change seen is due to the intervention, they do offer some useful insights. Quasi-experimental studies often act as a precursor to ‘true’ experimental designs and in turn may give an indication of the time needed for the intervention. Furthermore, whilst the efficacy of the intervention remains unknown, its safety can be determined.

I haven’t portrayed it visually; however, another study design to consider is the ‘Comparison Trial’. If you consider the RCT process outlined above, another intervention arm would be added. This is usually used when the efficacy of both interventions has been established and the researcher wants to determine which is more effective, how to choose between the interventions, or who may respond to each.

As a clinician reading a paper, it is important that your appraisal process relates to your own clinical practice – this is a bit of a bugbear of mine. Critical appraisal tools such as those produced by CASP or seen in Cochrane papers are useful in determining methodological quality and are integral to the production of high quality review papers. However, from my own experience, I find them difficult to relate to the clinical environment.

Scores of 4, 8, 9 etc. have a place in secondary review papers as they allow the reader to judge the quality of the evidence from which conclusions are produced; however, when it comes down to asking the ‘so what’ questions of a paper (will this work in my patients? are the results due to chance?), these tools lack value.

So, what issues regarding methodological quality, do clinicians need to be aware of?

  • How can I be sure that the results seen in these patients, will work for my patients?
  • How can I be sure that the researchers haven’t biased the results?
  • How can I be sure that the results seen aren’t just due to chance?

The answer to the third question is through statistics; however, this is beyond the scope of the current post. I hadn’t planned to include a statistics post this year but, if the demand is there, I am sure I can squeeze in a post looking at some basic experimental statistics. Please leave a reply or contact me on twitter (@AndrewVCuff) if this is something you would like me to do.

In the meantime, have a look at this article from Oxford.

The first two questions relate to the sampling process and the allocation process.

Sampling – What it is and Why it occurs.

Sampling is the process by which a group of research participants are obtained from a target population, in order to be ‘studied’.

I like to keep things simple so that I can understand them; a population refers to everyone, a sample is part of that population.

The reason why sampling occurs is both theoretical and practical. In theory (and in reality), the target population is going to be diverse: some people will have mild pathology, others severe pathology, and there will be differences in socioeconomic status, culture etc. Practically, who the researcher can actually access is important.

Within sampling, the concept of validity needs to be considered: is the sample representative of the target population? This in turn relates to the external validity of the paper: are the results generalisable to the wider population, i.e. can I use the results of this paper to guide my clinical practice? Ideally you would want the sample to include such breadth that the results are valid for the target population as a whole.

It is through considering these factors that you determine the transferability/generalisability of the results to your clinical practice.

There are many different methods of sampling, with the method used often dependent on the aim of the research and the resources available to the researchers. Some of the main ones you may come across can be categorised into ‘Random’ and ‘Non-Random’ sampling:

Random

This method gives each person within the target population a measurable probability of being selected as part of the sample; it is seen as the theoretical ‘gold standard’ as the representativeness of the sample relative to the target population is increased.

Non-random

Convenience sampling – you’ll see this a lot! It is essentially who the researchers have access to, those people near at hand (it is also cheap and easy!)

Purposive sampling – this method selects participants on the basis of the characteristics of interest, with the aim of obtaining a sample with a particular characteristic.

E.g. Neck pain patients with experience of failed ultrasound and acupuncture.

Allocation Process

Once the sample is obtained, if there are different ‘arms’ to the study, e.g. in the RCT above there was an intervention group and a control group, the sample needs to be split into the relevant groups. If there aren’t different arms to the study, e.g. the pretest-posttest study outlined above, then allocation is not required.

How can the sample be split?

Randomisation – Seen as the gold standard as it spreads a cross-section of variables across the arms, meaning the groups will usually be equivalent at baseline.

Randomisation is more likely to work (i.e. produce equal groups at baseline) in a large sample (>100), not so much in a smaller sample.
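A quick simulation makes the point; this minimal sketch assumes simple randomisation and a binary characteristic present in 50% of the target population (all numbers invented):

```python
# Minimal sketch: how baseline imbalance from simple randomisation
# shrinks as the sample grows. Two arms of n/2 participants each; the
# imbalance is the gap in prevalence of a 50%-prevalent characteristic.
import random

random.seed(1)

def average_imbalance(n, trials=2000):
    total = 0.0
    for _ in range(trials):
        arm_a = [random.random() < 0.5 for _ in range(n // 2)]
        arm_b = [random.random() < 0.5 for _ in range(n // 2)]
        total += abs(sum(arm_a) / len(arm_a) - sum(arm_b) / len(arm_b))
    return total / trials

for n in (20, 50, 100, 400):
    print(f"n = {n:>3}: average baseline imbalance ~ {average_imbalance(n):.1%}")
# the imbalance roughly halves each time the sample size quadruples
```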

Matching Pairs – More commonly used in studies with a small sample size (<50); this process attempts to equalise confounding factors e.g. height, weight, gender, baseline measurements.

When reading an RCT, this area is crucial as it can introduce a high level of bias into the study which can really undermine its findings – bias will be covered in the next post.

You need to consider whether the randomisation method was appropriate and useful i.e. was there any way that the researchers could have potentially found out the randomisation order? Ideally the randomisation should be performed ‘off-site’ and the subsequent randomisation concealed.

I usually read this section whilst considering a mental tick box:

–          Was the sample randomised?

–          Was this appropriate?

–          Was the Method of Concealment explained?

Summary

I hope this post has allowed you to begin to understand the critical appraisal process slightly better, or at the very least has added a new dimension to your existing appraisal method. Critical appraisal is naturally an individual process, and one that will continue to evolve and develop. In the next post I will look at bias in experimental study designs.

A few tips to finish:

  • If you are writing or presenting an appraisal, don’t be descriptive – set the scene but don’t go into immense detail.
  • Consider the negative and positive aspects of the paper – Remember critical appraisal isn’t necessarily a criticism/negative evaluation.
  • Decide whether the paper is of good/moderate/poor quality and the strength of the evidence that it provides you with.

Thanks for reading guys!

A

References

Greenhalgh, T. (2010). How to Read a Paper: The Basics of Evidence-Based Medicine. 4th ed. Oxford: BMJ Books.

Sackett, D. L., Straus, S. E., Richardson, W. S., Rosenberg, W. & Haynes, R. B. (2000). Evidence-Based Medicine: How to Practice and Teach EBM. Edinburgh: Churchill Livingstone.

Diagnostic Validity

Hi Guys,

As some of you may be aware, I recently wrote a guest blog for RunningPhysio looking at Diagnostic Validity.

I will be following this post up with a post on here looking at how to appraise reliability papers; in the meantime, you can read my post here.

You should also take this opportunity, if you don’t already, to follow Tom – the creator of RunningPhysio – on twitter; it turns out Tom is a fellow Oxford Brookes graduate! As well as following Joe – former Sheffield United Physiotherapist and fellow MSc student – who provided me with the material to write the post.

My next post looking at Quantitative Experimental/Quasi-Experimental research and some considerations for critical appraisal is written and will be posted on Monday.

Enjoy your Bank Holiday!

A

Future Posts

Hi Guys,

Thank you for all the comments, retweets, mentions etc. since I started the blog a couple of months ago; they’ve been both incredibly supportive and constructive.

I was surprised nobody commented on the video of laughing Goats!?

I’ve created a list of the posts that I currently have planned and aim to have completed by the end of 2013 at the latest:

1 – Clinical Reasoning and Research Questions

2 – What is Evidence Based Practice?

3 – Literature Searching and Strategy Formation

4 – Quantitative Experimental Research and How to critically read an article

5 – Qualitative Research

6 – Quantitative Non-Experimental Research

7 – Pulling your critical appraisal together

8 – Systematic Reviews

9 – Synthesising Evidence

10 – Sources of Bias in Quantitative Experimental Research

As you can see, I’ve completed three already which is good going! I’ve been asked by a couple of people to write a post looking at diagnostic validity (i.e. specificity, sensitivity and the like) which I hope to be able to get up within the next week.

This leads me on nicely to the bigger picture of what I have planned for when I’ve completed the currently proposed series of posts. In 2014, I hope to produce a series of posts which introduces the concepts of psychometric properties of outcome measures and measurement tools that we each use regularly in clinical practice as well as dedicating some time and effort to stats (even the word makes me cringe!).

I hope this gives you a bit of a clearer view of where the blog is at present and where it’s going. Any comments, criticisms, suggestions etc. are all welcomed and I would love to hear from you.

A


Searching the Literature: That’s only in Systematic Reviews, right?

As a clinician it is important to ensure that you can undertake a relatively simple, but effective, literature search or, at the very least, know your way around a few electronic databases!

(A more detailed understanding of the search process in research will be considered when we cover the systematic review study design and appraisal).

Literature searching is a great CPD skill and one which you will use (or should be using!) monthly, for certain, throughout your career. I recall receiving lectures from the librarian as an undergraduate each year and finding them incredibly boring as she spoke in a monotone voice about truncation and Boolean operators – all foreign language before a two-week cram for my dissertation! I hope that this blog post will help you formulate a literature search in a slightly less monotone manner; I don’t think anybody could make it not boring, unfortunately!

So, literature searching, looks complex and incredibly confusing when reading a large systematic review or Cochrane review, eh?

They can be complex, I’m not going to lie, but I am hoping to now give you the fundamentals with regard to finding literature to inform your practice from a clinician’s perspective.

Where will you find the literature?

A good place to start if you are still an undergraduate (or postgraduate) is the library catalogue; from there, electronic databases are your best bet, e.g. Medline, Cinahl, PEDro etc.

Most of them are much of a muchness in terms of how to use them and you will no doubt develop a ‘favourite’. I’m a Medline man (wow, what a term!) but must admit I find Google Scholar incredibly useful... shameful, I know.

Or is it? A recent paper by Gehanno, Rollin and Darmoni (2013) showed that if all 29 Cochrane reviews published in 2009 had only utilised Google Scholar, no references would have been missed.

Key aspects of a search strategy?

  • Databases
  • Keywords
  • Inclusion/Exclusion for studies
  • What you found in each database*
  • Filtering processes i.e. how did you choose your final papers?*

*These are maybe more relevant to writing an assignment or writing up a piece of research; the first three are more applicable for the practising clinician.

Defining the Literature Search

If you remember, in my previous post I presented Sackett’s definition of EBP and went on to say that an effective clinician will interpret the value or significance of research findings for their practice, i.e. for individuals or specific circumstances.

You therefore need to be selective when gathering relevant literature and should consider inclusion/exclusion criteria; these don’t need to be incredibly formal when searching literature to inform your practice (it’s not your dissertation after all!).

CLINICAL TIP: Inclusion/exclusion criteria can reduce the generalisability of the findings of a paper; however, in research they are needed and are considered ‘good’ science.

A way of structuring or creating inclusion/exclusion criteria can be to draft a table, sometimes called a ‘PICO’ (Kahn et al 2011), which you may have seen used more formally in papers; it can also be used for research question formulation.

|              | Inclusion | Exclusion |
| ------------ | --------- | --------- |
| Population   |           |           |
| Intervention |           |           |
| Control      |           |           |
| Outcome      |           |           |

Population – Description of a group of participants or patients, their clinical problem and the healthcare setting.

Intervention – The main action(s) being considered.

Control – This can sometimes be a Comparison, what is the main alternative?

Outcome – The clinical changes in health state and other related changes.

Keywords

So, you know where you’re going to look, and roughly what you are looking for via the inclusion/exclusion criteria – but how are you going to find the papers? The search strategy.

When choosing your key words to include in your search it is useful to think quite broadly, consider synonyms, different spellings, different ways of describing the same thing to help you maximise your search.

Using a table at this stage is again a great way to structure your thinking and allow your search to begin to take shape (it will also help you with using Boolean operators later on in the search process).

|              | Key Words |
| ------------ | --------- |
| Population   |           |
| Intervention |           |
| Outcome      |           |
| Study Design |           |

Study Design – The appropriate ways to recruit participants or patients in a research study, give them interventions and measure their outcomes.

When you have listed a few key words it can be useful to enter them into the Medical Subject Headings (MeSH) vocabulary browser on the U.S. National Library of Medicine database to ensure papers that have used different terminology are included in your search (Randy and Austin 2012).

A link to the vocabulary browser can be found here: http://www.nlm.nih.gov/mesh/MBrowser.html

It is also useful to consider the use of truncation at this stage; * is used to signify truncation, ensuring that all possible variations of a word are identified by the search.

Truncation replaces the ending of a word, allowing you to retrieve more results than a single word form would.

For example, say you are writing an assignment on ‘stretching’ or want to look into the effectiveness of stretching within a rehabilitation programme. You could enter the word ‘stretching’ into the database, and all the papers with that word in would return (along with a thousand more for some reason!?).

However, if you were to truncate and enter ‘stretch*’ you would receive the papers that have included the words ‘stretch’, ‘stretches’ or ‘stretching’ (along with a thousand more!!).

Truncation is a way of broadening your search.
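If it helps to picture what the database is doing, truncation is essentially prefix matching. Here is a toy Python sketch (the word list is entirely made up) showing both the power and the pitfall:

```python
# Truncation behaves like prefix matching: 'stretch*' catches every
# word that begins with "stretch".
words = ["stretch", "stretches", "stretching", "stretcher", "strength"]

matches = [w for w in words if w.startswith("stretch")]
print(matches)  # ['stretch', 'stretches', 'stretching', 'stretcher']
```

Notice that ‘stretcher’ sneaks in too; truncating too early in a word is exactly where some of those extra thousand papers come from.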

I am aware that at this stage, I may be losing you with technical gumpf so please enjoy this video, it is of goats laughing like humans: http://www.youtube.com/watch?v=NXpNdZpc7fg

So, we’ve reached the final straight, whereby I introduce Boolean logic; this helps you to maximise the efficiency and efficacy of the search. It appears very complex on the surface, but fear not:

Let us revisit the second table:

               Key Words
Population     Athletes, Sportsman, Sports People
Intervention   Stretch*
Outcome        Range of Motion, ROM
Study Design   Controlled Trial

As you can see, I’ve added a few hypothetical words to aid my explanation of Boolean logic. If we consider each row as a group, we use the Boolean word ‘OR’ within groups, for example:

Athletes OR Sportsman OR Sports People

We use the Boolean Logic word ‘AND’ between groups, for example:

(Athletes OR Sportsman OR Sports People) AND Stretch*
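One thing worth making explicit is the bracketing I’ve added above: databases apply AND and OR with their own precedence rules, so parenthesising each group removes any ambiguity. Purely as an illustration (the term lists are the hypothetical ones from the table, not a recommended search), here is the ‘OR within groups, AND between groups’ pattern assembled into a single string in Python:

```python
# Build a Boolean search string: OR joins terms within a concept group,
# AND joins the groups together. Multi-word phrases are quoted and each
# group is parenthesised so the logic is unambiguous.
groups = [
    ["Athletes", "Sportsman", '"Sports People"'],  # Population
    ["Stretch*"],                                  # Intervention
    ['"Range of Motion"', "ROM"],                  # Outcome
    ['"Controlled Trial"'],                        # Study Design
]

query = " AND ".join("(" + " OR ".join(g) + ")" for g in groups)
print(query)
# (Athletes OR Sportsman OR "Sports People") AND (Stretch*)
# AND ("Range of Motion" OR ROM) AND ("Controlled Trial")
```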

Image

This print screen shows how these words may be added when using PubMed.

An alternative way to search the literature with Boolean logic is to combine words with ‘OR’ as described above, but to then combine the resulting searches, rather than the words themselves, with ‘AND’; stay with me.

No.  Search term(s)
1    Valid*
2    Back pain OR back ache OR lumbar OR lumbar spine
3    Goniomet* OR Schober sign
4    Range* of motion* OR Assess*
5    1 AND 2 AND 3 AND 4

So, if we were searching the literature to determine which methods of measuring lumbar spine range of motion were valid, we might have a search history which looks like the table above.

So, I’ve performed four comprehensive searches (1–4) using the ‘OR’ operator; I’ve then combined the searches, not the individual words, to give me my fifth, comprehensive search. This is the same method in theory as the one previously described; however, in my experience, this second way produces a better search and is the way usually seen in the literature.
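As an aside, if you ever want to re-run a search like this without clicking through a database interface (handy for keeping a search up to date), it can be done programmatically. The sketch below is one possible way, using Biopython’s Entrez module to query PubMed; treat it as a sketch rather than a recipe: the email address is a placeholder NCBI asks you to supply, and the query is a lightly tidied version of search 5 from the table above.

```python
from Bio import Entrez  # Biopython: pip install biopython

Entrez.email = "you@example.com"  # placeholder; NCBI asks for a contact email

# Searches 1-4 from the table above, combined into one query (search 5).
query = (
    '(Valid*) AND (back pain OR back ache OR lumbar OR "lumbar spine") '
    'AND (Goniomet* OR "Schober sign") AND ("range of motion" OR Assess*)'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=20)
record = Entrez.read(handle)
handle.close()

print(record["Count"])   # total number of papers matching the search
print(record["IdList"])  # PubMed IDs of the first 20 hits
```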

Search Strategy for the Clinician – A Recap

  • Utilise electronic databases.
  • Use a ‘PICO’ system to draft inclusion/exclusion criteria.
  • Use a ‘PIOS’ table to draft some key words; think broadly.
  • Make these key words better by using a MeSH browser and truncation (see above)
  • Utilise Boolean logic to maximise the effectiveness/efficacy of your search.
  • Use your inclusion/exclusion criteria to identify the papers relevant to your practice.

Phew, well that was a difficult write! I hope you ‘enjoyed it’ and were able to take something away from reading the post, even if it’s just me jogging your memories! Now that you are able to gather a body of literature, the blog posts will move more towards critical appraisal and various study designs.

I will attempt to get the next blog post up within the month and this will be on Critical Appraisal and Experimental/Quasi-Experimental Study Designs! I’ve managed to get the first of my guest bloggers on board who will be making an appearance over the coming months 🙂

Thanks for reading and please comment below!

A

References

Khan K, Kunz R, Kleijnen J and Antes G (2011) Systematic reviews to support evidence-based medicine: how to review and apply findings of healthcare research. London: Hodder Arnold.

Richter R and Austin T (2012) Using MeSH (Medical Subject Headings) to enhance PubMed search strategies for evidence-based practice in physical therapy. Physical Therapy. 92(1), 124-132.

What is Evidence Based Practice?

Welcome back to ‘Applying Criticality’, may I first wish you all a Happy Easter and apologise for the time between my first and second posts; I’ve been incredibly busy with work, my MSc and various other activities I find myself unable to say no to, as well as being ill recently! All of which have conspired against me, but anyway, it’s written now!

Evidence based practice (EBP) has been a bit of a buzzword in healthcare since the 1990s, but what exactly is it? The textbook definition often quoted when discussing EBP is that from Sackett et al. (2000):

“Integration of the best research evidence with clinical expertise, the client’s preferences and values, and clinical circumstances”.

Image

Imms and Imms (2005) – This picture demonstrates the interconnected nature of EBP; the centre of the image, where the arrows converge, represents the patient.

What does this definition mean when applied clinically or, when you think about it, to you as an individual clinician?

It means that to be an effective clinician one must:

  • Appraise/determine the quality of research presented in the literature (and in turn discard poor quality research).
  • Interpret the value or significance of research findings to your practice i.e. individuals or specific circumstances.

I’m hoping you are starting to see that reading the conclusion of a paper is not enough for individual clinicians to demonstrate EBP, sorry (well, I’m not!).

CLINICAL TIP – When I read a paper, I keep the definition of EBP in mind; whilst reading a high quality paper is fantastic, and excellent for broadening one’s outlook, if the population, intervention or follow up does not replicate my practice, then the transferability of the paper is often limited.

Therein lies one of the current tensions between research and practice, but that’s for another day; it is just worth bearing in mind that whilst high methodological rigour is required to trust the findings, the paper may not be applicable to your practice.

Introduction to Types of Evidence

The middle bulk of the posts on this blog will look at different study designs and their appraisal; however, now may be a useful time to introduce them and get you thinking about the “hierarchy of evidence”.

Research can be Primary or Secondary.

Primary can be divided into Quantitative or Qualitative.

Qualitative research asks ‘what’, ‘why’ or ‘how’? It is considered to be the study design that reaches the parts that other methods cannot reach (Pope and Mays 2006).

Quantitative can be further divided into Experimental (RCT, Controlled Trial, Uncontrolled Trial) or Observational (Case Series, Cross-sectional, Cohort, Case-control).

Within Secondary, the main type of design will be the systematic review (although literature reviews and ‘masterclasses’ may also be seen).

I know what you are thinking: “Great, he’s listed all these designs I vaguely remember hearing but we only need to know about RCTs, right? They’re the gold standard, right!?”

Unfortunately not. RCTs are indeed the gold standard for intervention studies (although consider, early on, that just because a paper utilised an RCT design does not mean it is automatically rigorous or high quality). However, whilst the ‘Hierarchy of Evidence’ provides a nice framework, it is limited as it doesn’t consider the research question (remember that last post?) or, at times, even ethics.

Image

The research question dictates which design should be used to answer it, and therefore it could be argued there are multiple gold standards.

Some Examples

Sensitivity/Specificity of shoulder special tests – Diagnostic Study

Risk factors – Cohort Study

Effectiveness of Mobilisation – RCT

Patients’ thoughts/satisfaction – Qualitative

As each of the study designs is considered in future posts, what they can tell us and which research questions they should be used for will become apparent.

For more reading on what evidence based practice is, do check out this article from Sackett and colleagues, which many see as the foundation of EBP when it started in the 1990s: http://www.bmj.com/content/312/7023/71

My next blog post will look at literature searching for the practicing clinician and I hope to have it up sometime over the next couple of days 🙂

Please leave a comment below.

A

References

Imms, W., & Imms, C. (2005) Evidence based practice: Observations from the Medical Sciences, implications for Education.

Pope, C., & Mays, N. (2006). Qualitative Research in Health Care. 3rd Edition. London: BMJ Publishing Group.

Sackett, D. L., Straus, S.E., Richardson, W.S., Rosenberg. W. & Haynes, R.B. (2000). Evidence-based medicine: how to practice and teach EBM. Edinburgh: Churchill Livingstone.

Research Questions and Clinical Reasoning

Welcome to my first blog post! When this blog is finished I have three main aims that I want the series of posts to accomplish:

Allow the reader to..

  1. Understand, and be able to use an appropriate Strategy to establish a comprehensive body of literature.
  2. Undertake Evaluation of evidence and to reflect critically on how this may impact on clinical decision making.

And..

3.   In turn Facilitate the use of evidence to identify clinically appropriate solutions.

With these in mind this first post will look at why it is important for you as a physiotherapist to understand research, the purpose of the research question and how research fits into clinical reasoning.

Why is understanding Research important?

  • Understanding research may have some carry-over into developing logical reasoning and problem-solving processes in your clinical practice.
  • The ability of physiotherapists to read, understand and use research will help move the profession forwards, particularly in terms of knowledge, efficiency and effectiveness.
  • The ability to read and critique evidence will help maintain your currency as a qualified physiotherapist and in turn allow you to implement EBP.

The Research Question

I thought this would be a good place to start proceedings, as the research question is arguably the fundamental aspect of any research project or search: it gives the trial focus, the study design is formulated from it (more about this in future posts) and it forms the basis of critiquing the paper.

What is a Research Question?

“A statement that identifies the phenomenon/topic to be studied”.

  • The question which the research sets out to answer.
  • The core or central point of the study, from which the rest of the discussion and new insights follow.

A topic of interest needs to be identified before a research question can be formulated; once the topic is determined it needs to be streamlined to become more focused – a research question can then be generated. The research question should then be accurately and clearly defined within the paper.

Examples of Research Questions

“Is Hydrotherapy effective for the Management of Knee Osteoarthritis and is Hydrotherapy more effective compared to Land-Based exercise?”

“The Effectiveness of PNF Stretching upon Athletic Performance: A Systematic Review”.

“Which Physiotherapy interventions are associated with improvements in fear avoidance beliefs?”

Why is the Research Question important?

It is the first step in writing a paper or an assignment and clearly expresses the intention of the paper. In turn, it allows the paper to remain focused and underpins the introduction and discussion elements. From a reader’s point of view, it allows you to follow the author’s thinking as you make your way through the article.

Clinical Reasoning and Research

Clinical decision making should be based upon empirical evidence, logic, reasoning and justification, with clinical reasoning being the thinking that underpins clinical practice. Clinical reasoning can be defined as:

“A process of reflective enquiry, involving the client, which seeks to promote a deep and relevant understanding of the clinical problem, in order to provide a sound (evidence-based) basis for clinical intervention” – Higgs and Jones (2000).

Image

Image: Clinical Reasoning Model for Physiotherapists (Higgs and Jones 2000).

As an undergraduate or newly qualified physiotherapist, research suggests that the clinical reasoning process utilised is hypothetico-deductive; simply put, this means that many hypotheses are generated and then tested with the aim of disproving them.

This processing can be subject to a number of errors and biases. An awareness of these mental mechanisms and biases, termed ‘Metacognition’, may improve the quality of reasoning.

Part of professional development is becoming aware of your habitual patterns of thought, e.g. a favourite diagnosis, and your tendencies to use stereotypes or assumptions; this process requires reflection as you progress through your career.

In the absence of sound Clinical Reasoning, clinical practice becomes a technical operation requiring direction from a decision maker. Using evidence to improve your clinical reasoning skills will inevitably mean that your decisions will be evidence-based.

Now, having got the deep stuff out of the way and showing where evidence sits within clinical reasoning and your professional development, the next few posts can begin to develop your critical appraisal skills.

What’s next?

The next post will cover ‘What is Evidence Based Practice?’ and Search Strategies: Understanding and Formulating.

I hope you enjoyed this first post, and I look forward to writing some more!

A

References

Higgs, J. and Jones, M. (2000) Clinical Reasoning in the Health Professions. 2nd ed. Oxford: Butterworth-Heinemann.