
Evaluating qualitative papers in a multidisciplinary evidence-based journal club: a pilot study

Published online by Cambridge University Press:  02 January 2018

Raja A. S. Mukherjee
Affiliation:
Department of Mental Health, Learning Disability, Division of Mental Health, Social and Developmental Psychiatry, St George's, University of London, Tooting, London SW17 0RE, e-mail: rmukherj@sgul.ac.uk
Katherine Owen
Affiliation:
St George's, University of London
Sheila Hollins
Affiliation:
St George's, University of London

Type
Education & training
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
Copyright © 2006 The Royal College of Psychiatrists

Journal clubs have traditionally been important means by which clinicians, academics and trainees appraise research that relates to their field. In recent years the evidence-based style has become more prevalent, allowing knowledge gleaned from research to be applied to clinical situations. Currently, the main approach of evidence-based journal clubs is quantitative. This paper describes the evaluation of a modified journal club format, developed in an academic department of a medical school and combining the appraisal of both qualitative and quantitative papers.

The place of evidence-based journal clubs is now established. Since the first series of articles from McMaster University in Canada, in which a set of guidelines for evidence-based journal clubs was suggested, there has been a growing literature on the subject (McMaster University, 1981). Numerous established international journals have since published their own versions of these guidelines. Gilbody (1996) suggested the format that has been adopted in many psychiatric journal clubs. Warner & King (1997) reported that, once this format was implemented, 88% of participants improved their critical appraisal skills and 100% felt it was an appropriate use of academic programme time. Geddes (1998) suggested that the critical appraisal skills of many clinicians were at best rusty. He also highlighted the importance placed on this skill by the College, which introduced the critical review paper as part of the core skills tested in the MRCPsych examination. Dhar & O'Brien (2001) highlighted the usefulness of this approach for trainees preparing for postgraduate examinations. Owen et al (1995) pointed to its value in improving research practice, and Geddes & Harrison (1997) suggested that adopting this approach improves clinical practice.

Although the argument that qualitative research has an important role is becoming more widely accepted, there is little evidence of a corresponding growth in the ability to evaluate the quality of qualitative research. There is an ongoing debate about how this can be done and what criteria should be used. In 1998 the National Health Service commissioned a review of the literature, and the resulting report highlighted criteria that could be used to differentiate the quality of qualitative papers (Murphy et al, 1998). Pope & Mays (1999) summarise some of these points and offer further guidelines.

The idea of changing the journal club format within the Department of Mental Health (Learning Disability) from an older style to a more evidence-based approach had been under discussion for some time. However, the expectation that this would be based on quantitative research alone was not acceptable because of the unique mix of medical and social science skills within the group. Each theoretical perspective was felt to have equal validity and therefore required equal consideration in the journal club. It was proposed to establish a journal club that would present and assess both qualitative and quantitative research papers on the same subject in the same session. It was envisaged that critical appraisal skills in both quantitative and qualitative research would develop among all members, regardless of their discipline. As the critical appraisal of qualitative papers has not previously been described, this component of the process was evaluated.

Method

Journal clubs are held regularly as part of the postgraduate academic meetings of the Department of Mental Health, Learning Disability. They are attended by doctors, psychologists, clinical and social science researchers, and multidisciplinary community team members, with a variable attendance averaging about ten individuals. It was decided to pilot the joint (i.e. qualitative and quantitative) process of assessment with a view to extending it to a wider audience if successful.

The structure of our new journal club was modelled loosely on that described by Gilbody (1996). A different presenter was chosen for each session: a psychiatric specialist registrar for the quantitative section and a social science researcher for the qualitative section. The structure was modified in response to comments received throughout the study period.

Lists of questions used in quantitative appraisal are available from numerous sources (Greenhalgh, 1997; Sackett et al, 1999). These were summarised and provided to the facilitators. For the qualitative papers, a series of questions based on the framework suggested by Pope & Mays (1999) was developed (see Appendix). A glossary of terms, compiled from the available literature, was also provided to both groups (available from the authors).

Two short questionnaires, comprising a mixture of open-response questions and 5-point Likert scales, were developed. The first was used to obtain baseline opinions. The second, distributed after four sessions over a 6-month period, was used to assess any change in confidence in appraising qualitative papers, as well as participants' enjoyment of the new format and its perceived usefulness.

The collected data were analysed using Stata version 7 for Windows. Non-parametric statistics were used to assess the change in confidence among those completing the journal clubs, under the null hypothesis that there would be no change.
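For readers wishing to reproduce this type of analysis with other software, the following is a minimal sketch in Python of a Wilcoxon matched-pairs signed-rank test on paired Likert ratings. The study itself used Stata version 7; the before-and-after ratings below are hypothetical illustrative values, not the data collected in this pilot.

# Minimal sketch: paired non-parametric test of the null hypothesis of no change.
# The ratings are hypothetical 5-point Likert scores for nine participants,
# used only to illustrate the method; the study analysis was run in Stata.
from scipy.stats import wilcoxon

before = [2, 1, 2, 3, 2, 1, 2, 4, 2]  # confidence at baseline (illustrative)
after = [3, 3, 4, 4, 3, 2, 4, 4, 3]   # confidence after four sessions (illustrative)

# zero_method='pratt' retains zero-difference pairs in the ranking, which is
# helpful with coarse Likert data where tied before/after scores are common.
stat, p_value = wilcoxon(before, after, zero_method='pratt')
print(f"Wilcoxon signed-rank statistic = {stat}, p = {p_value:.3f}")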

Results

Table 1 provides a breakdown of those attending the initial and final journal clubs of the pilot study. Table 2 shows how confidence levels changed during the course of the study and gives a breakdown of the perceived educational value, enjoyment and usefulness of the new format. In all areas, seven or more of those attending gave positive responses.

Table 1. Breakdown of those attending the pilot study

Speciality/role                  Attended initial session    Attended final session
Consultant psychiatrist                     6                          3
Specialist registrar                        3                          3
Speech and language therapist               1                          1
Social scientist                            2                          2
Training advisor                            1                          0
Nurse                                       1                          0
Associate specialist                        1                          0
Total                                      15                          9

Table 2. Confidence levels of nine participants in assessing qualitative papers before and after the pilot study based on a 5-point Likert scale

                                                                    n     %
Confidence in evaluating qualitative papers at start of study¹,²    2    22
Confidence in evaluating qualitative papers at end of study¹,³      7    78
Considered this format of educational value                         8    89
Enjoyed the new format                                              8    89
Considered this was a useful process                                9   100

At the start of the pilot study the majority of participants had little confidence in appraising qualitative papers. Those who were more confident tended to be researchers who were already familiar with qualitative research methods. Owing to the small number in the study, the non-parametric Wilcoxon matched-pairs signed-rank test was used. A significant change in confidence was found (z=2.535, P=0.012). The largest changes were seen in those with the least initial experience of reading and appraising qualitative papers.

Discussion

Despite increasing recognition of the value of the evidence-based journal club, there has been little, if any, appraisal of qualitative papers in journal clubs in medical institutions. However, there is an increasing body of evidence that qualitative research is as important as quantitative research. Rosser (1999) argues that the benefit of medical evidence for patients will only be achieved when the quality and relevance of the evidence are blended with the context and values of the patient. Greenhalgh & Hurwitz (1999) highlight the importance of narrative, particularly its role in an evidence-based world. They emphasise the importance of listening to and understanding patients' views, and suggest that the process of taking a history is comparable to methods of qualitative research. They further argue that the relevance of qualitative research should not be ignored, as it often seeks a deeper truth and aims to understand the significance of phenomena (Greenhalgh, 1997; Greenhalgh & Hurwitz, 1999). Malterud (2001) argues that qualitative enquiry could contribute to a broader understanding of medical science and that methods of patient care are based on more than just the results of clinical experiments.

The results of this study suggest that it is possible to apply the principles of evidence-based journal clubs to qualitative papers. As the new-format club progressed, it became more difficult to find both qualitative and quantitative articles on the same topic. Moreover, many ostensibly qualitative papers contained both quantitative and qualitative elements. Some of the qualitative papers were also very long, and owing to time constraints more guidance was needed on which areas of the paper to read. Following the conclusion of the study, it was decided to split the journal club into two separate sessions. Box 1 summarises practical tips to help run a qualitative journal club.

The multidisciplinary nature of the Department of Mental Health, Learning Disability meant that, initially, senior colleagues familiar with qualitative research were able to facilitate the journal clubs. As sessions progressed, other people were increasingly able to facilitate using the guidelines (see Appendix) and the glossary of terms (available from the authors) as a resource. The criteria in the guidelines tended to be adhered to strictly during the early sessions, but adherence became less rigid as people grew more confident. This suggests that, with the help of the guidelines, it would be possible to extend this format to other settings, even where experience of reading and appraising qualitative research is minimal.

Box 1. Practical guide to running a qualitative journal club

  1. Separate qualitative and quantitative sessions

  2. Inform those attending the journal club of the title of the paper before the meeting

  3. Ensure sufficient time is allowed (at least 45 min but preferably 60 min)

  4. Try to pick papers that are not too long

  5. With longer articles the facilitator must guide the readers to the main areas to address

  6. Provide each group with a glossary of terms and questionnaires

  7. Allow plenty of time for group discussion of the main questions

  8. The facilitator of each session must be prepared with answers to all the questions and must chair the meeting strictly to time

  9. Start with the questions in the guidelines (see Appendix) before asking questions outside them

  10. Some experience of qualitative research by a member of the group is useful, especially initially

A further benefit is that members of the department are now more aware of qualitative methods and may be more comfortable using such methods in research projects, a view supported by Owen et al (1995).

This pilot study has shown that the critical appraisal of both qualitative and quantitative papers can easily be introduced into an existing journal club. It suggests that there are advantages to be gained from extending critical appraisal skills to include qualitative research papers.

Appendix

Critical appraisal guidelines for qualitative research

  (1) What was the aim/research question?
      Was it clear?

  (2) Who took part in the study?
      (i) Type of participants
      (ii) Number of participants

  (3) What sampling strategy was used?
      (i) Theoretical
      (ii) Purposive/purposeful
      (iii) Until saturation reached
      (iv) Convenience
      (v) Probability (each person has an equal chance of being selected)
      (vi) Other
      Did the sample include the full range of possible cases/settings for conceptual generalisations to be made?

  (4) What method was used to collect data?
      (i) Individual interviews
      (ii) Focus groups
      (iii) Observation
      (iv) Analysis of documents
      (v) Other
      Were the reasons for the choice of method explicit? Would a different method have been more appropriate?

  (5) What method of analysis was used?
      (i) Grounded theory
      (ii) Phenomenology
      (iii) Thematic
      (iv) Content
      (v) Other
      Was the researcher explicit in describing the analysis process?
      Was the analysis systematic?
      How well did the analysis succeed in incorporating all the observations?
      Was any computer software used to manage the data?

  (6) Results: what were the main themes or other findings discovered in this paper?
      Is it possible to follow links between the data and the explanations or theory given?
      Is the setting/context adequately described so that findings could be related to other settings?

  (7) Were any methods used to enhance rigour/trustworthiness?
      (i) More than one person involved in analysis
      (ii) Respondent validation (feedback to research participants)
      (iii) Searching for negative cases, i.e. those which do not fit the theory
      (iv) Triangulation (the use of more than one method)
      (v) Reflexivity (considering the effects of the researcher on what is found, e.g. through use of a diary, or inclusion of the researcher's background and personal characteristics)
      (vi) Other

  (8) Your views
      How understandable was the paper?
      How valuable did you find the results?
      Has it contributed usefully to knowledge?
      Did it answer your initial question?

Adapted from Pope & Mays (1995) by Owen (2002).

Declaration of interest

None.

References

Dhar, R. & O'Brien, A. (2001) Evidence based journal clubs and the Critical Review Paper: candidates' perspective. Psychiatric Bulletin, 25, 67-68.
Geddes, J. (1998) Evidence based practice: a practical approach. Psychiatric Bulletin, 22, 337-338.
Geddes, J. R. & Harrison, P. J. (1997) Closing the gap between research and practice. British Journal of Psychiatry, 171, 220-225.
Gilbody, S. (1996) Evidence based medicine: a new format for journal clubs. Psychiatric Bulletin, 20, 673-675.
Greenhalgh, T. (1997) How to Read a Paper. London: BMJ Publishing Group.
Greenhalgh, T. & Hurwitz, B. (1999) Narrative based medicine: why study narrative? BMJ, 318, 48-50.
Malterud, K. (2001) The art and science of clinical knowledge: evidence beyond measures and numbers. Lancet, 358, 397-400.
McMaster University: Department of Clinical Epidemiology and Biostatistics (1981) How to read clinical journals: 1. Why read them and how to read them critically? Canadian Medical Association Journal, 124, 555-558.
Murphy, E., Dingwall, R., Greatbatch, D., et al (1998) Qualitative research methods in health technology assessment: a review of the literature. Health Technology Assessment, 2, 167-198.
Owen, D., House, A. & Worrall, A. (1995) Research by trainees: a strategy to improve standards of education and supervision. Psychiatric Bulletin, 19, 337-340.
Pope, C. & Mays, N. (1999) Qualitative Research in Healthcare (2nd edn). London: British Medical Journal Books.
Rosser, W. W. (1999) Application of evidence from randomised controlled trials to general practice. Lancet, 353, 661-664.
Sackett, D. L., Strauss, S., Richardson, S., et al (1999) Evidence Based Medicine: How to Practice and Teach EBM (2nd edn). London: Churchill Livingstone.
Warner, J. P. & King, M. (1997) Evidence based medicine and the journal club: a cross-sectional survey of participants' views. Psychiatric Bulletin, 21, 532-534.
