Answer to "How may I evaluate IPE?"

Evaluation is the process of finding the value of something. In education, evaluation is used predominantly to examine whether learning has been achieved. For IPE, we may be asking whether learners have achieved the defined interprofessional learning outcomes or competencies, either through self-reporting (self-assessment) via evaluation forms or through more objective assessments. Students are known to complain about the amount of evaluation they are required to complete and, in particular, about the perceived lack of change arising from their feedback.

The Kirkpatrick model, first developed in 1958, is a specific outcomes-based framework used to evaluate training programs (originally for sales and manufacturing businesses). Since its adaptation into six categories/levels for the evaluation of IPE by Barr et al. (1999), it has been used extensively, either implicitly or explicitly, around the world. More recently, the framework has been criticised for its focus on short-term outcomes and its lack of recognition of the complexity of health care (see, for example, Yardley & Dornan, 2012). However, it is still employed, particularly to answer the question of whether IPE is effective – either for a specific localised activity or a larger-scale multi-site intervention.

The levels are: reaction (learner satisfaction); change in attitudes; change in knowledge; change in performance; organisational outcomes; and patient outcomes.

While these outcomes are important for institutional data collection, they do not indicate what works, or why and how. Therefore, educators are turning to alternative approaches to evaluation, including realist evaluation (Pawson & Tilley, 1997). When applied to health professional education, this approach aims to answer the following questions: what kinds of educational interventions will tend to work, for what kinds of learners, in what kinds of contexts, to what degree, and what explains such patterns? (Wong, Greenhalgh, Westhorp & Pawson, 2012).

References

BANDALI, K., CRAIG, R. and ZIV, A. (2012). Innovations in applied health: Evaluating a simulation-enhanced, interprofessional curriculum. Medical Teacher, 34(2), e176-e184.

BARR, H., HAMMICK, M., KOPPEL, I., and REEVES, S. (1999). Evaluating interprofessional education: Two systematic reviews for health and social care. British Educational Research Journal, 25(4), 533-543.

ERICSON, A., LÖFGREN, S., BOLINDER, G., REEVES, S., KITTO, S. and MASIELLO, I. (2017). Interprofessional education in a student-led emergency department: A realist evaluation. Journal of Interprofessional Care, 31, 199-206.

KIRKPATRICK, D. and KIRKPATRICK, J. (2006). Evaluating Training Programs: The Four Levels. San Francisco: Berrett-Koehler.

PAWSON, R. and TILLEY, N. (1997). Realistic Evaluation. London: Sage.

THISTLETHWAITE, J.E., KUMAR, K., MORAN, M., SAUNDERS, R. and CARR, S. (2015). An exploratory review of interprofessional education evaluations.  Journal of Interprofessional Care, 29, 292-297.

WONG, G., GREENHALGH, T., WESTHORP, G. and PAWSON, R. (2012).  Realist methods in medical education research: what are they and what can they contribute? Medical Education, 46, 89-96.

YARDLEY, S. and DORNAN, T.  (2012).  Kirkpatrick’s levels and education ‘evidence’.  Medical Education, 46, 97-106.

© 2019 AMEE
