Answer to "How can I evaluate EDI training?"

Training is often not assessed comprehensively following implementation, which can set back EDI programs. First, and perhaps most obviously, it is difficult for administrators to understand the impact of training without systematic evaluation. Without this key information, EDI initiatives will continue to suffer from misperceptions: that they are not rigorously developed; that they are only lip service from administrators; or that there is ambiguity surrounding their effectiveness. Moreover, it is difficult to build a long-term EDI program without understanding the impact of training. An effective program will encourage further educational planning, and even a weak one can provide important feedback to trainers, sparking further research and development.


Importantly, evaluation must be conducted in a systematic and rigorous manner. Generally, training evaluation can assess one or more dimensions of outcomes: trainee reactions, learning, on-the-job behavior, and organizational results (Kirkpatrick, 1959; Alliger and Janak, 1989). Much diversity training relies on reaction surveys (e.g., “smile sheets”), as they are one of the fastest, easiest, and, therefore, most popular ways to ascertain a training event’s success. However, a trainee’s satisfaction alone may not provide a complete picture of training success. In the case of EDI, many individuals may not enjoy being challenged or faced with difficult and controversial topics. Moreover, satisfaction is not necessarily related to other types of outcomes (Kirkpatrick, 1979). Therefore, educational programs can, and should, be evaluated by examining many facets of outcomes (Sitzmann and Weinhardt, 2019), including learning and on-the-job behavior.


To this end, administrators may triangulate evaluation using several methods.

  • Self-report surveys can touch upon reactions, while more knowledge-based tests can capture learning.

  • Behaviorally-based simulations may include objective structured clinical examinations (OSCEs).

  • Observations in the field can help assess the impact of training for those providers who are already in practice.

  • Moving beyond educational episodes, administrators may also collect on-the-job metrics, such as health outcomes from patients treated by trained providers.

  • Organizational results represent the highest level of evaluation: training effectiveness is assessed by its impact on the organization as a whole, for example, whether it enhances patient enrollment and satisfaction ratings.
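The levels listed above can be summarized in a small lookup structure. The following Python sketch is purely illustrative (the level names follow Kirkpatrick's model; the example measures are the ones discussed above, and all identifiers are hypothetical):

```python
# Hypothetical mapping of Kirkpatrick's four evaluation levels to the
# example measures discussed above (all names are illustrative only).
KIRKPATRICK_LEVELS = {
    1: ("reactions", ["post-session satisfaction survey"]),
    2: ("learning", ["knowledge-based test"]),
    3: ("behavior", ["OSCE simulation", "field observation"]),
    4: ("results", ["patient health outcomes", "patient satisfaction ratings"]),
}

def measures_for(level):
    """Return the level name and example measures for a given level (1-4)."""
    name, measures = KIRKPATRICK_LEVELS[level]
    return name, measures

print(measures_for(3))  # ('behavior', ['OSCE simulation', 'field observation'])
```

An administrator planning an evaluation could walk such a table to confirm that at least one measure is collected at each level, rather than stopping at reaction surveys.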

Regardless of the type of evaluation, trainers should attempt to collect the same information pre- and post-intervention, so that any potential improvements can be measured directly against the baseline. Curcio et al. (2012, 2013) detail the process they followed for lawyers using background work from medical education. The questionnaires themselves have proved useful tools for engaging students.
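As a minimal sketch of this pre/post logic, the hypothetical Python snippet below computes per-trainee gain scores and a paired t statistic from made-up knowledge-test scores. All data and function names are illustrative assumptions; a real evaluation would use a dedicated statistics package and check the test's assumptions.

```python
from math import sqrt
from statistics import mean, stdev

def paired_gains(pre, post):
    """Per-trainee gain scores (post minus pre) on the same knowledge test."""
    return [b - a for a, b in zip(pre, post)]

def paired_t_statistic(pre, post):
    """t statistic for a paired-samples t-test on matched pre/post scores."""
    gains = paired_gains(pre, post)
    return mean(gains) / (stdev(gains) / sqrt(len(gains)))

# Hypothetical knowledge-test scores for six trainees
pre  = [55, 60, 48, 70, 62, 58]
post = [68, 66, 55, 78, 70, 65]
print(round(mean(paired_gains(pre, post)), 2))   # mean improvement → 8.17
print(round(paired_t_statistic(pre, post), 2))   # → 8.06
```

The key design point is the pairing: because each trainee serves as their own baseline, the comparison is on within-person change rather than on two unrelated group averages.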

In a systematic review of 34 studies evaluating cultural competence training, Beach et al. (2005) found reliable evidence that EDI trainings improved knowledge, skills, and attitudes among health professionals. Such training also positively impacted patient satisfaction, although scant research has explored ties to patient treatment adherence or health outcomes. Notably, studies should use validated measures whenever possible. A review by Gozu et al. (2007) of self-administered EDI instruments found that most studies used non-validated tools, and that only six of 45 articles used measures with established reliability and validity. Thus, it is not enough simply to measure EDI and establish these important relationships; it must be done using appropriate and rigorous methods.
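Internal-consistency reliability, one component of the instrument validation that Gozu et al. (2007) highlight, is commonly reported as Cronbach's alpha. The sketch below is a hypothetical illustration using made-up Likert responses; in practice a validated psychometrics package would be used.

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a survey instrument.

    items: a list of k lists, each holding one item's scores across the
    same n respondents. Uses population variances throughout, following
    the standard formula: alpha = k/(k-1) * (1 - sum(item var)/var(total)).
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]   # per-respondent total
    item_var = sum(pvariance(col) for col in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Hypothetical 5-point Likert responses: 3 items x 5 respondents
items = [
    [4, 3, 5, 2, 4],
    [4, 2, 5, 3, 4],
    [3, 3, 4, 2, 5],
]
print(round(cronbach_alpha(items), 2))  # → 0.87
```

Alpha alone does not establish validity (an internally consistent scale can still measure the wrong construct), which is why the reviews above look for evidence of both reliability and validity.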

Evidence regarding evaluation continues to be limited by a lack of conceptual clarity about what diversity training should be, weak methodology, and difficulty in identifying the outcomes that can or should be measured (and how to do this). Evidence on the impact of such training on clinical practice and outcomes is even sparser. We may benefit from exploring the literature on patient-centred outcomes research, as there is some evidence that taking a patient-centred approach can improve outcomes. Given the suggestion that diversity training is about seeing patients as whole people who are more than an illness, there may also be benefit in exploring research on person-centred approaches and outcomes. This may require a mixed-methods research approach to better understand the end-user experience of health and education.




Alliger, G. M. and Janak, E. A. (1989) ‘Kirkpatrick’s Levels of Training Criteria: Thirty Years Later’, Personnel Psychology, 42(2), pp. 331–342.

Beach, M. C., Price, E. G., Gary, T. L., Robinson, K. A., et al. (2005) ‘Cultural Competency: A Systematic Review of Health Care Provider Educational Interventions’, Medical Care, 43(4), pp. 356–373.

Curcio, A. A., Ward, T. M. and Dogra, N. (2012) ‘Educating Culturally Sensible Lawyers: A Study of Student Attitudes about the Role Culture Plays in the Lawyering Process’, University of Western Sydney Law Review, 16, pp. 98–126.

Curcio, A., Ward, T. and Dogra, N. (2013) ‘Using Existing Frameworks to Develop Ways to Teach and Measure Law Students’ Cultural Competence’, in The Legal Profession: Education and Ethics in Practice. Athens, Greece: ATINER, pp. 21–34.

Gozu, A., Beach, M. C., Price, E. G., Gary, T. L., et al. (2007) ‘Self-Administered Instruments to Measure Cultural Competence of Health Professionals: A Systematic Review’, Teaching and Learning in Medicine, 19(2), pp. 180–190.

Kirkpatrick, D. L. (1959) ‘Techniques for evaluating training programs’, Journal of the American Society of Training Directors, 13, pp. 21–26.

Kirkpatrick, D. L. (1979) ‘Techniques for evaluating training programs’, Training and Development Journal.

Sitzmann, T. and Weinhardt, J. M. (2019) ‘Approaching evaluation from a multilevel perspective: A comprehensive analysis of the indicators of training effectiveness’, Human Resource Management Review. (Advancing Training for the 21st Century), 29(2), pp. 253–269.

© 2019 AMEE