Answer to "How do you make high stake decisions?"

 

Editor: Andrea Oudkerk Pool

 

Examples of high-stakes decisions are deciding whether a learner can gain a qualification or progress to the next stage of a course. Such decisions need to be robust and therefore require a large number of data points that provide rich information about the learner's strengths and weaknesses. These quantitative and qualitative data points have to be sampled across multiple contexts, assessors, and methods. Aggregating them requires expert judgment. If the system works well, the outcome will not come as a surprise to the learner.

 

Because of the high-stakes nature of these decisions, it is important to take multiple measures to improve their quality. Examples of such measures are:

  • Appointment of an assessment committee: although expert assessors reach similar final (pass/fail) decisions, their information-processing approaches and the reasoning behind their judgments differ. High-stakes decisions should therefore be made by a committee of trained assessors who discuss their evaluations. The committee should not limit its decision to a pass or fail, but should support it with a written justification.

  • Avoiding conflicts of interest: expert assessors should not be involved in the learning process of the individual learner they judge.

  • Using narrative standards or milestones: narrative descriptions of expected performance give committee members a shared frame of reference for judging the data.

  • Training committee members: the expert assessors should receive training on how to interpret the standards.

  • Deliberation proportional to the clarity of information: decisions about most learners are straightforward and do not require much time, whereas borderline performance calls for more deliberation. If the available data do not support a decision, more data points should be collected.

  • Learner and mentor input: the mentor should not be responsible for the final pass/fail decision, as that decision could be biased and could damage the mentor's relationship with the learner. The mentor can, however, provide input for the decision, for example by attesting to the authenticity of the portfolio or by writing a recommendation to the committee that the learner annotates.

  • Appeals procedures: learners should be able to appeal the decision.

 

 

