Answer to "How do we assure quality in programmatic assessment?"

 

Editor: Adrian Freeman

 

There are both similarities and differences in quality assurance between traditional endpoint assessments and programmatic assessment. Much depends on how the programme is set up.

 

For example, there may be significant components within the programme that are “numbers” based, e.g. testing knowledge through progress testing. Each individual test and its items can be subject to the same quantitative (psychometric) quality measures as, say, an end-of-block single best answer test.
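As a minimal sketch of what such psychometric monitoring might look like, the following Python function computes three classical test-theory statistics commonly reported in test quality review: item difficulty, corrected item-total discrimination, and Cronbach's alpha. The function name and the sample data are illustrative assumptions, not part of any AMEE specification.

import numpy as np

def item_analysis(responses):
    """Classical test-theory statistics for a binary item-response matrix.

    responses: 2-D array, rows = candidates, columns = items,
    entries 1 (correct) or 0 (incorrect).
    """
    responses = np.asarray(responses, dtype=float)
    n_candidates, n_items = responses.shape
    totals = responses.sum(axis=1)

    # Item difficulty: proportion of candidates answering each item correctly.
    difficulty = responses.mean(axis=0)

    # Item discrimination: point-biserial correlation between each item and
    # the total score with that item removed (corrected item-total correlation).
    discrimination = np.array([
        np.corrcoef(responses[:, i], totals - responses[:, i])[0, 1]
        for i in range(n_items)
    ])

    # Cronbach's alpha: internal-consistency reliability of the whole test.
    item_vars = responses.var(axis=0, ddof=1)
    total_var = totals.var(ddof=1)
    alpha = (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)

    return difficulty, discrimination, alpha

# Illustrative data: 5 candidates, 4 items.
resp = [[1, 1, 0, 1],
        [1, 0, 0, 1],
        [0, 1, 1, 0],
        [1, 1, 1, 1],
        [0, 0, 0, 0]]
diff, disc, alpha = item_analysis(resp)
print("Difficulty:", diff)
print("Discrimination:", disc)
print("Cronbach's alpha:", round(alpha, 3))

In routine monitoring, items with very low difficulty or negative discrimination would be flagged for review, just as they would be for a stand-alone end-of-block test.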

 

There may be individual components testing motor skills that could use the same quality assurance methods as traditional competency assessments, such as Entrustable Professional Activities (EPAs) or OSCE stations.

 

However, one of the defining concepts of programmatic assessment is the use of multiple judgements, each of which may be low stakes; it is the overall view across all the low-stakes assessments that leads to the final judgement. That final review process creates a need for different quality perspectives.

 

Programmatic assessments lend themselves clearly to Kane's perspective on validity, with its focus on the processes of making final inferences from multiple small judgements and of combining quantitative and qualitative evidence. The low-stakes individual judgements may require less intensive training for those judges/faculty. However, the judges who bring all the evidence together to make the final inferences will need careful training, such as benchmarking.

 

A vital aspect of quality assurance is the collecting, storing and displaying of data. Systems should be coordinated across multiple judges, and there should be regular structured reviews by the administration/faculty to ensure that data are being recorded and that there will be no significant gaps at the points of progression decisions. Please see the next question regarding IT systems.
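Ahead of that discussion, the sketch below shows one way such a gap check might be automated. The evidence requirements, domain names and record fields are hypothetical examples chosen for illustration; a real programme would define its own.

from collections import Counter

# Hypothetical minimum evidence requirements per competency domain
# (names and thresholds are illustrative, not an AMEE standard).
REQUIRED = {"knowledge": 6, "clinical_skills": 4, "professionalism": 3}

def completeness_report(records):
    """Flag evidence gaps for one learner before a progression decision.

    records: list of dicts, one per recorded low-stakes assessment,
    each with a 'domain' key and an 'assessor' key (assumed field names).
    """
    counts = Counter(r["domain"] for r in records)
    assessors = {r["assessor"] for r in records}
    gaps = {d: need - counts.get(d, 0)
            for d, need in REQUIRED.items() if counts.get(d, 0) < need}
    return {
        "datapoints": dict(counts),
        "distinct_assessors": len(assessors),  # breadth of judgement sampled
        "gaps": gaps,                          # domains still short of evidence
        "ready_for_review": not gaps,
    }

# Example: a learner part-way through the year.
records = [
    {"domain": "knowledge", "assessor": "AF"},
    {"domain": "knowledge", "assessor": "BK"},
    {"domain": "clinical_skills", "assessor": "AF"},
    {"domain": "professionalism", "assessor": "CL"},
]
print(completeness_report(records))

Running a report like this at each structured review would allow faculty to prompt learners and assessors to fill gaps well before the progression decision point.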

 

 

Cook DA, Brydges R, Ginsburg S, Hatala R. A contemporary approach to validity arguments: a practical guide to Kane's framework. Med Educ. 2015;49(6):560–75.

 

Schuwirth LWT, van der Vleuten CPM. Programmatic assessment and Kane's validity perspective. Med Educ. 2012;46(1):38–48.


 
