Most of us who work in the field of evaluation keep in mind, and remind ourselves and others about, the validity and reliability of measurement and evaluation tools. When I design a study, I always consider how to triangulate the data collection and use more than one system to measure the subject in question. Accordingly, most of us will use several questions to measure the same indicator (and then run a reliability test), and check that the test and its indicators actually measure the topic we want to learn about. However, I never gave much thought to other research instruments that perform other types of measurement, such as the polygraph. What a polygraph is supposed to do is provide the researcher or authority with information that is generally considered more credible than just another statement or testimony given by the participant. Much research has been, and still is, done in this regard, and it is widely known that polygraphs are not particularly valid or reliable tools for assessing whether a participant is telling the truth (a not-so-credible way to assess credibility!). This raises several questions:
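The multi-item reliability test mentioned above is commonly Cronbach's alpha, which measures how consistently several questions capture the same underlying indicator. Below is a minimal sketch; the response data and the 5-point scale are invented purely for illustration.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each question
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 6 participants answering 3 questions
# intended to measure the same indicator (5-point scale).
scores = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
    [3, 4, 3],
    [1, 2, 1],
    [4, 4, 5],
])

alpha = cronbach_alpha(scores)
print(f"Cronbach's alpha = {alpha:.2f}")
```

An alpha around 0.7 or higher is conventionally read as acceptable internal consistency; the appeal of a statistic like this is precisely what the polygraph lacks: a standard, reproducible way to quantify how trustworthy the measurement itself is.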
(1) why is it so widely used, when it is known to be less reliable than the average person would expect (not to mention experts)?
(2) why has humanity not come up with a better, more reliable, more valid solution so far?
(3) what are the consequences of using such a tool on human rights, dignity, and justice?
(4) how can we improve the tool, suggest a better one, or at least suggest a way to triangulate and validate polygraph findings?
It appears these questions are a high priority these days, and there is a competition in the US focusing on “Credibility Assessment Standardized Evaluation (CASE)“. This Prize Challenge offers five prizes to teams and individuals who suggest fruitful tools to assess and standardize the evaluation process for credibility tools. In their words: “The CASE Challenge is … to develop credibility assessment evaluation methods that can be used to objectively evaluate both existing and future credibility assessment techniques/technologies”.
Registration is open now, and those who present winning solutions will be invited to Washington, DC in summer 2019. I am highly curious to learn what options we have for creating a better, standardized evaluation, especially one focused on intended future behaviour. A reliable, standardized solution could be replicated in other areas, such as program evaluation in education and social services.