On the credibility of credibility tools

Most of us who work in evaluation keep the validity and reliability of our measurement and evaluation tools in mind, and remind ourselves and others of them. When I design a study, I always think about how to triangulate data collection and use more than one method to measure the subject in question. Accordingly, most of us use several questions to measure the same indicator (and then conduct a reliability test), and make sure the test and indicators actually measure the topic we want to learn about. However, I never gave much thought to other tools and research instruments that perform other types of measurement, e.g. the polygraph. What a polygraph is meant to do is provide the researcher or authority with information that is generally considered more credible than just another statement or testimony given by the participant. Much research has been and is being done in this regard, and it is widely known that polygraphs are not particularly credible or reliable tools for assessing whether a participant is telling the truth (a not-so-credible way to assess credibility!). This raises several questions:
(1) why is it so widely used, when it is known to be less reliable than the average person would expect (not to mention experts)?
(2) why has humanity not yet come up with a better, more reliable, more valid solution?
(3) what are the consequences of using such a tool on human rights, dignity, and justice?
(4) how can we improve the tool, suggest a better one, or at least suggest a tool to triangulate and validate polygraph findings?
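The reliability test mentioned above, run after asking several questions that measure the same indicator, is commonly an internal-consistency check such as Cronbach's alpha. A minimal sketch, with invented sample responses:

```python
# Cronbach's alpha for a set of items intended to measure one indicator.
# Pure-Python sketch; the response data below are invented for illustration.
from statistics import variance  # sample variance (ddof = 1)

# Rows = respondents, columns = the items (questions) for one indicator.
responses = [
    [1, 1, 2],
    [2, 2, 2],
    [3, 3, 4],
    [4, 4, 5],
]

def cronbach_alpha(rows):
    k = len(rows[0])                  # number of items
    items = list(zip(*rows))          # transpose: one tuple per item
    item_vars = sum(variance(col) for col in items)
    total_var = variance([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - item_vars / total_var)

alpha = cronbach_alpha(responses)
print(f"alpha = {alpha:.2f}")  # values above ~0.7 are usually read as acceptable consistency
```

Here the three items move together across respondents, so alpha comes out high; items that disagree would pull it down.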

It appears these questions are a high priority these days: there is a competition in the US focusing on “Credibility Assessment Standardized Evaluation (CASE)”. This Prize Challenge offers five prizes to teams and individuals who suggest fruitful tools to assess and standardize the evaluation process for credibility tools. In their words: “The CASE Challenge is … to develop credibility assessment evaluation methods that can be used to objectively evaluate both existing and future credibility assessment techniques/technologies”.
Registration is open now, and those who present winning solutions will be invited to Washington, DC in summer 2019. I am highly curious to learn what options we have for creating a better, standardized evaluation, especially one focused on intended future behaviour. A reliable, standardized solution could be replicated in other areas such as program evaluation in education and social services.

Grant Allocation and Management

In small family foundations, the prevalent question is “how can we do the best with our investment?”, much like the question asked about investments in stocks and other assets. The big question of how to achieve the most impact with the funds can be answered quite well when several aspects are taken into account:

(1) applicant information: what information is collected from applicants, and how much of it is usable. In other words, the secret of data collection in the first stage is keeping the collected data lean. Ensure the information requested is highly relevant to the decision-making process.

(2) clear guidelines: when a foundation receives too many irrelevant applications and letters, that raises questions about the clarity and accuracy of its application guidelines. Hence, ensure the guidelines are clear, well presented, and accessible on the website, and that they state briefly and clearly who the typical eligible applicants are. Also provide a clear timeline for application rounds and decision announcements.

(3) decent evaluation: the most important, yet frequently neglected, aspect is evaluating and understanding the contribution's impact. It is easy to donate to a cause close to one's heart. However, grant making is not only about giving, but also about ensuring the investment is impactful and fruitful. To this end, the foundation needs to create simple rubrics or other evaluation tools that allow continuous, clear reporting (quarterly and annually). The evaluation tool should also make it possible to compare funded projects, to assist decision making for future donations. These tools do not have to be sophisticated; they should be smart enough to compare and extract information based on the criteria in the guidelines and, most importantly, to measure how well each project matches and advances the foundation's vision.
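A simple rubric of the kind described above can be sketched as a weighted scorecard. The criteria names, weights, and ratings below are illustrative assumptions, not a prescribed standard:

```python
# Minimal weighted-rubric scorecard for comparing funded projects.
# Criteria, weights, and scores are hypothetical examples.

CRITERIA_WEIGHTS = {          # weights should sum to 1.0
    "fit_with_vision": 0.4,   # match with the foundation's vision weighted highest
    "outcomes_achieved": 0.3,
    "reporting_quality": 0.2,
    "cost_effectiveness": 0.1,
}

def rubric_score(ratings):
    """Weighted average of 1-5 ratings, one per criterion."""
    return sum(CRITERIA_WEIGHTS[c] * r for c, r in ratings.items())

projects = {
    "Literacy program": {"fit_with_vision": 5, "outcomes_achieved": 4,
                         "reporting_quality": 3, "cost_effectiveness": 4},
    "Food bank grant":  {"fit_with_vision": 3, "outcomes_achieved": 5,
                         "reporting_quality": 4, "cost_effectiveness": 5},
}

# Rank projects to support the next funding decision.
ranking = sorted(projects, key=lambda p: rubric_score(projects[p]), reverse=True)
for name in ranking:
    print(f"{name}: {rubric_score(projects[name]):.2f}")
```

Because the same criteria and weights are applied to every project, the scores are directly comparable across the portfolio, quarter after quarter.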

(4) organized decision making: when all the above are in place, the foundation's management team and board can make decisions quite easily and effectively, whether deciding who should be included in the next round of applicants (sections 1 and 2: guidelines and applicant information) or assessing the impact of the invested funds over the year and how the projects compare (section 3). The management team arranges all the information for the board, and the board makes efficient, wise, evidence-based decisions that put the funds to the most significant use in achieving the foundation's vision.

 

For information about our services please click here.