Building a strategy

Research quality and Think Tanks: definition, responsibility and impact

By Gary Mena | 13/02/2015

Gary Mena is a researcher at ARU, a research institution based in Bolivia. The following blog was originally submitted to the TTI Exchange e-forum as a response to the discussions emerging there. Mena stated that he saw the upcoming exchange as an opportunity for the network to reach a consensus on specific standards of quality research (for both quantitative and qualitative approaches) that each member could use to assess the quality of its research (some sort of “Istanbul protocol”), and to strengthen the peer review mechanisms used so far.

– How do you define research quality? What are the key challenges in assessing research quality? What effective methodology or tools have you used for measuring and assessing research quality?

The broad set of answers in the TTI Exchange e-forum motivated me to search the literature for what has been said on the topic. First of all, quality research should not be confused with quality evidence: the sum of quality research results on a specific topic defines the quality of the evidence. Quality scientific research is understood as a process (a checklist) comprising several steps that attempt to ensure the credibility, applicability, consistency and neutrality of the results. The checklist defines a standard, agreed upon by experts, which may vary depending on whether one is using a quantitative or a qualitative approach. In the case of quantitative research, the criteria for assessing quality include: the internal validity of the results (context, sample size, power calculation), external validity (ecological generalizability, verified predicted relationships, etc.), reliability (consistency if replicated), replicability (can others reproduce the results?), and objectivity (absence of bias).

The key challenges are the lack of consensus on the specific standards that the research process must follow to guarantee its quality, and compliance with whatever standards are defined. Peer review is widely regarded as one of the main mechanisms for strengthening the quality of research, but even in purely academic settings peer reviewers fail to spot mistakes, partly because of the competitiveness of the research market. Furthermore, unless peer reviewers redo the entire piece of research, they will not be able to detect erroneous results. Finally, I have seen that replication (a fundamental step towards increasing the stock of quality evidence) is not encouraged as much as it should be. No one holds the absolute truth, and even eminences from Harvard, such as Reinhart and Rogoff*, make mistakes and yet end up influencing policy debates.

At ARU we try to strengthen the quality of our research at different stages of the research process. For example, we have established reproducibility protocols and encourage every researcher to follow them. A preliminary set of results is presented to the public in what we call “Applied Research Workshops”. These presentations help us take into account not only the comments of national experts but also those of general audiences (NGOs, students, etc.).

– Whose responsibility is it to ensure that the product of your research is taken up by policy makers? Advocacy groups? Policy makers? Think tanks? Where does the responsibility of think tanks lie in ensuring impact?

The responsibility for ensuring that the product of research is taken up by policy makers goes beyond a single institution. Quality research is assured mainly within the academic circle; therefore, academia is the main audience for research results. Although this may sound counterintuitive at first, it is not, because the results of a single paper can hardly ever be taken as a conclusive piece of evidence. Only once there is a reliable body of evidence on a topic does it make sense to engage in dissemination activities.

However, in developing countries the political context is often so complex and volatile (for example, frequent replacement of the authorities in charge of a given topic) that it simply is not worth devoting a significant share of scarce resources to influencing policy directly. The long-term benefits of investing in young researchers and in data collection are equally or more important.

– How much stakeholder consultation is appropriate at different stages of the research cycle and how can it best be facilitated to achieve the highest level of impact?

In the think tank context, “stakeholders” is a broad concept that includes, among others, donors, policy makers and academia. Consulting donors during the research cycle can be extremely fruitful, as long as donors are willing to support every stage of the cycle, such as training young researchers or increasing the stock of data. Unfortunately, donors want to see results as soon as possible, while the formation of researchers is a long-term investment; it nevertheless helps achieve the highest level of impact, because the institution signals its commitment to scientific research. Policy makers should be consulted at the end of the research cycle, once the results have been interpreted and the conclusions drawn, because consulting them earlier may introduce perverse incentives. For example, current and prospective policy makers may attempt to finance research only on topics related to their own interests, compromising the credibility of the institution. Academia should be involved at the stage of validating the results, through peer review and replication; academics greatly help achieve the highest level of impact by increasing the credibility of the results.

To conclude, if think tanks are to reliably inform policy debates, the TTI Exchange offers an excellent opportunity to discuss these issues formally and to lay down, as a network, the specific standards needed to assure the quality of their research products, which will ultimately increase the credibility of the participating institutions.

* See “How a student took on eminent economists on debt issue – and won” or “Holy Coding Error, Batman”.