Planning agencies should make the value of science clearer.
Interview | editors
March 1, 2023 | Don’t wait for one comprehensive model that can make the value of investments in science visible; such a model will not come. When calculating party platforms and policies, it is better to use all the scientific knowledge that is already available, however loose, says Mirjam van Praag, chair of the KNAW committee that produced an advisory report on the value of science. Anything is better than assuming a zero return on investment in science, which is what happens now.
If a political party proposes an investment in science in its platform, the CPB is likely to count that investment only as a cost item in its calculations. The computational models of the planning agencies cover only the short and medium term, while investments in science often pay off only in the long term. Evaluating programs and policies can and should be done differently; that is the main message of the new KNAW advisory report.
The report also argues that the value of science should not be expressed solely in economic terms. A better indicator is ‘broad prosperity’, a concept comprising eleven categories, including housing, knowledge and skills, security, and citizenship. It is therefore better to speak of the values of science than to insist on a single outcome measure, according to the committee that drafted the advice.
It doesn’t have to be perfect
The report conveys urgency. For example, the final sentence of the summary states that “the ambition of the planning agencies must be raised quickly and drastically,” because making the value of science visible is worth more than “waiting a few more years for meaningful results.”
That tangible urgency is no coincidence, Mirjam van Praag, VU board chair and chair of the KNAW committee, tells ScienceGuide. “Existing tools are not going to produce the desired results. The planning agencies are aware of this too. If we first have to develop and perfect a new approach, we will be busy for a while. So our message is that it does not have to be complete, and that not everything can be captured in one comprehensive model.” After all, anything is better than setting the return to zero, as is now implicitly done when investments in education and research are accounted for.
The report cites the example of an additional year of education, which generates an economic return of six to ten percent per year. “You can criticize that margin for being very wide, but the models we use now ignore that return entirely and set it to zero. I would rather simply start from the fact that the return is six to ten percent.”
No aspiration to one comprehensive model
The existing models of the planning agencies serve their purposes well, for example calculating whether expenditure and income are in balance. However, they cover only the short or medium term, which makes them unsuitable for calculating the long-term value of science, says the KNAW committee. The planning agencies are aware of this problem.
Setting up one large model that could do this, however, is not an option. “Economists and econometricians in the field of science rarely aspire to such a comprehensive model,” says Van Praag, an econometrician herself. “One comprehensive model that covers different outcomes and can correctly measure the long-term effects of different investments is simply asking too much. The director of the CPB fully agrees with us.”
Lots of qualitative knowledge about the value of science
The committee therefore calls for the systematic use of a wide range of tools, such as “sub-studies, scenario studies, literature studies, empirical facts, empirical studies, and meta-studies.” Existing studies will not always match the situation exactly, and their results can sometimes be imprecise. Still, this substantiated way of working should not be measured against the pursuit of a single, comprehensive model, Van Praag argues. After all, a universally usable model does not exist, and it never will.
“Just use what you already know,” is how Van Praag succinctly sums up the committee’s plea. “There is a lot we do not yet know quantitatively but do know qualitatively. Suppose you have qualitative knowledge about the impact of outdoor play on children’s development. If Party A’s platform invests in having elementary school children play outside for three hours each day and Party B’s platform does not, then you can say something about that on the basis of qualitative knowledge when you compare the programs.”
Many possible measures and policy options in the field of education and science have already been studied, says Van Praag. “If a party platform proposes something on these topics, we can often say something sensible about it. Moreover, working with micro-studies allows us to express outcomes not only in terms of GDP, but in terms of ‘broad prosperity’.” She would also welcome it if the planning agencies did this. “For example, a lot of research has already been done on the impact of education on health and equality of opportunity: non-economic outcome measures.”
Compare it with recognition and appreciation
Of course, one would initially have to work with tools that are not yet perfect. For example, the committee has already established that indicators of broad prosperity are measured differently in different scientific disciplines. It is also conceivable that the impact of a particular measure or investment has been studied, but not in the Dutch context.
Van Praag argues that planning agencies and scientists can at least get to work on this. “Instead of pretending that investment in science has no return, which is now de facto the case, we should try to translate such research from another country to the Dutch context and include it as information in the calculations of election platforms and policy plans.”
Because no single large model is used, not every statement based on qualitative data needs to be equally precise, Van Praag explains. Above all, the result should be a narrative about each party platform or policy choice. You can compare it with a recognition and appreciation program: we no longer want to evaluate scientists on the basis of a single quantitative indicator, but to look at the whole picture.
Planning agencies can already measure the value of science
If the values of science are defined more precisely, it may turn out that one discipline contributes more to certain indicators of broad prosperity than another. That could then lead to a political decision to fund some scientific disciplines less and others more. The advisory report does not address this scenario, however, and within the committee there is no fear of such an effect, says Van Praag. According to her, it is hard to imagine that such a distinction could even be made.
“We are looking at a somewhat higher level of abstraction; the values of science must first be understood better. Compared to the current situation, it would already be a real gain if we could say something meaningful about policy choices, or about the choices on education and science in election platforms.”
So things must get better in the short term, and that is possible, Van Praag knows. “The ‘Promising’ studies by the CPB, in which current scientific knowledge on certain topics is presented, are what we mean. So they can already do it! If they now do this periodically, there is a good basis. I know it is very difficult to measure the values of science, for example because it is not entirely clear which investment produced the return, or because too much time passes between the investment and the return. In any case, however, this can be done much better than we do now when calculating election platforms or evaluating policy options.”