Critics see the messages on the skybridge as an expression of the university's return-on-investment thinking, or as a marketing ploy to show the outside world, above all, that Erasmus University is doing a good job. In fact, the university's strategy document nowhere defines exactly what impact entails. So the questions remain open: what exactly is impact? What is new about this ambition for Erasmus University? And how do you measure whether science has impact?
1. How old or new is 'impact'?
The use of the word "impact" in connection with science is fairly new, but the idea behind it, that science should try to benefit society, has been around for some time, says Jorrit Smit, who received his PhD on the subject from Leiden. He is part of a working group examining how Erasmus University can best assess its 'impact'. In essence, impact is about the "benefit" or "value" of science to society, Smit explains: how does your research relate to social issues outside the university?
Smit believes this debate is as old as modern science itself, though it has changed in recent decades. Universities have always been connected to their surroundings. In the nineteenth century, universities mainly trained "good citizens" who served society, for example as civil servants or administrators. And in the 1970s, scholars were already discussing their social significance.
In the 1980s, the legislature added an article to the Higher Education and Scientific Research Act: alongside education and research, the transfer of knowledge to society became an explicit task of higher education. This led to a discussion about valorization (more on that later). In recent years, outgoing minister Ingrid van Engelshoven (Science, D66) has emphasized that making an impact must be one of the three pillars of science.
2. If the idea is not new, why is it displayed so prominently on a skybridge?
Because in recent decades the focus of science has been more on publishing in prestigious scientific journals than on solving social issues, says impact researcher Smit. From the 1990s onwards, academics were judged primarily on their numbers: the number of publications, high citation scores, and the status of the journals in which they published. "Universities implicitly selected the people who played this game well," Smit says. "Scholars who build relationships with the outside world are undervalued in this system. If for thirty years you select employees who are especially good at publishing, academics with other talents fall by the wayside."
Smit thus also offers an explanation for universities' recent interest in profiling themselves on social impact. "Now that social impact matters again, you may need to draw extra attention to it, in order to attract people who are good at it again."
This extra attention to social impact also stems from the fact that, over the past 25 years, the government has made universities more accountable for how they spend the public money they receive.
Wilfred Mijnhardt, Policy Director at the Rotterdam School of Management and co-author of the strategy: "The university has moved into a more neoliberal context, where accountability for public investments is the norm, rather than self-evident access to funding for science."
3. Where did all those other buzzwords go? And how do they differ from impact?
The use of the word impact may be fairly new, but it is strongly reminiscent of the term valorization, which was introduced in science about ten years ago.
A term, by the way, that did not make everyone happy. Vincent Icke, professor of theoretical astronomy, said as much in 2013 on De Wereld Draait Door, in an item about 'itch words'. Valorization is the creation of value from science, but the focus soon narrowed to economic value alone. Moreover, Icke took it for granted that academics create value. Why was a special word for it needed?
Valorization became popular around the time that VVD leader Mark Rutte became prime minister in 2010. Under the liberal governing philosophy, science above all had to be demonstrably profitable, and science should be guided more by questions from society, especially the business community. "Knowledge, skills, cash register" was one of the controversial statements on the subject by Halbe Zijlstra (VVD), the state secretary then in charge of science policy.
"Valorization lost its popularity because of the focus on economic gain," says valorization researcher Linda van de Burgwal of VU Amsterdam. "The impression arose that valorization is only about money." It thus became a dirty word. Wrongly so, according to Van de Burgwal: "It can also be value creation for science itself, or for the general public or professionals in a particular field." But the damage to the term's image was already done. According to Smit, this is an important reason why Van Engelshoven replaced it with the word "impact".
But the experts themselves also disagreed about the meaning of the term valorization. According to Mijnhardt, valorization implies a linear model of knowledge production: a scientist invents something and brings it into society, where it yields a return, whereas knowledge production in fact often arises in interaction with society, that is, not linearly. He therefore calls valorization an extremely limited term.
Van de Burgwal, in turn, says that this is (again) a matter of image, and that valorization is precisely meant to be the process of producing knowledge in interaction with society. Smit believes that 'impact', too, still suggests the linear model, as if scientists from outside society strike a blow into it, like a dent in a pack of butter. "The impact of a collision, as it were. Valorization, in any case, refers to a different kind of process."
Finally, the disappearance of the term valorization also has to do with the origin and meaning of the word, as Van de Burgwal sees it. "It comes from Flemish and French science and cannot really be translated into English: valorisation is a concept from Karl Marx, and it does not cover what we mean. In the Anglo-Saxon countries, scientists used 'impact' for roughly the same idea. Since English is the lingua franca of science, we now use the word impact."
4. How do you measure impact?
So the term is settled: impact. But how do you measure whether science has impact? Van de Burgwal: "That is primarily a question from politics and management. They need numbers to make it easier to convey the scale of the impact to the outside world."
Measuring is difficult when the definition of impact is not standardized. Impact means something different for every faculty, program, and department, policy director Mijnhardt argues. While RSM can, for example, help companies and organizations with business models for the energy transition, Erasmus School of Law can help governments and organizations develop sound regulation and contractual relationships. "We deliberately left the definition of impact open at the institutional level, because it differs somewhat per domain," Mijnhardt says.
Smit sees methodological problems looming when you start measuring. "It takes a long time before you know your impact, perhaps twenty years. Perhaps your research will be forgotten and rediscovered by others after forty years. And suppose there is a social change in the area you are studying: you can rarely say to what extent it is due to your research."
But there are solutions. A common way to map research impact is to write narratives. Simply put, a narrative is a written story about what you do as a scientist, how that turns out to be socially relevant, and what relationships and interactions you have with societal parties. "Because reality is so complex, a description in narrative form works best," Smit says. "You then describe what works in your daily practice, but also what does not work and what could be improved. Ideally, you also talk to societal stakeholders to hear their feedback." Mijnhardt: "You can support that with numbers, but the story has to be leading."
Smit: "The Netherlands Organization for Scientific Research, for example, works with narrative CVs. On a single A4 page, applicants describe what they have achieved, what their background is, and how a particular grant could help them." The problem, says Smit, is that the inclination to be evaluated on quantitative indicators remains. "Everyone is used to it. Partly for that reason, there is still a lot of discussion and uncertainty about the switch to these narrative CVs. Diversity offices also point to the perils of narratives, as implicit biases can play a larger role in such assessments. So we must also teach managers how to handle other forms of assessment and evaluation."
5. Burnout is already rife in science. Won't we finish scientists off completely if they also have to make an impact?
So you write a narrative in which you try to capture your impact, but in your performance review you are still judged on the number of publications and citations. Isn't making an impact just another task in scientists' already busy schedules? Shouldn't evaluation be done differently?
Smit sees much criticism of the current method of evaluation, because it leads to perverse incentives. "When publishing is decisive, academics sometimes develop questionable strategies to publish as much as possible." An example is salami slicing: instead of presenting your research results in one publication, you spread them in slices across multiple publications. That can come at the expense of content. "So if you start focusing more on impact, you have to reduce the importance of citations and publication counts."
According to Smit, we should not fall into the trap of measurable indicators again when assessing impact. "You should avoid building a new incentive system around impact with the same kind of quantifiable metrics: how often do you appear in the media, how often are you cited on social media? That says little about the actual value of the research to society."
Mijnhardt: "Ultimately, we have to move toward a situation where scientists are allowed to specialize more in their careers. We still often demand that scientists be jacks-of-all-trades, that they have every quality: good at teaching, good at research, and good at societal engagement. If we accept that there can be diversity in that, we will make more progress."