Stefanie Schuerz, Barbara Kieslinger, Katja Mayer and Teresa Schäfer all work at the Zentrum für Soziale Innovation GmbH/Centre for Social Innovation (ZSI) in Austria which has been at the cutting edge of Citizen Science in Europe for many years. Recently they have been focusing much of their research and development work on the topic of Participatory Evaluation and Impact Assessment in Citizen Science and so we thought it would be interesting to hear more from them on the subject.
Sally Reynolds: Can you describe in your experience how participants are usually involved in the evaluation of citizen science projects?
ZSI: A lot of times, academic scientists’ idea of participation starts and ends with people acting as data providers or collectors. So they either fill out a survey or take part in interviews or focus groups and provide their inputs in that way, or they go out into the world to collect data for a science project they have little to no say in. This is also true for evaluation. So participants are engaged by evaluators who might be internal or external to a project, and they are for instance asked to fill out a questionnaire or give an interview or are asked to provide their thoughts and experiences with the project and its outputs in some other way.
Less often, they might also be engaged as evaluators who work themselves as data collectors to scrutinise a project and its outcomes, employing methodologies and criteria pre-defined for them by academic scientists and other professionals in the academic system. In any case, they are usually not engaged as citizen experts who may have valuable insights on relevant criteria, values and expectations to look at when evaluating a research project.
S.R.: What about when it comes to assessing the impact of a citizen science project, how is this usually carried out?
ZSI: Evaluation in citizen science projects is usually focused on impact evaluation, so it is done near the end of a project and mainly concerns itself with project goals and whether and how they have been achieved. Over the last 10-15 years there has been increasing pressure on scientific endeavours to put greater emphasis on how they may achieve benefits beyond the scientific system, such as social, environmental, economic or health impacts (exemplified for instance by the Sustainable Development Goals).
But still, the focus is usually on impacts that are quantifiable, such as how many people were engaged as participants, how many data points they created, and how many of them remained active throughout the entire project or in the field after the project ended. Less attention is paid to what is called formative evaluation, which looks at the process and the feasibility of a project.
Such an evaluation is usually undertaken during a project's runtime to adjust its processes and methodologies for better capacity building and target group alignment, and to facilitate collaboration and synergies. However, this all depends on the type of project conducted and its research design. For instance, a conventional Citizen Science project situated in the natural sciences might be organised differently from a Citizen Social Science project, because the latter is structured around a social concern and thus implements a social scientific research cycle that might be more responsive to the different needs arising throughout a research process.
S.R.: Are there limitations in the typical evaluation and impact assessment practices in citizen science projects, and if so, how would you describe them?
ZSI: Of course every approach to evaluation has its benefits and limitations, depending on what is taken into focus. According to the logic model, for instance, an evaluation may be formative (process-based) and/or summative (outcome-based). The first is done continuously during the runtime of a project and may be used to improve a process while it is still in progress. This makes it especially valuable in the context of adaptive project management, for example if you have a lot of uncertainties going into a project, or in times of unexpected crises as we all have come to know during Covid-19.
However, it does not scrutinise the impact of a project, which is often what counts most for external stakeholders, both research funders and beneficiaries. For this, you would need a summative evaluation, which provides evidence for change triggered by an intervention and is typically done at the end of a project or programme. In citizen science projects, neither of these approaches typically defines project goals and formulates impact visions together with participants, which means the indicators of success are tailored primarily to the logic of the academic system, the expectations of research funders, and common practices of impact evaluation. This also means you might miss goals and potential impacts that are more relevant to the participants of a project.
Defining project goals with participants may also lead to more ownership and identification with a project, as well as empower potentially marginalised populations in the process.
Another aspect that comes into play when you engage actors in the field as equal partners in an evaluation is their knowledge and intuitions about what makes sense to even define as an impact, and which evaluation instruments would be appropriate for their respective contexts.
S.R.: Why do you find the topic of Participatory Evaluation and Impact Assessment in Citizen Science initiatives particularly important to address at this time?
ZSI: Generally speaking, participation is seen as a way of making research and innovation more relevant and of tackling complex and interconnected societal challenges, and it is of growing importance in our current R&I ecosystem. This is evident in the fact that public engagement has been positioned as a fundamental building block of the current mission-oriented Horizon Europe framework programme.
However, for many practitioners public engagement is still a one-way street in the sense of "knowledge transfer": less about equal partnerships between different societal actors invested in a shared vision, and more about inviting citizens into single steps of an immutable process, as data providers, data gatherers, or as an audience to be proselytised.
Our vision, shared by a lot of our colleagues working on projects similar to ours, is of a more radical inclusion. Because the question of participation of wider societal stakeholders in research and innovation is now much more widely discussed and of rising prevalence, the limits of what is seen as possible and feasible are currently being redefined.
While this change might be incremental, there is much more awareness among academic researchers, research funders, and wider society that citizens and those affected by a societal issue may be included in scientific endeavours. Now it is important to demonstrate how this may happen at every step of the research cycle, from the very conception of a scientific project to the evaluation of its impacts, and to create common standards and practices for how this might happen.
S.R.: Are there in your opinion good examples of effective practices in Participatory Evaluation and Impact Assessment that others can draw on?
ZSI: These practices are still relatively new in the field of Citizen Science and Citizen Social Science, but there have been research projects focused on this question already, many of which are funded through the SwafS Citizen Science call, such as our project CoAct but also partner projects such as ACTION, MICS, CitieS-Health, COESO, Step Change, INCENTIVE, and PRO-Ethics. Some of these work with a dedicated evaluation team, others co-create evaluation measures with participants of Citizen Science Initiatives.
They also differ in focus: some use evaluation as a means to inform adaptive project management, while others concentrate more on increasing the quality of project outputs. Participatory evaluation is also to some degree integrated into research approaches such as Community Based Participatory Research (CBPR) and Participatory Action Research (PAR). Broadly speaking, most resources currently available can be found in the fields of citizen activism and international cooperation and development, as they have a much longer tradition of employing participatory evaluation in their work. Great resources are for instance Better Evaluation and Fast Track Impact.
S.R.: What are you planning in terms of creating and sharing knowledge in this area?
ZSI: We are currently editing an upcoming special issue of the fteval Journal on Participatory Evaluation and Impact Assessment in Citizen Science, which will include both academic articles and practice reports and is due for publication in June 2022.
We have also been working on a series of workshops and webinars in the course of the CoAct project. We have created extensive documentation available on the project website and are cooperating with other projects to exchange on a more regular basis on this question. We also curate an open Zotero group with resources on participatory evaluation.
S.R.: What are you hoping to achieve with your special interest in this topic and how can others interested in the topic contribute?
ZSI: Our goal is for participants of research and innovation projects to be treated less as research subjects and more as equal partners with a stake in the success of an initiative. R&I has immense potential to meet the challenges of our time, but to do so we need to open the scientific system up to other forms of expertise and do justice to the lived experience of all those affected by such issues.
We also believe that, if the scientific enterprise is to succeed in meeting the complex and interconnected societal challenges we face today, it is essential to critically examine existing power structures and amplify the voices of the marginalised and disenfranchised. In thinking about the next steps to be taken, we are planning to intensify our collaboration on this issue with other projects and multiplier organisations. We will also organise a workshop on participatory evaluation in citizen science at next year's ECSA Conference and are happy for people to join in the discussion.
S.R.: Many thanks for your input Stefanie, Barbara, Katja and Teresa.