Background

Public engagement approaches that bring together diverse stakeholders to collectively identify problems and generate solutions are recognised as a vital strategy to strengthen research-informed policy and practice to improve health and reduce inequities (; ; ; ; ; ). For example, in Australia, the National Health and Medical Research Council (NHMRC) outlines a vision of “community members, researchers and research organisations working in partnership to improve the health and well-being of all Australians through health and medical research” (), and public engagement is a key pillar of the strategies of health promotion agencies around Australia (; ; ; ). However, incorporating community perspectives into research and policy-making in public health has proven challenging, and there is a need to develop capacity and infrastructure to enable policy and practice stakeholders and researchers to engage with the public in meaningful ways (; ).

Citizen science approaches are a means of actively involving members of the public in scientific research, for example in collecting and analysing data and contributing to research design (). While originating in the natural sciences, citizen science approaches are increasingly being used to involve the public in gathering and making sense of data to address a range of public health issues (; ). For example, a recent scoping review () revealed a growing number of projects in which members of the public gather data on the features of their physical environments that help or hinder them in being healthy (e.g., neighbourhood walkability, green space, access to healthy foods). Such approaches provide opportunities for meaningful involvement of the public in research on issues that affect their health and wellbeing and can lead to community mobilisation and advocacy to address these issues (; ).

By involving members of the public as researchers, citizen science approaches can increase the amount of data that is gathered and analysed and increase the cost-effectiveness of research (; ). The large and novel data sets obtained through citizen science approaches have already made a significant impact on ecology and environmental research (; ; ). Involving community members with diverse knowledge, experience, and perspectives can also lead to the development of new research questions and increase the relevance and applicability of evidence generated. This can lead to the joint discovery of solutions to societal and scientific problems at a local, national, and global scale. Citizen science can also act as a vehicle for public engagement, education, and empowerment, increasing scientific literacy, topic knowledge and interest in science, and awareness, concern, and support for action to address societal issues (; ). Further, the use of citizen science approaches can improve policy decision-making and implementation, bringing about benefits for society as a whole (; ).

While citizen science projects in public health have typically been led by academic researchers (), there is growing interest in these approaches amongst policy and practice stakeholders as a complementary means of involving members of the public in their work. Amongst these stakeholders there is particular interest in the potential for citizen science to provide new perspectives on issues that affect the health and wellbeing of local communities and to increase community support for actions to improve health and wellbeing. Despite increasing attention to the intersection between citizen science and policy, much of the work in this space has been in the environmental sciences (), and to date, little attention has been paid to understanding the feasibility of policy and practice stakeholder-led citizen science approaches in public health. Within this paper, we introduce the Citizen Science in Prevention project, which addresses this gap by supporting and evaluating the use of citizen science approaches amongst policy and practice stakeholders, providing the basis for an expansion of these approaches in public health.

Evaluation is vital to demonstrating the impacts of citizen science approaches and the factors that influence their success (; ). While many citizen science projects incorporate evaluative components, few commonly established evaluation indicators or frameworks exist, and the focus of individual project evaluations varies considerably, from the learning of individual participants (citizen scientists) through to the scientific knowledge gained, limiting opportunities for comparability across projects (). As argued by Kieslinger et al. (), evaluations of citizen science need to assess its value in terms of both processes and outcomes, across a range of domains, including the scientific domain (i.e., the knowledge gained through the project), the citizen scientist domain (i.e., the involvement and impacts of individual citizen scientists), and the socio-ecological domain (i.e., the societal impacts of the project). Further, to facilitate and demonstrate the success of citizen science approaches in achieving their potential, “thoughtful evaluation needs to be embedded into a project’s design … (and) careful design and definition of desired project outcomes, ongoing monitoring of outcomes and adaptive management, and publishing lessons learned will move the field of citizen science forward” (). In line with this call for more thoughtful and embedded evaluation, we outline how we will use a developmental approach to evaluation () to ensure that evaluation is built into citizen science projects from the outset, in a way that enables ongoing reflection and adaptation and provides rich insights into the feasibility and impacts of citizen science approaches in public health.

The Citizen Science in Prevention project

For citizen science to become embedded within policy and practice, it is vital that stakeholders have the knowledge and skills to apply these approaches, including an understanding of the impacts of these approaches and the barriers and facilitators to their successful implementation. The Citizen Science in Prevention (CSP) project () was established to build capacity in and strengthen the evidence base for the use of citizen science approaches in policy and practice in public health. CSP is a co-produced project, in which we are working closely with four health promotion agencies operating at the local or state level in Australia (South Western Sydney Local Health District, Tasmanian Public Health Services, the Victorian Health Promotion Foundation (VicHealth), and Wellbeing SA; collectively referred to as “project partners”) that had expressed interest in using citizen science approaches in their work. By working closely with these stakeholders, the CSP project aims to provide practice-based insights concerning the design, management, and impacts of citizen science approaches, and to guide capacity-building efforts for stakeholders in public health and related sectors. The CSP project began in April 2020 and will run until March 2023.

A core component of the CSP project is the developmental evaluation of four stakeholder-led citizen science projects. These projects are being resourced and led by the project partners as a means of trialling the use of citizen science approaches within their organisations. The projects span a range of issues in public health, from digital marketing of unhealthy products to community walkability. They engage members of the public in a variety of ways, from capturing screenshots on social media to conducting audits of their local communities and engaging in advocacy and action. Policy and practice stakeholders in each of the projects have partnered with councils and/or universities (referred to as “project implementers”) to enable the development and implementation of these projects. An overview of each of the four projects is provided in Figure 1.

Figure 1 

Overview of the four citizen science projects included in the evaluation.

Within this paper, we outline our protocol for the developmental evaluation of these four stakeholder-led citizen science projects, in which we will work closely with project partners to enable ongoing reflection and use emerging insights to guide the development and implementation of the projects. Through this approach, we seek to develop a rich understanding of how citizen science projects operate in policy and practice settings, and through the participatory nature of the evaluation process, we aim to build the capacity of stakeholders to use citizen science approaches beyond the current projects. In this protocol paper, our aim is to contribute to the burgeoning field of citizen science evaluation by setting out an approach to the participatory evaluation of stakeholder-led projects that goes beyond a focus on “what works” to gain an in-depth understanding of how citizen science projects can be designed and conducted across different policy and practice contexts.

Aim

We aim to understand how citizen science projects operate within policy and practice contexts, including the barriers and facilitators to their application, the circumstances under which they are most useful, and the impacts of these approaches. The following questions will guide this evaluation:

  1. Perspectives: How are citizen science approaches perceived and valued by different actors (including project partners, implementers, other stakeholders within the partner organisations, and citizen scientists)?
  2. Processes: How are citizen science projects implemented in practice? How does the design and implementation of projects align with project goals? What are the barriers and facilitators to implementation?
  3. Impacts: What are the perceived impacts of the citizen science projects (for example, what are the benefits in terms of knowledge gained, change in policy and practice, and for citizen scientists) and how are these brought about?
  4. Context: What contextual factors influence the design, implementation and impacts of citizen science projects in public health? Under what circumstances are citizen science projects feasible (or not)?

Methods

Approach

We will use a developmental evaluation approach () to examine how citizen science projects operate within policy and practice contexts. Developmental evaluation shares commonalities with other participatory, reflexive, and learning-oriented evaluation approaches (e.g., ; ; ; ), and provides a structured way to continually collect, analyse, and use data to support ongoing decision-making. It is particularly suited to innovative, complex, and dynamic projects, where inputs, activities, and outcomes are not known in advance (). Developmental evaluation differs from more traditional forms of evaluation by focusing on supporting the process of innovation rather than seeking to judge the merit and value of a standardised program or to assist in embedding programs into practice ().

Within developmental evaluation, the evaluator is situated as part of the team that is responsible for developing and implementing a new approach, and their role is to bring evaluative thinking into the process of innovation, encouraging stakeholders to continually reflect on and learn from actions taken (). In line with this, the evaluation team will work closely with individual project teams to support the design and delivery of the citizen science projects. Several processes have been established to support the reflective and iterative nature of developmental evaluation and to enable feedback of insights and responsiveness to challenges as they emerge. The lead evaluators (SR and YL, who are experienced in evaluation and citizen science) will attend regular project meetings for each of the four projects, and the evaluation team (SR, YL, LM, PW and BS) will meet regularly to critically reflect on insights and issues emerging from each of the four projects.

To ensure that the processes and outcomes of the evaluation are tailored to the needs of the project partners, we have taken a participatory approach to evaluation in which project partners are named investigators on the CSP research team and are engaged regularly through quarterly project meetings, regular email communications, and individual project meetings. Project partners were involved in developing and refining the overall approach to evaluation and provided input on the development of data collection instruments to ensure that the data gathered are relevant and useful; they will also be involved in reviewing and interpreting data and developing recommendations based on the findings from the evaluation. Given the focus on demonstrating the value of citizen science and building capacity in the use of these approaches in policy and practice contexts, this participatory approach is crucial to ensuring that evaluation findings are relevant, appropriate, and likely to be used ().

Several key principles underpin our approach within this project, and these are outlined in Figure 2.

Figure 2 

Principles underlying our approach to evaluation.

The evaluation protocol has been approved by the University of Sydney Human Research Ethics Committee (Ref: 2020/647).

Design

We will use a multiple embedded case study design, in which cases are examined at different levels of analysis. An embedded case study design is appropriate for generating an in-depth understanding of complex phenomena involving the perspectives of multiple stakeholders (). Within our study, each of the four citizen science projects will be a separate case, with each of the stakeholder groups representing a secondary unit of analysis (see Figure 3). This will enable us to undertake in-depth evaluations of each of the citizen science projects as well as to compare trends across the cases, examining the interplay between contextual factors and project-specific factors in shaping the processes and outcomes of the projects.

Figure 3 

Overview of embedded multiple-case study design (adapted from Yin 2009).
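
To make the layered structure of the design concrete, the sketch below is a minimal illustration only: the case labels (one per partner-led project) and the stakeholder groups are taken from the protocol, but the data structure itself is hypothetical and is not part of any analysis tooling used in the project.

```python
# Minimal sketch of the embedded multiple-case study structure: each citizen
# science project is a case, and each stakeholder group within it is an
# embedded unit of analysis. Illustrative only; not an actual analysis artefact.

from dataclasses import dataclass, field

STAKEHOLDER_GROUPS = [      # embedded units of analysis (see Table 1)
    "project partners",
    "project implementers",
    "citizen scientists",
    "other stakeholders",
]

@dataclass
class EmbeddedUnit:
    stakeholder_group: str
    data_sources: list = field(default_factory=list)   # e.g., interviews, surveys

@dataclass
class Case:
    project: str                                        # one partner-led project
    units: list = field(default_factory=list)

# Four cases, labelled here by the partner organisation leading each project.
cases = [
    Case(project=p, units=[EmbeddedUnit(g) for g in STAKEHOLDER_GROUPS])
    for p in [
        "South Western Sydney Local Health District",
        "Tasmanian Public Health Services",
        "VicHealth",
        "Wellbeing SA",
    ]
]

for case in cases:
    print(case.project, "->", [u.stakeholder_group for u in case.units])
```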

Data collection

To enable a comprehensive evaluation of the citizen science initiatives from the perspective of the different actors involved, four groups of participants will be recruited across the four projects: project partners, project implementers, citizen scientists, and other stakeholders (see Table 1 for a description of each of these groups and how they will be recruited). Consistent with our aim of understanding how citizen science projects operate within policy and practice contexts, the primary target of evaluation activities is the policy and practice stakeholders leading the citizen science projects, who will be engaged in formal and informal evaluation activities across the duration of the project.

Table 1

Overview of participants and recruitment.


PARTICIPANTS | DESCRIPTION | RECRUITMENT

Project partners | Policy and practice stakeholders who are leading the citizen science projects. | Project partners are co-investigators on the project and have agreed to be involved in the evaluation.

Project implementers | People who are assisting with or responsible for implementing the citizen science projects, including council staff and university-based researchers. | Project implementers will be contacted via project partners and invited to be involved in the evaluation.

Citizen scientists | People who have taken part in one of the citizen science projects. | Citizen scientists will be contacted via project partners and implementers and invited to be involved in the evaluation.

Other stakeholders | Relevant stakeholders from within the partner organisations leading the citizen science projects and other organisations who have engaged with or are likely to be influenced by the results of the citizen science projects. | Other stakeholders will be identified through discussions with project partners and implementers and invited to take part in the evaluation.

A mixed-methods approach using a convergent parallel design () will be used, and methods will include surveys and interviews, observations of project meetings, reflective journaling, and document review. Table 2 provides an overview of the methods of data collection to address each of the evaluation questions and these are outlined in more detail below.

Table 2

Overview of evaluation domains, questions, participants, and data collection methods.


DOMAIN | EVALUATION QUESTIONS | PARTICIPANTS | DATA COLLECTION METHODS

Perspectives | How are citizen science approaches perceived and valued by different actors (including project partners, implementers, other stakeholders within the partner organisations, and citizen scientists)? | Project partners; Implementers; Citizen scientists; Other stakeholders | Interviews; Meeting observations; Reflective journals; Surveys

Processes | How are the citizen science projects implemented in practice? How does the design and implementation of the projects align with project goals? What are the barriers and facilitators to implementation? | Project partners; Implementers | Interviews; Document review; Meeting observations; Reflective journals

Impacts | What are the perceived impacts of the citizen science projects (for example, on knowledge gained, policy and practice, and citizen scientists)? | Project partners; Implementers; Citizen scientists; Other stakeholders | Interviews; Document review; Meeting observations

Context | What contextual factors influence the design, implementation, and impacts of citizen science projects in public health? Under what circumstances are citizen science projects feasible (or not)? | Project partners; Implementers; Citizen scientists | Interviews; Meeting observations; Reflective journals

Project partners and implementers

Semi-structured interviews will be the primary form of data collection within this project and will enable us to explore the processes, perceptions, impacts, and contextual factors influencing each of the citizen science projects from a range of perspectives. Interviews with project partners and project implementers will be conducted at two timepoints over the course of each of the four projects, with the first interviews taking place during the early stages of project development and implementation and follow-up interviews within 6 to 12 months of project completion. The early interviews will focus on perceptions of citizen science approaches and what they can offer, the goals and expected impacts of each project, and early experiences in planning and implementing projects. Follow-up interviews will explore the implementation and impacts of each citizen science project, including barriers and facilitators to implementation, and contextual factors influencing how projects played out in practice.

In addition to interviews, we will gather data from observations of meetings with project partners and implementers to help us document how decisions are made, what issues arise, and how the citizen science projects are conceptualised as they progress through development and implementation. Members of the evaluation team will host quarterly meetings with all project partners and, where possible, will attend regular meetings with each of the project teams, gathering reflective notes on key insights and issues arising. Where appropriate, members of the evaluation team will prompt project partners and implementers to reflect on issues.

Through ongoing discussions with project partners and implementers we will also identify relevant documentation related to project design, implementation, and expected and actual impacts. These documents will include project plans, interim reports, and final reports. We will also complete a review of key documents from across the CSP project, including notes from Community of Practice sessions to capture any key reflections or insights related to the research questions.

Finally, project partners will be asked to complete a reflective journal as the projects progress to document their experiences in the process of engaging with citizen science, including emerging contextual factors, important decisions, and challenges faced. We will prompt partners to record these notes following key meetings as well as encouraging them to capture notes in an ongoing manner.

Citizen scientists

Upon completion of each project, we will invite the citizen scientists involved to take part in an online survey and a follow-up interview reflecting on their motivations for and experiences of participating in that project. The online survey will allow us to gather quantitative data that can be compared across the four projects to identify similarities and differences in motivations and experiences, and will enable us to purposively select participants to invite to interviews. Through follow-up interviews we will further explore citizen scientists’ interests in relation to the project, their experiences of taking part, the perceived impacts of the project, and the likelihood of engaging in other citizen science projects in the future. Involvement in the evaluation is independent of involvement in the individual projects, and citizen scientists are not obliged to be involved in the evaluation process. To acknowledge their contribution to the evaluation, citizen scientists will receive a $25 gift card upon completion of a follow-up interview.

Other policy and practice stakeholders

We will undertake semi-structured interviews with a range of other policy and practice stakeholders to explore their perceptions of citizen science approaches and track the impacts of the projects being evaluated, including whether and how the citizen science projects have influenced practices or decision making. Other stakeholders will include relevant people from within the four partner organisations and other agencies who have engaged with or are likely to be influenced by the results of the citizen science projects. These interviews will take place within 6–12 months after completion of each of the four citizen science projects to allow time for individual project findings to be reported and disseminated and for impacts to emerge.

Data analysis

In line with a developmental evaluation approach, data analysis will be iterative and ongoing, with key insights fed back to project partners on a regular basis, as they emerge, through project newsletters and regular meetings. We will also provide opportunities throughout the project for project partners to provide input into data analysis, interpretation and reporting to ensure that the outputs from this project are relevant and applicable to key stakeholders. NVivo qualitative data analysis software () will be used to manage data within this project.

Following each round of interviews, we will conduct thematic analysis of the data, constructing themes inductively rather than restricting analysis to preconceived categories (). In accordance with the recommendations of the consolidated criteria for reporting qualitative research (COREQ; Tong, Sainsbury, and Craig 2007), we will use a collaborative and iterative process of analysis to maximise rigour and ensure the credibility of our findings. This process will involve members of the evaluation team reading and reviewing the transcripts, and working collaboratively to develop the codebook, code the data, and organise themes. Following the same process, we will undertake thematic analysis of project documents, meeting notes, and reflective journals in an ongoing manner to draw out key insights.

Quantitative data from the citizen scientist survey will be analysed using SPSS software (). Descriptive statistics will be calculated for motivations and experiences of involvement in the citizen science projects. Citizen scientist survey data will be analysed in parallel with citizen scientist interview data, with interview data providing more in-depth insights into the experiences of citizen scientists in the projects.
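
As an illustration only (a minimal sketch: the protocol specifies SPSS, and the variable names, project labels, and 1–5 rating scale below are hypothetical), the following shows the kind of per-project descriptive summary intended.

```python
# Illustrative sketch of descriptive statistics for the citizen scientist survey.
# The analysis itself will be run in SPSS; pandas is used here purely for
# illustration, and the column names and ratings are hypothetical.

import pandas as pd

survey = pd.DataFrame({
    "project": ["A", "A", "B", "B", "C"],          # hypothetical project labels
    "motivation_contribute": [5, 4, 5, 3, 4],      # hypothetical 1-5 ratings
    "experience_satisfaction": [4, 5, 4, 4, 5],
})

# Descriptive statistics (mean, SD, count) per project, enabling comparison
# of motivations and experiences across the citizen science projects.
summary = survey.groupby("project").agg(["mean", "std", "count"])
print(summary)
```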

A variety of methods exist for integrating data in mixed-methods studies, including merging, connecting, and embedding data (). In this study, we will integrate data from qualitative and quantitative sources in a complementary manner with respect to our key research questions, mapping the data against each of the four evaluation domains identified. Synthesis of data from the various sources (interviews, surveys, meeting notes, document review, and reflective journals) will be performed for each of the four case studies individually, with comparisons across the four projects to draw broader learnings about the processes and impacts of stakeholder-led citizen science projects in public health. In doing so, we will also draw on the Kieslinger et al. () framework to ensure that processes and impacts are captured across the areas of scientific knowledge, citizen scientists, and socio-ecological impacts.
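
To illustrate this complementary integration step (a minimal sketch under assumed labels: the domain names come from the evaluation questions, but the tagging scheme and example findings shown here are hypothetical), findings from each data source could be mapped against the four evaluation domains for each case as follows.

```python
# Illustrative sketch of mapping findings from each data source against the
# four evaluation domains (Perspectives, Processes, Impacts, Context) for one
# case. The example findings, case label, and source tags are hypothetical.

from collections import defaultdict

DOMAINS = ["Perspectives", "Processes", "Impacts", "Context"]

# Each finding is tagged with its case, data source, and evaluation domain.
findings = [
    {"case": "Project A", "source": "interview", "domain": "Processes",
     "note": "Recruitment of citizen scientists took longer than planned."},
    {"case": "Project A", "source": "survey", "domain": "Impacts",
     "note": "Most respondents reported increased awareness of the issue."},
]

# Build a case-by-domain matrix that brings qualitative and quantitative
# findings together, supporting within-case synthesis and cross-case comparison.
matrix = defaultdict(list)
for f in findings:
    matrix[(f["case"], f["domain"])].append((f["source"], f["note"]))

for domain in DOMAINS:
    print(domain, "->", matrix.get(("Project A", domain), []))
```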

Member validation through presentation of emerging findings at meetings with project partners and implementers will provide opportunities for clarification and confirmation of interpretations, as well as helping to establish trust in the emerging analysis and providing opportunities for ongoing reflection.

Discussion

Within this paper, we present our planned approach to evaluating stakeholder-led citizen science in public health. There is a growing need for public health agencies to better understand the needs of the communities in which they operate, to be more responsive to those needs, and to involve community members in planning programs and initiatives that affect them. Citizen science is one approach to enabling greater involvement of community members, but more evidence is needed on how these approaches work in practice, alongside capacity building amongst stakeholders in the application of these approaches (). While there has been increasing interest in the application of citizen science approaches amongst these groups, to date there have been no systematic efforts to support and evaluate stakeholder-led citizen science approaches in public health. This project will provide in-depth, contextualised insights into how policy and practice stakeholders might incorporate citizen science approaches into their work and the value in doing so, as well as key issues to consider when embarking on the use of these approaches. Adoption of a developmental approach to evaluation will provide ongoing feedback to assist in decision making, facilitate capacity building within the project teams, and provide insights concerning barriers and facilitators to the use of these approaches and the contextual factors that influence their implementation and impacts.

The use of an evaluation approach in which project partners are involved in the design of the evaluation and the interpretation of data will ensure that findings are relevant to the needs of the partner agencies. Indeed, developmental evaluation is itself a capacity-building exercise (), increasing the familiarity and confidence of partners in using citizen science approaches in their own settings. Most project partners involved in this work are trialling the use of citizen science for the first time, and so the opportunities for reflection and learning built into the evaluation process provide a forum for knowledge sharing and support across projects. Rather than simply providing an end-point evaluation, the developmental evaluation approach helps project teams to reflect on what is going well and what needs to be adapted in an ongoing manner, sensitising them to the factors that influence how a project plays out.

Given the paucity of comprehensive evaluations of citizen science projects using commonly established indicators, this project will draw on the citizen science evaluation framework by Kieslinger et al. () to inform collection, analysis and reporting of data on processes and impacts across the scientific, citizen scientist, and socio-ecological domains. This will allow us to situate the findings of this project within the broader citizen science evaluation literature and explore similarities and differences in the processes and impacts of citizen science projects across disciplines.

Potential limitations

Findings from each project are necessarily context-dependent, as each is taking place within unique circumstances and is shaped by a variety of factors that influence development, implementation, and impacts, limiting the generalisability of findings from individual projects. However, the use of an approach that utilises multiple embedded case studies, which seeks to elicit an in-depth understanding of the perspectives of multiple actors and the contextual factors that shape the processes and outcomes of the projects, will enable us to draw out insights that can inform the use of citizen science approaches across settings.

While this project will provide crucial insights into the application of citizen science approaches in policy and practice settings, it is important to note that the four partner organisations are somewhat similar in the sense that they are government health agencies (albeit a mix of local and state level, with differences in funding and governance structures). We will not capture the potential application of citizen science approaches by other key stakeholders, such as non-government organisations. Despite this, we anticipate that the insights will be useful across a range of settings, and through this evaluation and the broader CSP project, we hope to stimulate further application and evaluation of citizen science approaches in policy and practice settings in public health.

Within developmental evaluation, embedded evaluators work closely with project teams, and within our study, project partners are members of the broader research team and have input into the design of the evaluation. This participatory approach to evaluation may be seen to reduce objectivity and introduce potential bias into the research. However, developmental evaluation does not seek to answer the question “Does it work?”; instead, it seeks to understand the complexity of how a project plays out in practice and to use the insights that emerge over the course of the project to enable ongoing adaptation. Within this context, we consider this embedded, participatory approach a strength rather than a limitation.

Conclusion

Within this paper we have outlined our planned approach to the evaluation of stakeholder-led citizen science projects. By adopting a developmental approach to evaluation, we ask “What is going on here?”, seeking a deeper understanding of how citizen science projects operate within policy and practice contexts, including the barriers and facilitators to their application, the circumstances under which they are most useful, and the impacts of these approaches from the perspective of different stakeholders. Through the adoption of a participatory approach to evaluation, and a focus on ongoing reflection, we aim to support stakeholders to utilise citizen science approaches within their work.

This detailed presentation of our evaluation protocol is intended to contribute to the growing literature on citizen science evaluation, offering practitioners and evaluators an example of an approach that focuses on fostering innovation and building capacity in the use of citizen science approaches. This approach is likely to be particularly suitable when supporting and evaluating the use of citizen science by stakeholders and agencies that are new to these approaches.