Introduction

Robust evaluation provides opportunities to enhance participatory science project quality and accountability to stakeholders (; ; ). Much of the current work in participatory science evaluation focuses on data quality, participant and learning outcomes, and multidimensional impacts (; ; ; ). While traditional science often relies on counts of papers and citations to gauge efficacy, participatory science projects span a broader range of goals, methods, and products that publications alone cannot capture (). These and other considerations, such as the relatively recent rise of the participatory science field, make evaluation challenging. Although evaluation is a rich area of active research, project teams may have limited experience with evaluation resources.

Participatory science evaluation frameworks are growing in quantity and sophistication (). For instance, impact evaluation considers how participatory science affects specific areas and dimensions; these vary widely by project () but often include science, society, economies, the environment, and governments (; ; ). Also, the CS-Track project created a database of more than 4,500 projects and conducted broad analyses across existing projects using these data (). Recently, the Measuring the Impact of Citizen Science (MICS) project produced a questionnaire and artificial intelligence–based online evaluation tool for projects seeking to measure impact in society, science, environment, economy, and governance (). Evaluation itself has become more participatory, directly involving project teams and participants in the process of evaluation (). Each of these approaches provides layers of sophisticated analysis. However, before a project can mine data for analysis, basic data must be gathered, often by practitioners. The Science Products Inventory (SPI) is a tool that formalizes the collection of data across a wide range of scientific outputs to support project evaluation. Examples of practitioners applying evaluation tools outside of the original studies are not commonly reported in the literature. The usability of evaluation frameworks and methods by non-professional evaluators is important, as participatory science teams may not include members with evaluation experience () or may wish to involve participatory scientists in evaluation (). Passani et al. () note that “only few publications provide actual indicators for impact assessment while the vast majority are at a higher level of abstraction, in this way, failing to provide a ready-to-use methodology for practitioners.”

We present three diverse use cases centered on one simple tool: the SPI, a checklist-style, baseline framework. It establishes two broad inventories: a products inventory of items that reflect important baseline scientific assets for participatory science projects, and a data practices inventory that applies FAIR (Findable, Accessible, Interoperable, and Reusable) principles () to citizen science metadata (). Product outputs are sorted into categories so that projects can be evaluated and planned across a broad scope of scientific productivity and data practices (Table 1). Supplemental File 1 contains a complete listing of each SPI category and product, with brief definitions for reference. The SPI was created by a panel of 20 experts representing diverse perspectives and piloted with eight established participatory science projects (). This paper is written as a reflective supplement and follow-up to Wiggins et al. ().

Table 1

Summaries of SPI Categories and Products, and FAIR Data Categories and Practices. See Supplemental File 1 for full details of the items. API: application programming interface.


CATEGORY | PRODUCT
Written | Dissertations, theses (#)
Written | Scholarly publications (#)
Written | Reports (#)
Written | Grants awarded (#, $)
Data | APIs (Y/N)
Data | Data packages (#)
Data | Metadata (Y/N)
Data | Visualizations (Y/N)
Data | Specimens/samples (#)
Data | Requests (# requests, transfer volume)
Management and Policy | Regulatory action (Y/N)
Management and Policy | Decision support (Y/N)
Management and Policy | Forecasting/models (Y/N)
Communication | Blogs (Y/N)
Communication | Newsletters (Y/N)
Communication | Videos (Y/N)
Communication | Presentations (Y/N)
Communication | Website (Y/N)

CATEGORY | DATA PRACTICE
Findable | Data available from project website (Y/N)
Findable | Data available from repositories or registries (Y/N)
Accessible | Downloadable data file(s) available (Y/N)
Accessible | Tools for data exploration (Y/N)
Accessible | Data licensing specified (Y/N)
Accessible | Metadata available (Y/N)
Accessible | API documentation (Y/N)
Interoperable | Data recorded in standard formats for discipline (Y/N)
Reusable | Uniqueness of data (describe)
Reusable | Time scale of data (# yrs)
Reusable | Spatial scale of data (describe)
Reusable | How much data (# data points, describe)
Reusable | Errors documented (Y/N)
Reusable | Quality assurance or quality control documented (Y/N)
Reusable | Changes documented (Y/N)
Reusable | Questionable data flagged (Y/N)
Reusable | Software or platform development (Y/N)

The variety of evaluation frameworks enables projects to select appropriate tools based on compatibility with required metrics, project workflows, and available evaluation expertise. Mayer et al. () conclude that multiple evaluation methods may be used holistically. While a number of the frameworks cited above establish robust evaluation practices for features such as impact, our goals in this study are primarily limited to broad applications of scientific evaluation. Many projects like ours must evaluate quantitative scientific outputs, measured by more than peer-reviewed papers alone. To this end, we report on the usefulness and customizability of the SPI, especially for non-evaluator project leaders and as a framework that aligns well with our funders’ priorities. It was practicable for our independent project teams to adopt the SPI soon after its 2018 publication, which coincided with NASA’s SPD-33 policy advocating strong science outcomes for citizen science (). Previous uses of the SPI, such as Dykman and Prahalad (), reinforce the concepts of data availability and reusability, but they do not directly demonstrate utilization of the framework, leaving a gap in the literature.

Here, we bring together three independent perspectives, ranging from an internal evaluation by a team of non-professional evaluators, to an evaluation led by an external evaluator, to a program manager–led evaluation by a hosting organization. We were curious as to why the SPI was so effective when our perspectives and projects were so varied. Therefore, we examined our evaluation use cases for common insights. Together, we provide a discussion of the utility of the SPI, including our applications and customizations. We highlight ways in which participatory science practitioners and hosting organizations with primarily scientific goals and limited evaluation resources can customize tools to evaluate their projects.

In doing this, we acknowledge that the SPI is limited in scope and that there is promising new evaluation research (for example, ) incorporating facets such as social sciences and participatory evaluation. We make note of the additional frameworks each perspective incorporates into the SPI to create a blend of evaluation methods. Noting that the SPI is “intended to support general-purpose planning and evaluation of citizen-science projects with respect to science productivity” (), we acknowledge that “citizen social science research” as defined by Kieslinger et al. () is beyond the scope of this paper, but of interest for future work.

Methods

Utilizing the SPI as a tool to catalog science products and data practices, we independently evaluated our respective participatory science projects. These projects span multiple subject areas, including the space science–oriented Aurorasaurus project; an interdisciplinary suite of protocols in the GLOBE (Global Learning and Observation to Benefit the Environment) program’s participatory science mobile application, GLOBE Observer (GO); and the broad network of environmental science projects administered by the Smithsonian Environmental Research Center (SERC). Supplemental File 2: Project Background Information includes additional descriptions of each project and a comparison of their relative sizes and scopes.

The SPI can be used to collect quantitative data about a project’s science products in a standardized way, such that disparate projects can be compared, and it can be customized for specialized types of data specific to one project. For us, the SPI’s Data Practices inventory (Table 1) serves as a checklist for capturing data and metadata practices across projects. We also identify potential gaps and propose customizations, offering practical insights from user perspectives.
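To illustrate one way a team might keep such a checklist machine-readable for standardized tracking and customization, the sketch below represents a trimmed-down inventory as a plain data structure and adds a project-specific item. The item names echo Table 1, but the structure, helper function, and example values are a hypothetical illustration of our own, not part of the published SPI.

```python
# A minimal, hypothetical sketch of an SPI-style checklist as a data structure.
# Item names echo Table 1; the structure, helper, and values are illustrative only.
from copy import deepcopy

SPI_TEMPLATE = {
    "Written": {"Scholarly publications (#)": 0, "Reports (#)": 0},
    "Data": {"APIs (Y/N)": False, "Data packages (#)": 0, "Metadata (Y/N)": False},
    "Communication": {"Newsletters (Y/N)": False, "Website (Y/N)": False},
}

def customize(template, add=None, remove=None):
    """Return a project-specific copy of the template with items added or dropped."""
    inventory = deepcopy(template)
    for category, item, default in (add or []):
        inventory.setdefault(category, {})[item] = default
    for category, item in (remove or []):
        inventory.get(category, {}).pop(item, None)
    return inventory

# Example: a project adds a "Scientific discoveries" count and drops the API item.
project_inventory = customize(
    SPI_TEMPLATE,
    add=[("Data", "Scientific discoveries (#)", 0)],
    remove=[("Data", "APIs (Y/N)")],
)
project_inventory["Written"]["Scholarly publications (#)"] = 4  # invented count

for category, items in project_inventory.items():
    print(category, items)
```

Because every project starts from the same template, completed inventories from different projects or years can be compared item by item.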

Large, contributory projects () within a US government ecosystem are required to provide specific ongoing, formative reports using concrete, quantitative metrics like publication quantity. Documenting the indicators required by our grants with a flexible framework like the SPI allows us to meet our projects’ unique needs over time. Beginning in 2019, the authors realized that each of us had independently used the SPI in a unique way, and that a comparison could be useful to others. Having individually analyzed our respective SPI datasets, we conducted periodic virtual meetings for secondary analysis and peer review of each team’s past and current SPI applications. Each team presented its individual uses and customizations, which we discussed, compared, and contrasted. We recorded the meetings and took notes, which we co-interpreted to produce our results and analysis. We provide more details on individual methods in each of the three “perspective” sections that follow. This paper considers our uses of the SPI for practical evaluation needs and provides guidance for other projects considering this tool. We do not perform a formal assessment of the efficacy of the SPI for assessing other outcomes, as that is beyond both our scope and that of the SPI as a general-purpose planning tool (). The lessons from each program are summarized in Table 2.

Table 2

Summary of our reflections on the use of the Science Products Inventory. While these benefits are not unique to the SPI, they show how the tool can aid effective participatory science evaluation. GLOBE: The Global Learning and Observation to Benefit the Environment, SERC: Smithsonian Environmental Research Center.


LESSONS FROM OUR REFLECTIONS ON THE USE OF THE SPI | PROJECT

Adaptability
Provides a framework to add, remove, or adapt evaluation items as required by the project | Aurorasaurus, GLOBE, SERC
Provides a framework to create new inventories (specifically an Engagement Product and Education Research Product Inventory) | GLOBE
Allows formative evaluation (reflections on changes for enhanced impact) | Aurorasaurus, GLOBE

Ease of use
Can be conducted by people who are not experts in evaluation | Aurorasaurus, GLOBE, SERC
Evaluation can be carried out consistently by different people, for example through online reporting surveys | GLOBE, SERC

Consistency of reporting
Can be used to evaluate impacts consistently across time | GLOBE
Can be used to evaluate impact across activities within a program, and across programs | Aurorasaurus, GLOBE, SERC

Perspective 1: Aurorasaurus, A Single Project With Global Reach

The Aurorasaurus project uses volunteer reports to map auroras. Evaluation has been led primarily internally by two staff members without formal evaluation training, who work closely together to collect and analyze evaluation data. Alongside partners at Penn State University, the Aurorasaurus team conducted an initial survey-based formative evaluation in 2015. In 2019, they applied the SPI to demonstrate multifaceted scientific productivity, and they continue to use it for longitudinal evaluation.

Implementation of the Science Products Inventory

The SPI evaluation of the Aurorasaurus project listed each individual product by category, with a condensed summary. The Aurorasaurus SPI identified products critical to project success that were not represented in the framework. Supplemental File 3: Aurorasaurus SPI Utilization provides further examples. For instance, disruptive innovation and discoveries occur in participatory science, making “Scientific Discoveries” appropriate to add to the “Data” category. The Aurorasaurus project was instrumental to the first publication on the subauroral phenomenon Strong Thermal Emission Velocity Enhancement (STEVE) (), and continues to support scientist- and participatory-scientist-led discoveries related to the phenomenon. While scientific discovery is not a major goal for every project, it rewards and motivates participants (). Discoveries also represent significant ways in which citizen science contributes to scientific disciplines ().

The SPI includes formal publications and communications. However, in a field centered on public participation, informal learning opportunities accessible to stakeholders across multiple audiences are valuable for recruitment, training, and engagement. The Aurorasaurus project has a social media presence and is featured in reputable public science outlets, which prompted the addition of a new product, "Informal Learning/Media Assets (Written, Communications)".

Aurorasaurus’s SPI reported on other products beyond existing categories. For example, although the Aurorasaurus project does not seek to directly generate the existing Management & Policy products, the Aurorasaurus team noted that other metrics can indicate participatory science influence—not least, “Broader Acceptance of Participatory Science (Management and Policy)” as a practice. They also identified opportunities to contribute to broader acceptance, such as framing their work in more community-oriented terms. Ultimately, the Aurorasaurus SPI may add products that include cultural shifts, as well as formal, top-down acceptance from leadership bodies. For example, an indicator of community influence could be an analysis of whether participatory scientists recruit their peers.

The Aurorasaurus SPI indicated that further new products could be useful, for instance, to highlight communication opportunities and the value of partnerships. Although the SPI primarily references one-way communication, participatory science is fundamentally collaborative, and two-way communication is critical. Project team members partner with participatory scientists from a range of disciplines, as well as with other organizations. The resulting symbiosis enhances data analysis, access to subject matter experts, educational opportunities, creative products, transparency, and visibility. Since its initial SPI implementation, the Aurorasaurus team has increasingly tracked the presence of "Collaborations and Interdisciplinary Partnerships (Communications)". Documenting partnerships not only encourages the investment of time, resources, and relationship-building, but is also required by their funding programs in NASA’s Science Activation portfolio ().

Implications for the project

Utilizing the SPI confirmed that the Aurorasaurus project has a comprehensive suite of science products and data practices, as expected for a project of its age and breadth. The inventory also revealed areas for potential growth, such as creating tools (like online user interfaces) that make aurora data more easily explorable by participatory scientists. The exercise proved useful in structuring future goals. Also of note, assigning SPI implementation to a new team member provided a thorough introduction to the project’s accomplishments, activities, and partnerships, streamlining training.

Perspective 2: The Global Learning and Observation to Benefit the Environment Program, A Large, Multifaceted Project With Global Reach

The NASA Earth Science Education Collaborative (NESEC) program supports various aspects of the large GLOBE project, particularly the GO mobile application. NESEC has a highly matrixed team that works closely with external project evaluators. Their collaborations result in a variety of scientific outputs, including observational photos, data visualizations, research papers, and presentations. The NESEC evaluation team adopted the SPI to help manage the variety and quantity of scientific outputs produced through the app. They then formatively compared baseline data from 2019 with 2020 data to hone the team’s direction and ensure the program was meeting its goals.

Implementation of the Science Products Inventory

Through meetings with project leaders, the evaluation team focused on tracking products that align with NASA’s citizen science priorities per the NASA SPD-33 document (). These included data management plans, results dissemination to participatory scientists, scientific publications (especially those coauthored by participatory scientists), and up-to-date websites to explain intended scientific outcomes and progress. The inventory became a tool for reflection and planning, as well as tracking and reporting.

Referencing the example inventory in Wiggins et al. (), as well as NASA’s desired science products, the evaluation team combined the Data Practices and Science Products tables to streamline tracking. They also adjusted product definitions to fit project-specific needs, clarifying team interpretation. For example, the evaluation team uses the “Samples” product to refer to physical samples of mosquito larvae collected through the Mosquito Habitat Mapper protocol.

As participants collect observations for the GLOBE program, the broader NESEC team provides corresponding and complementary engagement activities that are researched and evaluated for their educational value. The evaluation team therefore created two additional SPI-inspired inventories: the Engagement Product Inventory (EPI), which tracks public-facing products, and the Education Research Product Inventory (ERI), which captures research and evaluation products centering on GLOBE educational activities. These can be found in Supplemental File 4: NESEC Products Inventories. Examples of engagement products include teaching materials, social media posts, hands-on activities at events, student/youth research projects, web-based resources, and webinars. Examples of education research products include academic articles and presentations regarding educational work with GLOBE, as well as data from any educational assessments. Using these alongside the SPI provided the evaluation team with a more comprehensive program overview.

The evaluation team uses an automated form on the online survey platform Qualtrics (Supplemental File 5: NESEC Output Tracker) for their large project team. After an output is complete, the evaluation team uses the form to identify the project focus area that the output fits within, describe the output, and record related implications and resources. The form automatically populates the three inventories, which the evaluation team collates and edits before sharing with the wider team via monthly meetings or annual reports.
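As a rough sketch of how such form responses could be collated into the three inventories, the snippet below groups exported rows by a hypothetical "inventory" column. The CSV layout, column names, and file name are illustrative assumptions; they do not reflect the actual Qualtrics export used by NESEC.

```python
# Hypothetical sketch: collate exported tracker rows into the SPI, EPI, and ERI.
# The column names and file name are assumptions for illustration; they do not
# reflect the actual NESEC Qualtrics export.
import csv
from collections import defaultdict

def collate(csv_path):
    """Group tracker rows by the inventory each output belongs to."""
    inventories = defaultdict(list)  # e.g., "SPI" -> list of output records
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            inventories[row["inventory"]].append(
                {"category": row["category"], "description": row["description"]}
            )
    return inventories

if __name__ == "__main__":
    collated = collate("output_tracker_export.csv")  # hypothetical export file
    for name, outputs in collated.items():
        print(f"{name}: {len(outputs)} outputs this reporting period")
```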

Implications for the project

The evaluation team captured changes in the impacts and quantities of different types of scientific products that correlated with effects of the COVID-19 pandemic. Although the team’s ability to present research at conferences was hindered, GLOBE received more data requests in 2020 than in 2019. Supplemental File 4: NESEC Products Inventories contains examples of science products the evaluation team tracked between 2019 and 2020, noting that the raw numbers are not directly comparable between products.

The inventories provide a relatively superficial program overview, but allow a manageable capture of the vast array of products created by a large project with many team members, partners, and assets. In addition to capturing science products across GLOBE protocols, the inventories are used to identify program areas in which further evaluation would be useful. For example, the inventories showed that a 2019 pilot of a virtual internship focusing on the Mosquito Mappers protocol produced robust outcomes for learners, as well as useful scientific products. A follow-up survey in 2020 focused on broader education and learning impacts, examining the roles of these virtual internships in students’ scientific literacy achievement ().

Perspective 3: The Smithsonian Environmental Research Center, An Organization That Hosts Onsite Programs and Works With Partners Who Help to Implement Projects

SERC is an environmental research center that administers a network of participatory science projects across 20 labs, with projects ranging from fewer than 10 volunteers to more than 100. Evaluation is led internally by a scientist with the aid of the SERC citizen science program staff. The SERC team uses the SPI as a component of its participatory science evaluation, tracking, and planning. At the end of each fiscal year, labs are asked to respond to an SPI-based questionnaire, with responses used to track project progress, set goals, and discuss changes in project structure for the upcoming year. These elements comprise the program’s formative evaluation structure and are used to modify project structure and management for the SERC citizen science program as a whole, as well as for individual projects. SERC does not have evaluation professionals: the above evaluations are administered by the Citizen Science Program Coordinator and therefore rely heavily on existing evaluation instruments such as the SPI.

Implementation of the Science Products Inventory

Before deciding to implement the SPI as an element of the iterative evaluation process, staff from each lab involved in the SERC citizen science program were interviewed. The evaluation team’s goal was to determine which lab members were most knowledgeable about specific inventory elements, as well as to gain insight into irrelevant inventory elements and opportunities for new categories. Project staff in different roles (e.g., principal investigators [PIs], post-doctoral fellows, and technicians) were interviewed separately to better understand how knowledge of science products varied within labs. Based on these interviews, inventory elements not germane to SERC participatory science projects, like application programming interfaces (APIs), were cut from future surveys, while new elements, such as conference posters, were added to address internally relevant outputs. Additional inventory items, such as information about project structure (e.g., number of staff members involved), were added, and are listed in Supplemental File 6: SERC SPI Utilization.

Based on feedback from interviews, annual surveys are sent to staff serving as the primary point of contact for volunteer activities. Some categories of products are primarily provided by PIs, such as Written and Management and Policy, while others are primarily provided by technicians, like Data and Communications. Information about data practices tends to be provided by both groups.

Implications for the project

SERC’s SPI has proven a useful tool for tracking project progress and setting goals within projects. Additionally, it enables comparison over time and across projects within the same organization. The inventories have also been useful for planning because they provide data that can be used to better understand the staffing and resources needed to develop and sustain a new project. For example, in labs and projects that have not previously engaged volunteers, SPI data from similar, existing projects were used to understand the balance between staff time, capacity, and the number of samples collected/processed. Information from the SPI provided avenues for researchers to give feedback about elements of project structure, including whether the volunteers who participated had appropriate background experience and skills for the activities. The surveys gave staff the opportunity to reflect on the project recruitment and training strategies that best met project needs and supported volunteers.

SPI data are aggregated by the SERC citizen science program management staff to consider the participatory science project suite as a whole. Individual labs are most interested in data related to their projects, and results from those annual inventories are shared with each lab group. However, participatory science staff are more focused on implementation across the organization. Using the same inventory across projects streamlines information sharing. These data are frequently used in grants, promotional materials, and reports to the central offices of the Smithsonian Institution. The SPI also provides a standardized way to look for patterns, such as whether more staff time is required for projects that involve volunteers in a wider range of tasks, or whether labs need dedicated help with communications in order to meet their goals.
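The sketch below illustrates, with invented field names and numbers, how per-lab annual responses of this kind might be rolled up into a program-wide summary so such patterns can be inspected; it is a hypothetical example, not SERC's actual workflow.

```python
# Illustrative sketch of rolling per-lab SPI responses up into a program-wide view.
# Field names and values are invented for the example, not SERC's actual data.
from statistics import mean

lab_responses = [
    {"lab": "Lab A", "year": 2020, "staff_hours": 320, "volunteer_tasks": 6, "publications": 2},
    {"lab": "Lab B", "year": 2020, "staff_hours": 150, "volunteer_tasks": 2, "publications": 1},
    {"lab": "Lab A", "year": 2021, "staff_hours": 280, "volunteer_tasks": 5, "publications": 3},
]

def program_summary(responses, year):
    """Aggregate one year's lab responses into simple program-level indicators."""
    rows = [r for r in responses if r["year"] == year]
    return {
        "labs_reporting": len(rows),
        "total_publications": sum(r["publications"] for r in rows),
        "mean_staff_hours_per_volunteer_task": mean(
            r["staff_hours"] / r["volunteer_tasks"] for r in rows
        ),
    }

print(program_summary(lab_responses, 2020))
```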

SERC found it useful to include the multifaceted perspectives of those who provide information for the inventory. Information is not equally distributed within labs, and different people play different roles in the process. Generally, PIs tend to have roles related to formal communications and academic outputs, while technicians are more likely to work directly with volunteers and have more insights into the specific activities in which participatory scientists participate. SERC citizen science staff have found using the SPI in this way helpful for facilitating recognition of the roles and expertise that different lab members bring to the project. Approaching the evaluation collaboratively helps generate more accurate and actionable feedback about projects.

Discussion

Here we demonstrate the utility of a customizable evaluation tool, the SPI, across three project contexts. Each evaluation functions at a different scope, from one project, to a focused cohort of projects serving an international audience, to a broad network of projects. While they share similarities, each project has a slightly different purpose, method, and use. Our uses of the SPI framework reflect this variety of needs, as well as the structural and aspirational similarities of many projects (as discussed by ). Despite differences in organization and process, we each use the SPI to analyze our projects and draw insights (Table 2). Utilizing the same tool streamlined information sharing between our projects. Our findings highlight the importance of practitioners identifying and using evaluation tools that they can customize to their own needs.

The SPI helped non-evaluators independently track data that augmented funder-required standards. The process was not onerous, and the simple task of listing individual products inspired consideration of future directions. Aurorasaurus’s SPI continues to assist with reporting. GLOBE and SERC input the framework’s parameters into the Qualtrics platform to create surveys to which project teams can easily contribute. The authors gained insights into the process of recognizing which outputs are less relevant to the project, and which subject matter experts within a project are most knowledgeable about different types of information. For example, SERC leveraged different roles within projects to streamline collection of the most relevant data. Gathering such data provides an initial step toward more targeted evaluations, such as data quality and impact, and toward even more participatory evaluation models. It also helps researchers better assess the resources needed for project development and sustainability when creating and leading projects.

While Wiggins et al. () describe the SPI in terms of a single initial use, having used the SPI over time, we each found it practical for longitudinal evaluation. For example, prior to the publication of the SPI, GLOBE had engaged in quantitative reporting but was not yet evaluating the project in an integrated, structured manner. The SPI collated a broad set of measurables relevant to participatory science and provided each of us with a practical, structured method of annual evaluation that is consistent between projects and from year to year. By tracking products in multiple categories, it illustrated that the traditional expectation of a standard annual percent increase in participation is unrealistic for participatory science, especially in cases in which data-gathering opportunities are tied to external factors like natural cycles. Far from our projects lacking impact during quieter periods, SPI data revealed underlying trends in how participant activities may naturally vary and deepen over time: as external factors shift, higher-quantity engagement can give way to higher-quality interactions and vice versa. SERC also found the SPI feasible for annual use, and the Aurorasaurus team uses a modified SPI to track products year-round.
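As a hypothetical illustration of this kind of year-to-year comparison, the snippet below diffs two annual snapshots of a few count-style inventory items; the item names and numbers are invented for the example and are not taken from any of our projects.

```python
# Hypothetical sketch: compare two annual snapshots of count-style SPI items.
# Item names and numbers are invented for illustration only.
snapshot_prev = {"Scholarly publications (#)": 3, "Reports (#)": 2, "Data packages (#)": 1}
snapshot_curr = {"Scholarly publications (#)": 4, "Reports (#)": 1, "Data packages (#)": 3}

def year_over_year(prev, curr):
    """Return the change in each tracked item between two annual inventories."""
    return {item: curr.get(item, 0) - prev.get(item, 0) for item in sorted(set(prev) | set(curr))}

for item, delta in year_over_year(snapshot_prev, snapshot_curr).items():
    print(f"{item}: {delta:+d}")
```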

Participation is a visible aspect that inspires volunteers to reach out to projects, and within settings that rely on traditional metrics such as publication counts, we found the SPI useful for articulating the importance of participation-based metrics. Cyclical data collection is not uncommon in participatory science, and Aurorasaurus is subject to the Sun’s 11-year activity cycle. Longitudinal studies conducted at similar points in the cycle may be more comparable within a longer-term effort. Aurorasaurus plans to benchmark the upcoming active period of solar maximum to compare with its original 2015 survey. By contrast, during quiet solar minimum periods when there are fewer active auroras to report, Aurorasaurus data show more activity in participatory science research and outreach. A traditional evaluation method might not have captured the depth and shift of this engagement, but the team’s SPI data revealed the project as highly active, even when the Sun is not.

We each adapted the SPI framework differently (Table 2). Aurorasaurus proposed new products and categories, such as Scientific Discoveries (Data), Informal Learning/Media Assets (Written, Communications), and Collaborations and Interdisciplinary Partnerships (Communications). GLOBE created two additional SPI-inspired inventories to encompass Education Research and Engagement Products. SERC added a section related to project structure, and trimmed less-applicable categories. Other projects may customize the inventory to fit needs different from our projects’, which are built with concrete outputs in mind and all use a contributory format. We note that in a co-creative context, project outcomes are not top-down defined, and may involve participatory scientists directly in the evaluation process ().

Our varied needs also highlight the limitations of the SPI as a quantitative method that does not address qualitative data; it provides a solid but limited foundation for tracking outputs. Unlike the traditional, hierarchical model of science, participatory science is based on collaborative information-sharing between participatory scientists and scientists. Projects look to develop this evolving field to the mutual benefit of all its stakeholders: scientists, grantors, the general public, and participatory scientists themselves. While grantors may require other metrics, qualitative frameworks center on this crucial participant experience. Even as it offers transparency and extensibility, the SPI inherently excludes elements such as learning and impact. As project teams, we have found it to be a useful starting point, which we combine with other methods as appropriate for our goals and contexts, such as those put forward in Phillips et al. (), Friedman et al. (), and Phillips et al. (). We note that evaluating the methodology of the SPI is beyond the scope of our work.

Mayer et al. () point out that participatory science evaluation in practice may sometimes involve only external evaluators, not project leaders and/or participants. Because of this, frameworks and tools are often built with professional evaluators in mind. However, we agree with Passani et al. () on the need for participatory evaluation frameworks and tools that are more accessible to non-evaluators. While all of our projects are well supported and medium to very large in scale, our team sizes vary greatly. As a checklist-style tool, the SPI can be implemented by small teams of one to two people. It can be applied across large projects covering single or multiple disciplines, and even across sets of projects. The Aurorasaurus and SERC project teams, who are not formally trained evaluators, found implementation as simple as filling in a spreadsheet, making the SPI a method with a low barrier to entry. Even those for whom external evaluation is out of scope may find value in publishing their inventories to repositories like the Open Science Framework (OSF) or Zenodo for transparency, citability, and open benchmarking. As an example of an inventory, see MacDonald and Brandt (). We found the SPI a helpful tool for making evaluation accessible and practicable. It sustainably facilitated data gathering that can feed into other evaluation frameworks. We hope our work informs other project teams’ efforts to identify, customize, and implement evaluation tools that will benefit projects, participants, and the broader field.

Conclusion

Participatory science as a field incorporates aspects of multiple disciplines, including science, informal learning, social sciences, and volunteer management. Projects varying widely in subject matter and construction often share structural similarities. Robust evaluation can reflect such parallels. Easy-to-use frameworks and tools flexible enough to accommodate a variety of needs and workflows, like the SPI, were found to be useful to both evaluators and project leads.

The SPI did not yet exist when the Aurorasaurus, GLOBE, and SERC participatory science projects began, so our work pertains to formative evaluation. However, the Science Products and Data Practices inventories provide strong goalposts that can help shape a project at the outset. Now that this framework is available, it can be used for front-end evaluation and project planning. For example, SERC used it in project planning as part of an iterative design and goal-setting process for the institution’s program of participatory science activities. The SPI complements existing evaluation frameworks by providing longitudinal benchmarks to projects that may not have extensive evaluation resources but seek transparency and ease of use.

Data Accessibility Statement

Some SPI data are provided and/or summarized in the Supplemental Files. In addition, Aurorasaurus published its SPI as a data release (). Other data are not publicly available because they were collected for program improvement purposes, but evaluation reports using these data are available upon request.

Supplementary Files

The Supplementary files for this article can be found as follows:

Supplemental File 1

SPI Background Information. DOI: https://doi.org/10.5334/cstp.536.s1

Supplemental File 2

Project Background Information. DOI: https://doi.org/10.5334/cstp.536.s2

Supplemental File 3

Aurorasaurus SPI Utilization. DOI: https://doi.org/10.5334/cstp.536.s3

Supplemental File 4

NESEC Products Inventories. DOI: https://doi.org/10.5334/cstp.536.s4

Supplemental File 5

NESEC Output Tracker. DOI: https://doi.org/10.5334/cstp.536.s5

Supplemental File 6

SERC SPI Utilization. DOI: https://doi.org/10.5334/cstp.536.s6