Introduction

Citizen science, defined here as public participation in scientific research, was originally conceived as a method for gathering large amounts of data across time and space (). For decades or even centuries, citizen science has contributed to knowledge and understanding about far-ranging scientific topics, questions, and issues (). More recently, citizen science practitioners—those who conceive, develop, and implement citizen science projects—have sought not only to achieve science research outcomes but also to elicit learning and behavioral outcomes for participants (; ).

Many proponents of citizen science argue that participating directly in the scientific process via citizen science is an excellent way to increase science knowledge and literacy (; ; ; ); understand the process of science (; ); and develop positive action on behalf of the environment (; ; ; ). While some projects have demonstrated achievement of a few learning outcomes (see for examples), most projects have yet to document robust outcomes such as increased interest in science or the environment, knowledge of science process, skills of science inquiry, or stewardship behaviors (; ; ; ).

Several factors may account for the lack of demonstrated and measurable learning outcomes. First, the field of citizen science is still young. Few if any specific outcomes have been defined or described by the field; therefore, project designers may not have clear concepts of what types of learning they are attempting to foster. In addition, measuring learning requires dedicated time, resources, and expertise in conducting social science research or evaluations, which many citizen science projects lack. As a result, citizen science suffers from a lack of quality project evaluations and cross-programmatic research ().

The informal science learning community recently developed guidance including tools and resources for evaluating learning outcomes from participation or engagement in informal science education (ISE) activities (; ). These tools and resources are relevant to the field of citizen science, because many citizen science projects operate in informal environments such as private residences, parks, science and nature centers, museums, community centers, after-school programs, or online. In addition, many citizen science projects are funded through ISE initiatives because the projects are expected to foster lifelong science learning (). Therefore, tools developed to measure learning outcomes resulting from ISE can serve as logical starting points for evaluating outcomes of citizen science participation.

The objectives of the research presented in this paper were to determine and describe the types of learning outcomes that are intended by citizen science project developers, to examine the alignment of these outcomes with informal science learning frameworks and guidelines, and to develop and present a new framework for articulating citizen science learning outcomes. We believe that the framework will help citizen science practitioners to design projects that achieve measurable learning. We also hope that the framework will facilitate cross-programmatic research to help the citizen science field show how its projects are impacting science and society. Our research further sought to determine the extent to which citizen science learning outcomes have been evaluated across the field, as a first step toward our overall goal of deepening evaluation capacity for the citizen science community.

Citizen Science and Informal Science Learning

The educational underpinnings of citizen science—particularly when involving adults—draw heavily from Informal Science Education (ISE), what Falk and Dierking () refer to as “free-choice learning”—lifelong, self-directed learning that occurs outside K-16 classrooms. Two influential documents from the ISE field provided a starting point for our study. The Framework for Evaluating Impacts of Informal Science Education Projects (), supported by the National Science Foundation (NSF), was the first publication produced by the ISE field that described a “standard” set of learning outcomes (referred to as impact categories) that could be used to systematically measure project-level outcomes. (We will refer to this publication as the “ISE Framework” for the remainder of this paper.) A major goal of the framework was to facilitate cross-project and cross-technique comparisons of the impacts of ISE projects on public audiences. The five impact categories are:

  • Knowledge, awareness, or understanding of Science, Technology, Engineering, and Math (STEM) concepts, processes, or careers
  • Engagement or interest in STEM concepts or careers
  • Attitude toward STEM concepts, processes, or careers
  • Skills based on STEM concepts or processes
  • Behavior related to STEM concepts, processes, or careers

A second document, Learning Science in Informal Environments: People, Places, and Pursuits (), focuses on characterizing the cognitive, affective, social, and developmental aspects of science learners. Termed the “LSIE strands,” these aspects of science participation include:

  • Interest and motivation to learn about the natural world
  • Application and understanding of science concepts
  • Acquisition of skills related to the practice of science
  • Reflecting on science as a way of knowing, participating in, and communicating about science
  • Identifying oneself as someone capable of knowing, using, and contributing to science

The authors of the LSIE strands noted that while the concepts originated in research, at the time of writing they had not yet been applied or analyzed systematically. The significant overlap between the LSIE strands and the ISE Framework’s impact categories is shown in Table 1.

Table 1

Comparison of NSF Framework and LSIE strands.

NSF Framework Category | LSIE Strands

Knowledge, Awareness, Understanding: Measurable demonstration of assessment of, change in, or exercise of awareness, knowledge, understanding of a particular scientific topic, concept, phenomena, theory, or careers central to the project. | Strand (2), Understanding: Come to generate, understand, remember, and use concepts, explanations, arguments, models, and facts related to science.
Engagement, interest, or motivation in science: Measurable demonstration of assessment of, change in, or exercise of engagement/interest in a particular scientific topic, concept, phenomena, theory, or careers central to the project. | Strand (1), Interest and motivation: Experience excitement, interest, and motivation to learn about phenomena in the natural and physical world.
Skills related to science inquiry: Measurable demonstration of the development and/or reinforcement of skills, either entirely new ones or the reinforcement, even practice, of developing skills. | Strand (3), Science Exploration: Manipulate, test, explore, predict, question, and make sense of the natural and physical world; and Strand (5): Participate in scientific activities and learning practices with others, using scientific language and tools.
Attitudes toward science: Measurable demonstration of assessment of, change in, or exercise of attitude toward a particular scientific topic, concept, phenomena, theory, or careers central to the project, or one’s capabilities relative to these areas. Attitudes refer to changes in relatively stable, more intractable constructs such as empathy for animals and their habitats, appreciation for the role of scientists in society, or attitudes toward stem cell research. | Related to Strand (6), Identity: Think about themselves as science learners, and develop an identity as someone who knows about, uses, and sometimes contributes to science. Also related to Strand (4), Reflection: Reflect on science as a way of knowing; on processes, concepts, and institutions of science; and on their own process of learning about phenomena.
Behavior: Measurable demonstration of assessment of, change in, or exercise of behavior related to a STEM topic. Behavioral impacts are particularly relevant to projects that are environmental in nature, since action is a desired outcome. | Related to Strand (5), Skills: Participate in scientific activities and learning practices with others, using scientific language and tools.

A third ISE document also contributed to framing this study. In 2009, an inquiry group sponsored by the Center for Advancement of Informal Science Education (CAISE) produced “Public Participation in Scientific Research: Defining the Field and Assessing Its Potential for Informal Science Education” (), which was created as a “first step toward developing an organized methodology for comparing outcomes across a variety of Public Participation in Scientific Research (PPSR) projects” (p. 20). This paper included a rubric of potential citizen science learning outcomes, based on the ISE Framework, and examined ten NSF-funded citizen science projects to assess whether they reported outcomes similar to those described in the ISE Framework.

One result of the CAISE report was a realization that citizen science practitioners were measuring project outcomes in varied ways, making it difficult for cross-programmatic research to study the collective impact of the field. Another result was the development of a project typology based on the level of participant involvement in the scientific process (). This typology described “Contributory” projects that are researcher-driven, where participants primarily focus on data collection; “Collaborative” projects that are typically led by researchers, but may include input from participants in phases of the scientific process such as designing methods, analyzing data, and disseminating results; and “Co-created” projects that involve participants in all aspects of the scientific process, from defining a question to interpreting data to disseminating results (Figure 1). This typology allowed projects designed for different reasons and in different ways to be grouped to help researchers understand common outcomes.

Figure 1 

Participants in citizen science engage in a large number of activities such as designing studies, collecting and analyzing data, and disseminating project results. What do project designers hope that participants will learn from their participation? How are desired learning outcomes designed? How are they measured?

Credit: No copyright. Pacific Southwest Region USFWS/Flickr/Public Domain.

The three documents described above served as foundations for articulating learning outcomes from citizen science participation; however, they lacked systematic empirical support. This study provides such support by ground truthing and applying the concepts within the ISE Framework, the LSIE strands, and the Bonney et al. () rubric to the field of citizen science.

Methods and Results

Our research used two sources of data—a structured review of citizen science project websites and an online survey of citizen science practitioners—to address the following three questions:

  1. What are the learning outcomes that are intended or desired by citizen science practitioners, and to what extent do these outcomes align with those described by the field of informal science education? (Data Source: Website Review)
  2. What is the status of evaluation of citizen science learning outcomes across the field? (Data Source: Online practitioner survey)
  3. How are citizen science learning outcomes measured by different projects? (Data Source: Online practitioner survey)

We also conducted a literature review to uncover definitions, descriptions, and elucidations of the learning outcomes that we identified through our research. We used the results of this review, along with our new understanding of the outcomes desired and measured by citizen science practitioners, to develop a framework of common learning outcomes for the citizen science field.

Intended Learning Outcomes

To describe and understand the learning outcomes that are intended or desired by citizen science practitioners as they develop projects, we first identified individual projects by conducting a semi-structured search of the following citizen science portals: Citizen Science Central (citizenscience.org); InformalScience (informalscience.org); SciStarter (scistarter.com); Citizen Science Alliance (citizensciencealliance.org); and National Directory of Volunteer Monitoring Programs (yosemite.epa.gov/water/volmon.nsf/). The last portal included 800 projects, from which we sampled every fifth one. If a project was listed on multiple portals, we included it only once. In total, 327 citizen science projects met our criteria for study inclusion: Being open to participation in the U.S. or Canada, having an online presence, and being operational at the time of the search (2011). The complete list of databases and search terms used to locate citizen science projects is available in the supplemental material for this paper (Appendix A).
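The screening logic is straightforward to express in code. The Python sketch below illustrates the every-fifth-entry sampling and cross-portal deduplication described above; the portal names, URLs, and data structures are hypothetical stand-ins for the actual listings, and this is an illustration rather than the script used in the study.

```python
# Illustrative sketch (not the study's actual tooling): combine scraped
# portal listings, sample every nth entry where a portal is too large to
# review exhaustively, and count each cross-listed project only once.

def dedupe_and_sample(portals, sample_every=None):
    sample_every = sample_every or {}
    seen = set()
    candidates = []
    for portal, urls in portals.items():
        step = sample_every.get(portal, 1)  # e.g., every 5th entry
        for url in urls[::step]:
            if url not in seen:  # projects listed on multiple portals count once
                seen.add(url)
                candidates.append(url)
    return candidates

# Hypothetical listings; real inputs would come from the five portals named above.
portals = {
    "scistarter.com": ["http://example.org/projA", "http://example.org/projB"],
    "volunteer-monitoring-directory": [f"http://example.org/vm{i}" for i in range(800)],
}
candidates = dedupe_and_sample(portals, sample_every={"volunteer-monitoring-directory": 5})
print(len(candidates))  # candidates then screened against the inclusion criteria
```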

From each of the 327 project websites, we gathered the following information: Project name, URL, contact information, general goal statements, learning objectives or desired outcomes (if any), and potential indicators of learning (if any). Nine percent of project websites did not describe intended learning outcomes (e.g., some projects stated their goals to be purely scientific in nature), but the remaining 91% of projects described at least one. We coded each goal statement and learning objective into one of the major categories outlined in the ISE Framework (knowledge, engagement, skills, attitude, behavior, other) and into sub-codes outlined in the assessment rubric by Bonney et al. ().

Several projects described multiple learning outcomes. In these cases, each distinct outcome was coded separately. For example, the Great Lakes Worm Watch states that its goal is “increasing scientific literacy and public understanding of the role of exotic species in ecosystems change.” Objectives are to “provide the tools and resources for citizens to actively contribute to the development of a database documenting the distributions of exotic earthworms and their impacts across the region as well as training and resources for educators to help build understanding of the methods and results of scientific research about exotic earthworms and forest ecosystems ecology.” Text from the goal statement and learning objectives was coded into outcome categories as follows:

“Increasing scientific literacy and public understanding” → content knowledge
“Citizens actively contribute to the development of a database” → data collection and monitoring; data submission
“Help build understanding of the methods and results of scientific research” → Nature of Science knowledge
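For readers who prefer a concrete representation, the coding step can be pictured as a simple mapping from statement excerpts to framework codes. The snippet below re-expresses the Worm Watch example; the data structure and variable names are ours, for illustration only.

```python
# Re-expression of the Worm Watch coding example above. Category labels
# follow the ISE Framework and the Bonney et al. sub-codes; the dictionary
# itself is purely illustrative.

coded_statements = {
    "increasing scientific literacy and public understanding":
        ["content knowledge"],
    "citizens actively contribute to the development of a database":
        ["data collection and monitoring", "data submission"],
    "help build understanding of the methods and results of scientific research":
        ["nature of science knowledge"],
}

# A project's set of distinct outcome codes (a project may state several):
project_codes = sorted({c for codes in coded_statements.values() for c in codes})
print(project_codes)
# ['content knowledge', 'data collection and monitoring',
#  'data submission', 'nature of science knowledge']
```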

Results from our coding of project goals and objectives are presented in Table 2. They reveal that the number of aspirational learning outcomes for projects ranged from zero to as many as seven, with about 40% of projects including at least two. The majority of projects (59%) focused on influencing skills related to data collection and monitoring. Intended outcomes for these projects were often stated as “Volunteers gain data collection and reporting skills.” The second most frequently stated intended learning outcome (28% of projects) was understanding of content knowledge (e.g., “volunteers learn about macroinvertebrates and stream health”). The third most-common intended outcome, increased environmental stewardship—which typically includes some type of behavior change (e.g., “engage watershed residents in protecting water quality”)—was specified by about 26% of projects. Other intended learning outcomes were mentioned much less frequently, including increases in knowledge of the nature of science, data analysis skills, interest in the environment, civic action, data submission, communication skills, use of technology, science careers, study design, and also shifts in attitude/awareness. Considering all projects for which intended learning outcomes were stated, each of the ISE Framework impact categories was represented, suggesting a strong alignment between learning outcomes desired for citizen science participants and those for participants in the informal science learning community more generally.

Table 2

Count of specified learning outcomes as coded from 327 citizen science project websites. Percentages represent the proportion of projects that described the stated outcome. Several projects stated more than one outcome.

Stated outcome on project websites | Count of projects stating outcome (N = 327) | Percentage of projects stating outcome

Data Collection and Monitoring | 193 | 59%
Content Knowledge | 90 | 28%
Environmental Stewardship | 86 | 26%
No Education Goal Specified | 29 | 9%
Attitude/Awareness | 25 | 8%
Nature of Science | 20 | 6%
Data Analysis | 14 | 4%
Interest in the Environment | 13 | 4%
Civic Action | 12 | 4%
Submitting Data | 12 | 4%
Interest in Science | 10 | 3%
Community Health | 9 | 3%
Communication Skills | 7 | 2%
Using Technology | 6 | 2%
Science Careers | 4 | 1%
Designing Studies | 2 | 0.5%
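Given one set of outcome codes per project, tallies like those in Table 2 reduce to counting projects per code. A minimal sketch follows, with three hypothetical projects standing in for the 327 coded websites.

```python
from collections import Counter

# Hypothetical per-project code sets; each project contributes each of its
# codes at most once, as in Table 2.
projects = {
    "proj_1": {"data collection and monitoring", "content knowledge"},
    "proj_2": {"data collection and monitoring", "environmental stewardship"},
    "proj_3": {"content knowledge"},
}

# Count projects per code, then express counts as a share of all projects.
counts = Counter(code for codes in projects.values() for code in codes)
n_projects = len(projects)
for code, count in counts.most_common():
    print(f"{code}: {count} ({100 * count / n_projects:.0f}%)")
```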

Status of Citizen Science Project Evaluation

To uncover the status of evaluation of citizen science learning outcomes across the field, we conducted an online survey of citizen science practitioners in March 2011. Delivered via Survey Monkey™, the survey contained 25 questions: 20 closed-ended questions with predetermined response options (including “other”) and five open-ended questions with text boxes for answers. Only one question, which asked respondents to classify their project according to the three-model typology of citizen science developed by Bonney et al. (), required a response. Additional questions focused on the duration of the project, the approximate number of participants, and the type of training that participants received. Respondents also were asked whether any type of evaluation had ever been conducted for their project; details about evaluations that were conducted; which learning outcomes described in the ISE Framework had been measured; and what other types of outcomes had been measured. The complete set of survey questions is available in the supplemental material (Appendix B).

Following approval by the Cornell University Institutional Review Board (#1102002014), we sent an email invitation to potential respondents describing the goal of the survey and explaining that participation was voluntary and confidential. Two reminder emails were sent approximately two and four weeks following the initial invitation. An informed consent statement was included at the start of the survey. Potential respondents were recruited via the citizenscience.org listserv (citsci-discussion-l), which anyone could join, and which at the time of recruitment had approximately 1,100 members. Not all members of the listserv were project leaders, and multiple list members likely represented a single project, making it difficult to know the actual number of projects represented by listserv members. After the survey was closed, we made sure that all responding projects were included in the previously described website review, to obtain as much overlap between the two datasets as possible.

The survey was completed by 199 respondents representing 157 unique projects (some projects had multiple entries, in which case only the first entry was included; other respondents failed to include information about their project name, which was optional). All but ten of the 157 unique projects also were represented in the project website data. The remaining ten projects that responded to the online survey but were not in the website review were either no longer operational, not in the US or Canada, or did not have a web presence. The largest group of projects (72, or 37%) had been operating from 1–5 years, and nearly half (49%) had fewer than 100 participants. Because most questions were optional, response rates varied for different survey items.

Results revealed that of the 199 respondents, 114 (57%) had undertaken some type of project evaluation. More than half of the evaluations were administered by internal project staff to measure project outcomes or impacts, mostly using data collected through surveys. About one third of respondents reported using post-only or pre- and posttest evaluation designs. Reasons for conducting project evaluations included: Gauging participant learning; identifying project strengths and weaknesses; obtaining additional funding or support; promoting a project more broadly; and providing recommendations for project improvement. In addition to asking about project learning outcomes (described in the next section), the survey also asked what other aspects of the project had been evaluated. Two thirds of respondents reported measuring participant satisfaction with or enjoyment of the project, followed by motivation to participate (53%) and project outputs such as numbers of participants, web hits, journal articles, and amount of data collected (44%). Other measured outcomes included scientific/conservation outcomes (39%); effectiveness of workshops and trainings (38%); data quality (37%); community capacity building (23%); and social policy change (3%).

Another open-ended question asked respondents “Please do your best to provide the name or description of any instrument (e.g., Views on Science and Technology Survey) used to collect evaluation data, even if you developed the instrument.” Of the 72 respondents to this question, only three had used a pre-existing, validated instrument. The majority of respondents had developed their own instruments in-house or had an external evaluator develop original instruments. A handful of respondents replied with “Survey Monkey” or some other data collection platform as opposed to describing an evaluation instrument. Some mentioned tools such as GPS units or calipers as instruments used by the project, while others stated that they did not understand the question. When asked about their overall satisfaction with their evaluations, more than half of respondents expressed agreement or strong agreement that evaluations were of high quality, that evaluation findings were informative to the project developers, that recommendations from the evaluation were implemented, that the project had improved as a result of evaluation, that they learned a lot about evaluation, and that they felt confident they could personally conduct an evaluation in the future.

Survey respondents also were asked about aspects of the evaluation process for which they would like assistance. The highest priority was help with developing goals, objectives, and indicators, followed by creating or finding appropriate survey instruments, help with analyzing or interpreting data, and help with data collection. Participants also were asked what specific resources would be most helpful for conducting evaluations. The most common replies were a database of potential surveys and data collection instruments; sample evaluation reports from citizen science; examples of evaluation designs; and an entry-level guide for conducting evaluations.

Finally, respondents were presented with a list of eight different online organizations that support or provide resources for evaluation and were asked how often they access them. Surprisingly, the majority of respondents had never heard of any of the resources or organizations. The only exception was citizenscience.org, which was used by 46% of respondents, but rarely (as opposed to frequently, sometimes, or never). These results show a range of evaluation efforts and a positive attitude toward evaluation and findings among citizen science practitioners, but also a need for more knowledge of and accessibility to evaluation tools and resources.

Measurement of Learning Outcomes

Respondents who reported having conducted evaluations (114, or 57%) were asked “For the most recent evaluation of your project, which broad categories of learning outcomes, if any, were evaluated?” Responses to this question were based on the ISE Framework’s broad impact categories. Aggregated results across all projects revealed that interest or engagement in science was the most commonly measured outcome (46%), followed by knowledge of science content (43%). Behavior change resulting from participation and attitudes toward science process, content, careers, and the environment were measured by 36% and 33% of projects, respectively. Science inquiry skills (e.g., asking questions, designing studies, data collection, data analysis, and using technology) were the least commonly measured outcomes across all projects (28%). In an open-ended question about other types of learning outcomes, about 10% of respondents also described measuring motivation and self-efficacy or confidence to participate in science and environmental activities.

Considering differences in the categories of learning outcomes measured within project types, contributory projects (for which 69 respondents had conducted evaluations) reported measuring interest in science most frequently (43%) and skills of science inquiry least frequently (18%). Two-thirds of all collaborative projects (N = 21) measured content knowledge, followed by interest (57%), behavior change (52%), and attitudes and skills (both 43%). Only nine survey respondents represented co-created projects that had conducted evaluations, and of these, skills of science inquiry were measured most often. Responses combined across projects and separated among project types are summarized in Figure 2.

Figure 2 

Measured learning outcomes from online survey of citizen science practitioners who reported having conducted some sort of evaluation (n = 99).
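The within-type breakdown summarized in Figure 2 is essentially a cross-tabulation: for each typology, the share of evaluated projects measuring each outcome. A sketch of that computation follows, again with hypothetical records rather than the survey data.

```python
from collections import Counter, defaultdict

# (project typology, outcomes measured) pairs; hypothetical records only.
evaluated_projects = [
    ("contributory", {"interest", "content knowledge"}),
    ("contributory", {"interest"}),
    ("collaborative", {"content knowledge", "behavior"}),
    ("co-created", {"inquiry skills"}),
]

n_by_type = Counter(ptype for ptype, _ in evaluated_projects)
outcomes_by_type = defaultdict(Counter)
for ptype, outcomes in evaluated_projects:
    outcomes_by_type[ptype].update(outcomes)  # each outcome once per project

# Percentages are computed within each typology, mirroring Figure 2.
for ptype, counts in outcomes_by_type.items():
    for outcome, count in counts.most_common():
        share = 100 * count / n_by_type[ptype]
        print(f"{ptype}: {outcome} measured by {count}/{n_by_type[ptype]} projects ({share:.0f}%)")
```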

Earlier in this paper we showed that a majority of citizen science project websites described intended learning outcomes very similar to those in the ISE framework, although not always using the same language. Results from the online practitioner survey added to our “ground truthing” of the ISE Framework, as respondents described attempts to measure these same outcomes, albeit to varying degrees. Open-ended responses highlighted the need to emphasize efficacy as an important learning outcome in citizen science. Survey respondents also made it clear that additional resources were needed to help formulate and measure learning outcomes.

A Framework for Articulating and Measuring Common Learning Outcomes for Citizen Science

In addition to synthesizing and comparing empirical results from our website review and practitioner survey to describe intended and measured learning outcomes, we used keyword searches to conduct a review of more than 40 peer-reviewed articles focused on defining and measuring these learning outcomes. Our data and review facilitated a re-conceptualization or contextualization of several of the impact categories presented in the ISE Framework to make them relevant to citizen science, in particular to environmentally based projects. For example, some outcomes uncovered in our research, such as “skills of science inquiry,” map well to the categories and their definitions in the ISE Framework. Other outcomes, such as “attitude,” required clarification. Our new framework is thus based on both empirical data and contributions from the literature and includes the following learning outcomes: Interest in Science and the Environment; Self-efficacy for Science and the Environment; Motivation for Science and the Environment; Knowledge of the Nature of Science; Skills of Science Inquiry; and Behavior and Stewardship (Figure 3).

Figure 3 

Framework for Articulating and Measuring Individual Learning Outcomes from Participation in Citizen Science.

This framework should help citizen science practitioners consider some of the more commonly desired and achievable learning outcomes when designing projects. However, we emphasize that no single project should try to achieve and/or measure all, or even most, of these outcomes, as doing so can set up unreasonable expectations for both the project and its evaluation. We also note that the framework is not exhaustive. Indeed, as citizen science continues to expand, new research will inevitably reveal other learning outcomes that are important to articulate and measure.

Below we describe each outcome within the framework, highlighting how each has been explained in the broad educational field and also providing examples of how each has been used in published studies of citizen science. These outcomes are not hierarchical but, beginning with interest in science and the environment, build from and reinforce each other.

Interest in Science and the Environment

We define interest as the degree to which an individual assigns personal relevance to a science or environmental topic or endeavor. Within ISE, Hidi and Renninger () treat interest as a multi-faceted construct encompassing cognitive (thinking), affective (feeling), and behavioral (doing) domains across four phases of development: triggered situational interest, typically stimulated by a particular event and requiring support by others; maintained situational interest, which is sustained through personally meaningful activities and experiences; emerging individual interest, characterized by positive feelings and self-directed pursuit of re-engaging with certain activities; and well-developed individual interest, leading to enduring participation and application of knowledge. Our definition of interest is compatible with Hidi and Renninger’s () later phases of interest development, which are characterized by positive feelings and an increasing investment in learning more about a particular topic. Interest in science is considered a key driver of pursuing science careers among youth (; ) and of sustained lifelong learning and engagement among adults (; ). Over time, this type of interest can lead to sustained engagement and motivation and can support identity development as a science learner (; ). Further, interest is noted as an important precursor to deeper engagement in democratic decision-making processes regarding science and technology ().

Although interest is considered to be an attitudinal construct (see ; ; ), equating interest with attitudes should be avoided because attitude is a very broad construct, encompassing related but distinct sub-constructs such as efficacy, interest, curiosity, appreciation, enjoyment, beliefs, values, perseverance, motivation, engagement, and identity (). Interest also has been used synonymously with engagement (), but as McCallie et al. () point out, engagement has yet to be well defined and has multiple meanings within the literature, particularly in ISE.

Citizen science projects, especially those for which repeated visits or experiences are the norm, can lend themselves to deeper and sustained interest in science and the environment, yet few studies have looked at interest as an outcome, and those that have report mixed results. Price and Lee () reported increased interest in science among Citizen Sky observers, especially among participants who engaged in online social activities. Crall et al. () examined general interest in science as a reason for participation in citizen science and suggested that interest was not a driving force for joining a project. Interest in specific nature-based topics (e.g., butterflies) was seen as a driver for engagement and also as a motivator for adding increasingly more complex data protocols to the French Garden Butterflies Watch project (). Other research has shown that interest in the use of natural resources can be a very strong determinant of future and sustained involvement in decision-making about the management of natural resources (). From these studies, it appears that examining interest in science broadly may be less effective than measuring interest in specific science topics. However, an audience’s pre-existing interests in specific topics may not change significantly through participation.

Self-efficacy for Science and the Environment

Another important outcome for studying learning is self-efficacy, i.e., a person’s beliefs about his/her capabilities to learn specific content and to perform particular behaviors (). Research has found that self-efficacy affects an individual’s choice, effort, and persistence in activities (, ; ). Individuals who feel efficacious put more effort into their activities and persist at them longer than those who doubt their abilities. Self-efficacy is sometimes referred to as “perceived competence” (in Self Determination Theory) and “perceived behavioral control” (in Ajzen’s Theory of Planned Behavior, ). Berkowitz et al. () treat self-efficacy as an essential component in environmental citizenship (along with motivation and awareness), which is dependent on an individual’s belief that they have sufficient skills, knowledge, and opportunity to bring about positive change in their personal lives or community.

In the context of citizen science, self-efficacy is the extent to which a learner has confidence in his or her ability to participate in a science or environmental activity. In a study involving classrooms, middle school students participating in a horseshoe crab citizen science project showed greater gains in self-efficacy than a control group (). In an online astronomy project, however, researchers found a significant decrease in efficacy toward science, possibly owing to a heightened awareness of how much participants did not know about the topic (). Crall et al. () determined that self-efficacy is important not only for carrying out the principal activities of a project but also for the potential of individuals to carry out future activities related to environmental stewardship. Working in a participatory action project with Salal harvesters, Ballard and Belsky () found that the process of co-developing and implementing different experiments increased workers’ self-efficacy regarding their skills in scientific research. Although efficacy was not called out directly in the ISE Framework, it can be considered part of LSIE Strand 6, “identity as a learner” (). Self-efficacy also was mentioned by project leaders in our online survey and thus appears to be an important potential outcome of citizen science participation.

Motivation for Science and the Environment

Motivation is a multi-faceted and complex attitudinal construct that describes some form of goal setting to achieve a behavior or end result. The LSIE strands () include motivation to sustain science learning over an individual’s lifetime as an important aspect of learning in informal environments. The literature on volunteerism frames motivation as an important factor in effective recruitment, accurate placement, and volunteer satisfaction and retention (, ). Of the dozens of theories on motivation, two perspectives seem especially relevant to volunteerism and citizen science. First, the Volunteer Functions Inventory (VFI), developed by Clary et al. (), examines how behaviors help individuals achieve personal and social goals. Clary et al.’s () categories of motivation include values (importance of helping others); understanding (activities that fulfill a desire to learn); social (influence by significant others); career (exploring job opportunities or work advancement); esteem (improving personal self-esteem); and protective (escaping from negative feelings). Wright et al. () studied the motivations of birders in South Africa using a modified version of the VFI and found five categories of motivation to be most important: Recreation/nature; values; personal growth; social interactions; and project organization.

The second perspective comes from Self-Determination Theory (SDT), which treats motivation as an explanatory variable for meeting basic psychological needs (i.e., competence, relatedness, and autonomy) and describes different types of motivations as falling on a continuum from intrinsic to extrinsic (, ). According to SDT, individuals are likely to continue pursuing a goal to the extent that they perceive intrinsic value in the pursuit of that goal (i.e., the extent to which they experience satisfaction in performing associated behaviors themselves versus performing behaviors to comply with extrinsic goals such as conforming to social pressures, fear, or receiving rewards). Although SDT can help practitioners better understand the psychological needs behind participation, few published studies have used SDT in the context of citizen science. One exception is a paper by Nov et al. (), which used SDT with social movement participation models in an examination of three digital citizen science projects. These researchers found that intrinsic motivation was one of four drivers that influenced quantity of participation, but that it did not affect quality of participation.

In the context of citizen science, motivation can serve as both an input and an outcome, i.e., to understand the basis of motivation for ISE/citizen science experiences (input) and to sustain motivation to continue participating over long time periods (outcome). However, most studies have examined reasons for participation such as the desire to contribute (see ; ; ; ; ), rather than motivations, which describe the psychological underpinnings of behavior (e.g., “because it makes me feel good”). In an examination of motivation in online projects, Rotman et al. () described a complex and changing framework for motivation that was influenced by participant interest, recognition, and attribution. Although several studies have purported to examine motivation, it has been neither defined nor studied uniformly throughout the field of citizen science. Nevertheless, the major consensus appears to be that motivation for citizen science, like other volunteer activities, is dynamic and complex.

Content, Process, and Nature of Science Knowledge

Included within the ISE Framework’s impact category of “awareness, knowledge, and understanding” are several subcategories such as knowledge and understanding of science content; knowledge and understanding of science processes; and knowledge of the Nature of Science. Knowledge of science content refers to understanding of subject matter, i.e., facts or concepts. Knowledge of the process of science refers to understanding the methodologies that scientists use to conduct research (for example, the hypothetico-deductive model or “scientific method”). Knowledge of the Nature of Science (NOS) refers to understanding the epistemological underpinnings of scientific knowledge and how it is generated, sometimes presented from a post-positivist perspective (). NOS addresses tenets of science such as tentativeness; empiricism; subjectivity; creativity; social/cultural influence; observations and inferences; and theories and laws (see , , , ). For improving scientific literacy, understanding of NOS and the process of science are generally considered more important than understanding basic content or subject matter (; ; ), and knowledge of the process of science is a regular component of well-established assessments of science knowledge (). Despite this recognition, most attempts to measure science literacy within the ISE field fall back on content knowledge, i.e., rote memorization of facts, rather than knowledge of the nature or process of science (; ).

Indeed, citizen science evaluations have typically emphasized measuring gains in topical content knowledge as opposed to science process knowledge, with mixed results (; ; ; ; ; ; ; ; ; ; ; ; ; ). Overdevest et al. () did not find a significant increase in project participant knowledge about streams and water quality, probably because new volunteers were already highly knowledgeable about the subject matter. Price and Lee () actually found a decrease in science content knowledge among project participants, likely because participants overestimated their content knowledge before starting the project and realized how much they did not know after participating.

However, a few studies have used measures of the process of science to assess impacts of citizen science project participation. Jordan et al. () and Brossard et al. () used adaptations of the science and engineering indicators and showed no gains in understanding of the process of science as a result of citizen science participation. In contrast, Ballard et al. () used interview data to show evidence that the Salal harvesting project “… increased local people’s understanding of the scientific process and of the ecosystem on which they were a part” (p. 14). Significant increases in understanding of the process of science before and after participation in a stream water quality-monitoring project also were reported by Cronin and Messemer (). However, that study had a very small sample size, which may limit the generalizability of its results.

Likewise, few citizen science projects have attempted to study understanding of the NOS. Jordan et al. () found no evidence for change in knowledge of the NOS using pre-post scenario-based questions in an invasive species project. Price and Lee () found little evidence that project participation influenced epistemological beliefs about NOS, owing to the fact that “epistemological beliefs are personal beliefs and thus harder to change after participating in only one citizen science project” (p. 793). These findings suggest that while citizen science can effectively demonstrate gains in content knowledge, it has a long way to go before it can positively establish increases in understanding of science process and the NOS.

Skills of Science Inquiry

Skills of science inquiry are observable practices that can be transferred to daily life, such as asking and answering questions; collecting data; developing and using models; planning and carrying out investigations; reasoning about, analyzing, and interpreting data; constructing explanations; communicating information; and using evidence in argumentation (; ).

The hands-on nature of many environmentally based citizen science projects makes them particularly well suited to influence the development and/or reinforcement of certain science-inquiry skills including asking questions; designing studies; collecting, analyzing, and interpreting data; and discussing and disseminating results (; ; ; ). Top priorities for many practitioners are helping participants learn to follow protocols and exercise accurate data collection skills, because these practices directly influence data quality. The field-wide emphasis on data quality likely comes from the large percentage of contributory, scientist-driven projects, for which a key goal is gathering data of sufficient quality to add to the existing knowledge base through publication in peer-reviewed journals. Consequently, many citizen science projects most effectively influence skills that are related to data and sample/specimen collection, identification of organisms, instrument use, and sampling techniques. Many projects also engage participants in the use of various technological tools such as GPS units, digital thermometers, water conductivity instruments, rain gauges, nets, and smartphones, to name just a few (Figure 4).

Figure 4 

Many citizen science project designers hope not only to collect important scientific information but also to help project participants gain skills such as scientific reasoning. Here, a team of volunteers with Public Lab, a non-profit environmental science community, launches a weather balloon. Data collected via the balloon will be used in 3-D mapping surveys, but figuring out how to measure just what participants are learning as they conduct this research is a challenge for the citizen science field.

Credit: Alan Kotok/Flickr/CC BY-2.0.

A few researchers have begun to study skill acquisition in citizen science. Becker et al. () showed an increase in the ability to estimate noise levels with increasing participation in WideNoise, a soundscape project operated through mobile devices. Increases in youths’ self-reported science inquiry skills, such as their perceived ability to identify pond organisms and to develop testable hypotheses before and after participation in Driven to Discover, also have been reported (). Sullivan et al. () describe the use of communication prompts and strategies to “steer birders toward providing more useful data” and essentially change the birding habits of eBird participants to increase data quality. Using the theory of legitimate peripheral participation, Mugar et al. () used practice proxies, a form of virtual and trace ethnography, to increase accuracy of data annotation among new members. Additionally, some projects have successfully conducted small-scale studies that compare volunteer-collected data to those collected by experts, thereby creating a baseline metric for assessing their participants’ skills (see ; ; ).

Another hallmark of citizen science is the collection of large, publicly available data sets and the provision of rich, interactive data visualizations. Many projects that provide data visualizations may seek to enhance skills related to data interpretation, i.e., the ability to effectively comprehend information and meaning, often presented in graphical form (). In one of the few studies examining data interpretation in citizen science, Thompson and Bonney () showed that even among the “active users” of eBird, the majority did not properly use the extensive array of data-analysis tools. Numerous studies in educational research have shown that assessing the type of reasoning skills needed for data interpretation requires asking a series of reflective questions to determine the justification underlying a person’s reasoning (e.g., ; ).

Other inquiry skills, such as study design, communication, critical thinking, decision making, and critical evaluation of results, are less studied within the citizen science literature. Crall et al. () used open-ended questions to determine whether engaging in an invasive species project improved the abilities of participants to explain a scientific study, write a valid research question, and provide a valid sampling design. These researchers noted positive gains in all but the ability to explain a scientific study. Char et al. () found an increase from pre- to post-training in the ability of COASST volunteers to correctly weigh evidence to determine whether it contained sufficient information for accurately identifying species. These few studies show the potential of evaluating the development of complex science inquiry skills among citizen science participants, but such studies are in their infancy.

Behavior and Stewardship

Behavior change and development of environmental stewardship are among the most sought-after outcomes in science and environmental education programs, both in and out of schools (; ; ; ; ; ). Theories examining various determinants of environmental behavior include those espousing the links between knowledge, attitude, and behavior (; ; ; ); attitudes and values (; ); behavior modification and intervention (); and nature exposure (; ; ; ).

We define behavior and stewardship as measurable actions resulting from engagement in citizen science, but external to the protocol activities and the specific project-based skills of the citizen science project. For example, collecting water quality data may be a new behavior for a project participant, but if the data collection is part of the project protocol it should be measured as a new skill rather than a new behavior. By contrast, a participant who decreases their water usage as a result of participating in a water quality monitoring project would exemplify behavior change. Our literature review identified five categories of behavior and stewardship that are of interest to the citizen science field and for which we provide definitions below: Global stewardship behaviors; place-based behaviors; new participation; community or civic action; and transformative lifestyle changes.

Global stewardship refers to deliberate changes in behavior that minimize someone’s individual ecological footprint and which collectively can have global influence (e.g., installing low-flow shower heads, recycling, purchasing energy-efficient appliances). Place-based behaviors refer to observable actions to directly maintain, restore, improve, or educate about the health of an ecosystem beyond the activities of a citizen science project (e.g., removing invasive species; cleaning up trash; eliminating pesticide use; purchasing locally grown food; engaging in outreach to youth groups). New participation is defined as engagement in science or environmental activities, organizations, or projects spurred on by participation in a citizen science project. Community or civic action refers to participation in civic, governmental, or cultural affairs to solve problems at the local, regional, or national level. Actions could include donating to environmental organizations, signing petitions, speaking out against harmful environmental practices, or recruiting others to participate in environmental causes. Finally, transformative lifestyle changes are efforts that require a strong up-front cost or long-term commitment to maintain, such as investing in a hybrid vehicle, becoming a vegetarian, or pledging to use mass transit whenever possible.

Citizen science projects, especially those dealing with environmental topics, are typically hands-on, occur in local environments, and require repeated monitoring and data gathering, making them natural conduits for affecting behavior change (). However, research has been limited and results have been mixed regarding the actual influence of citizen science on behavior change. For example, in a study examining two different projects, one on pollinators and one on coyotes, Toomey and Domroese () showed that participants engaged in new activities and changed their gardening practices but did not otherwise take part in advocacy or change their environmental stewardship practices. Crall et al. () found significant differences between current and planned behavior as a result of participating in an invasive species project using self-reported measures, but the actual behavior change was not well described. Using a case-study approach, Oberhauser and Prysby () claim that participants of the Monarch Larva Monitoring Project “work to preserve habitat at many levels, from advocating a more environmentally friendly mowing regimen and insect-friendly pest control, to challenging parking lot, building, and road development projects that threaten monarch habitat” (p. 104). However, neither the source of these data nor the accompanying methodology is clearly described. Cornwell and Campbell () also used a case study approach and were able to document advocacy and political action by volunteers that directly benefited sea turtle conservation. Evans et al. () documented local, place-based stewardship in a bird breeding program, while other projects showed no change in place-based stewardship practices (). In a study of human health effects of industrial hog operations, Wing et al. () describe actions taken by community groups to engage in decision-making that addresses local environmental injustices. Taken together, these examples provide some evidence that citizen science may influence behavior and stewardship, but more robust methodologies are needed to establish causation. Plenty of anecdotal data also highlight other examples of behavior change that have not been published or exist only in the gray literature.

Discussion

Results from research conducted through a systematic review of citizen science project websites and a survey of practitioners who design and implement citizen science projects confirm the relevance and applicability of three ISE documents (; ; ) in framing intended learning outcomes for citizen science participants. Informed by this research along with a systematic literature review, we have modified and contextualized these documents to create a new framework that contains definitions and articulations of learning outcomes for the citizen science field. We believe that the framework provides a robust starting point for setting learning goals and objectives for citizen science projects and designing projects to meet those objectives.

Our research has some limitations, however. First, both the co-created and collaborative project categories represented in the online practitioner survey have small sample sizes, so generalizing the types of learning outcomes intended by these project types is challenging. Also, it is unclear whether the distribution across project types in the online survey reflects the actual distribution of contributory, collaborative, and co-created projects across the U.S. and Canada, or whether a disproportionate number of contributory projects received and responded to the survey request. We made no special effort to recruit additional collaborative or co-created project respondents, so response bias may be an issue. Also, while we made an effort to ensure that projects that responded to the practitioner survey were included in our website review, project-level data from the two sources were not examined together. Doing so may have shown convergence or divergence of intended versus measured outcomes, but it was beyond the scope of this work and may have violated confidentiality conditions. Finally, this work is a descriptive study based largely on self-reports in the case of the practitioner survey and published desired outcomes in the case of the website review. More robust inferential studies that can examine field-wide relationships and causal factors between project characteristics and observed learning outcomes would be a significant next step.

Despite these limitations, our findings provide insights into the ways in which learning has been articulated, studied, and measured by citizen science projects. They also provide information about the status of citizen science project evaluation in general. For example, an overwhelming majority of survey respondents expressed positive attitudes toward the importance of evaluation and the evaluation process. However, they also expressed a need for additional support and resources to conduct evaluations. Nearly all respondents reported developing their own evaluation instruments, although most projects measured very similar outcomes. And the fact that very few projects were aware of resources available for guidance in conducting evaluations and locating evaluation instruments suggests that work needs to be done to disseminate tools and resources to the citizen science professional community.

The comparison of intended learning outcomes described on citizen science project websites and the outcomes actually measured by projects highlights some interesting disconnects. For example, fewer than 5% of project websites stated “increasing interest in science and/or the environment” as an intended outcome, yet interest in science was the most commonly measured outcome (46%) across all projects in the online survey. The frequent measurement of interest in science may result from the relative ease of obtaining instruments to measure this outcome or it may be a proxy for measuring interest in the specific topic addressed by the project (e.g., birds, butterflies, astronomy, weather). Further, despite these reported measurements, few studies have published data about changes in interest, perhaps because they have not actually tried to measure it or because the typical citizen science participant (Caucasian, older, highly educated) already demonstrates a high interest in science when joining a project, making it difficult to detect changes in interest over the course of project participation (; ). However, ample opportunity exists for citizen science projects to increase interest in science and the environment by reaching individuals who are not already engaged, especially underserved audiences for whom access to informal science programming may be limited (; ). Additionally, projects that reach youth audiences via K-12 settings can minimize self-selection bias and carry out quasi-experimental studies to determine whether interest in science is leveraged through citizen science participation ().

As another example of a disconnect, self-efficacy was seldom stated as an intended outcome in the website review and did not emerge as a major category of desired outcomes via the online survey. However, approximately 10% of survey respondents mentioned the concepts of “agency,” “confidence,” or “efficacy” in open-ended comments. As stated earlier, self-perceptions of efficacy affect choices of activities that individuals pursue, how much effort they put toward them, and how long they persist in those pursuits (e.g., ; ). Enhancing perceptions of efficacy may be the single most important outcome for many citizen science projects, thus we have included efficacy in our framework.

Yet another disconnect relates to motivation. Few project websites mentioned motivation as an intended learning outcome, and our online survey showed that practitioners measured motivation primarily to understand reasons for participation. Motivations change over time, however, and sustaining project participation requires an understanding of individuals’ changing roles within a project and their motivations for continuing to participate. More work also is needed to understand how motivations connect to Self-Determination Theory and serve psychological needs within the context of citizen science. For example, the desire to contribute to a project may be associated with a psychological need for competence, and the desire to engage socially with others may serve the psychological need for relatedness. Studies that examine where motivations fall along the intrinsic-extrinsic motivation continuum are needed to understand how motivation might influence sustained participation over time.

Our results also underscore practitioners’ inclination to expect and measure gains in science content knowledge, typically through context-specific instruments that measure mastery of project activities and program content rather than increased knowledge about the process of science or the Nature of Science (NOS). Although some projects have begun to demonstrate outcomes related to “thinking scientifically” (; ; ), a gap remains in our understanding of the potential for citizen science to foster deeper understanding of the process of science and NOS, as well as the more complex facets of science inquiry (i.e., critical thinking, reflection, and reasoning). Future work should focus on the development of robust and contextually appropriate tools to better capture deep reflection and rich dialogue about NOS.

In perhaps our most surprising finding, nearly 60% of project websites in our study listed data collection as an intended outcome, yet across all projects combined, our online survey showed that skills related to data collection were the least-measured outcome (28%). These findings may reflect the difficulty of measuring attributes such as skill acquisition relative to the ease of measuring other constructs such as knowledge, interest, and attitude. This disconnect also represents a potential tension within the citizen science field, particularly among contributory projects: the need for high confidence in data quality versus the dearth of studies that have assessed data collection skills. While several studies demonstrate that volunteers are able to collect data of quality similar to that of experts, these tend to be isolated examples (; ). Although a multitude of ways to validate citizen-science data exist (see ), tools and techniques are needed that can assess changes in participants’ data collection skills over time.

Additionally, the field needs to better understand whether citizen science participation can influence other important inquiry skills, such as the ability to make decisions regarding appropriate research methodologies, to use variables and control groups properly, and to evaluate evidence. As attention increasingly turns to the potential for citizen science to democratize science, further work should examine the extent to which it can support the development or reinforcement of critical thinking skills that inform decision making and help to create an informed citizenry. Moreover, in the new world of “Big Data,” citizen science is well poised not only to provide the public with large and robust data sets but also to develop support systems so that users can understand how to use these dynamic resources effectively. Such provisioning may open new lines of research into how participants engage with data sets and what meaning those data hold for them.

Finally, in our website review, environmental stewardship was mentioned as an intended outcome by 25% of projects (second only to data collection), suggesting a strong desire for citizen science projects to influence individual behavior change. About one-third of survey respondents reported measuring behavior change, but several open-ended comments suggested that some practitioners equated the act of participating in a project with a change in behavior, meaning that such change was indicated for all participants. Recall, however, that we define behavior change as change that goes beyond project activities. Further, tacit assumptions may exist about engagement in specific project activities leading to more global environmental behaviors (; ) (e.g., the assumption that water-quality monitoring can lead to reducing carbon emissions, recycling, and conserving energy). Intended behavioral outcomes should be directly connected to project content and activities, and the knowledge of how to perform these targeted behaviors should be made explicit to participants (; ). While citizen science likely can influence behavior change, effective implementation strategies and the measurement of behavioral outcomes are still in their infancy.

Conclusion

Thousands of citizen science projects exist around the world, reaching potentially millions of people, particularly through the observation and monitoring of species and habitats (). Such projects have the potential not only to engage individuals in the process of science, but also to encourage them to take positive action on behalf of the environment (; ). If such outcomes are to be achieved, project developers need to better understand how to design projects so that activities and educational learning opportunities support and align with feasible and realistic outcomes ().

This study has resulted in a framework to support citizen science practitioners in articulating and measuring learning outcomes for participants in their projects. The framework also should build capacity for practitioners seeking to conduct evaluations of citizen science projects by guiding them in developing their program theory, i.e., identifying underlying assumptions about how project activities affect expected outcomes (; ; ; ). In this regard, most evaluators recommend starting with the articulation of project outcomes, then working backward to determine not only what can be achieved and how, but also what can be reasonably measured ().

Toward that end, work proceeding in parallel with this research is developing generic yet customizable evaluation scales that have been tested as valid and reliable in citizen science contexts and that align with the framework described above (see DEVISE scales: https://cornell.qualtrics.com/jfe/form/SV_cGxLGl1AlyAD8FL). By adopting common learning outcomes and measures, the citizen science field can strengthen its evaluation capacity and begin to conduct cross-programmatic analyses of citizen science projects, providing funders, stakeholders, and the general public with evidence-based findings about the potential for citizen science to affect the lives of its volunteers. Such studies also could provide critical information regarding why and how outcomes are achieved and under what conditions they can be maximized.

Future work should support continued development of consistent measures that can be used across studies, particularly measures that do not rely on self-reports (; ; ). Continued professional development opportunities for citizen science practitioners to spearhead evaluations of their projects will increase capacity for such endeavors, build a steady source of knowledge about impacts, and lead to improved project design, implementation, and sustainability for the field as a whole. Initiating in-depth longitudinal studies that measure the persistence of change over time would deepen understanding of the impacts of such experiences (). To the extent possible, more effort should be placed on studies that include experimental designs, random assignment, and control groups. Such efforts will increase the field’s ability to provide evidence for causal connections between citizen science participation and learning outcomes.

Additionally, continued research on learning outcomes should seek to incorporate social learning theories, which may be helpful in understanding how learning happens in citizen science and the mechanisms and processes that enable active learning. Social learning theories such as Cultural Historical Activity Theory (), Activity Theory (), Experiential Learning (; ), Situated Learning Theory (), and Communities of Practice () are ideally suited for examining learning in citizen science because they emphasize the role that participation in socially organized activities plays in learning (; ). Social learning theory may be particularly useful to consider when developing project activities and experiences. Practitioners interested in incorporating social learning theories into citizen science project design, research, and evaluation should refer to the following studies for guidance: Roth and Lee (); Brossard et al. (); Ballard et al. (); Raddick et al. (); and Jackson et al. ().

Finally, as citizen science continues to grow, it will be important for the field to take a reflective look at its relative impact, and to evaluate whether appropriate questions are being asked by qualified researchers working across projects that involve diverse audiences and issues. Such an analysis will be a first step in gathering critical evidence to demonstrate the potential of citizen science to truly democratize science.

Additional Files

The Additional Files for this article can be found as follows:

Appendix A

Databases and search terms used to locate citizen science websites. DOI: https://doi.org/10.5334/cstp.126.s1

Appendix B

Questions from the online practitioner survey. DOI: https://doi.org/10.5334/cstp.126.s2