
Research Papers

Citizen Scientist or Citizen Technician: A Case Study of Communication on One Citizen Science Platform


Danielle E. Lin Hunter, Colorado State University, US (Department of Biology, PhD candidate)

Gregory J. Newman, Colorado State University, US (Natural Resource Ecology Laboratory, Research Scientist)

Meena M. Balgopal, Colorado State University, US


The different types of engagement in citizen science produce potentially different outcomes. Although participation in citizen science can increase volunteer scientific literacy, recent research suggests that this does not occur in projects in which volunteers only contribute data. We performed a content analysis of project descriptions (n = 152) along with project descriptions found on hyperlinked websites (n = 23), analyzing volunteer tasks according to cognitive order as defined by Bloom’s Taxonomy, an educational framework designed to classify an individual’s depth of knowledge. We also considered who benefits from the tasks that volunteers performed. We found that most projects described volunteers as performing low-order tasks and described the benefits to the citizen science projects themselves. Our analysis indicates that project managers describe the scientific process in a limited capacity, which has implications for volunteer scientific literacy. That said, our study is limited in that we did not confirm our findings with the project managers who wrote the task descriptions. It is also important to recognize that our findings reflect communication on a single citizen science platform and are not representative of the entire field of citizen science. However, our study serves as a first step toward better understanding how scientists and project managers communicate about citizen science.

How to Cite: Lin Hunter, D.E., Newman, G.J. and Balgopal, M.M., 2020. Citizen Scientist or Citizen Technician: A Case Study of Communication on One Citizen Science Platform. Citizen Science: Theory and Practice, 5(1), p.17. DOI:
Submitted on 21 Jun 2019; Accepted on 13 Jul 2020; Published on 04 Sep 2020


Introduction

Citizen science efforts involve the general public in various aspects of the scientific process (Bonney et al. 2009a). However, there are different degrees to which volunteers can participate (Shirk et al. 2012). Citizens hire scientists to answer local questions in contractual projects, whereas volunteers collect or analyze data for professional scientists in contributory ones. In collaborative projects, volunteers and project managers work together on certain parts of the project, whereas co-created projects involve collaboration between volunteers and project managers throughout the entire project (from asking scientific questions to sharing results). Finally, in collegial projects, citizen scientists perform research independent of professional scientists. While many papers have anecdotally noted that most projects tend to be top-down, contributory projects (e.g., Pocock et al. 2017), few have documented this through research (Bela et al. 2016; Groulx et al. 2017).

Volunteers have many diverse motives for participating in citizen science. Studies have investigated motivation for participation in individual projects (Raddick et al. 2013; Domroese and Johnson 2017) and across multiple projects (Alender 2016; Geoghegan et al. 2016). Both types of studies indicate participant motives that include contributing to science, to conservation efforts, or to the community; connecting with nature or with a specific place; socializing; furthering a career; exercising; having fun; and learning. The quality of overall volunteer participation in citizen science is defined by how well project outcomes align with volunteer needs and motives (Shirk et al. 2012).

Participating in citizen science programs can result in various outcomes, but one that has received attention in the literature is learning. Multiple citizen science stakeholders, including volunteers, community members, and scientists, can learn about scientific inquiry and environmental issues when engaging in citizen science projects (NASEM 2018). This paper focuses on learning outcomes of volunteers specifically. Participating in citizen science can increase volunteer scientific literacy (Bonney et al. 2009b). Knowledge gains, engagement in inquiry-based reasoning, and changed conservation attitudes and environmental behaviors are all documented learning outcomes in citizen science (NRC 2000; Trumbull et al. 2000; Jordan et al. 2011; Toomey and Domroese 2013). Scientific literacy has important benefits for both individuals and society (Laugksch 2000). Here, we define the highest level of literacy as the ability to make an evidence-based decision that alters one’s behaviors (UNESCO 1978; Balgopal and Wallace 2009). Individuals empowered to engage in scientifically informed behaviors can make decisions in their personal lives that also result in economic or environmental benefits to society (Laugksch 2000). While changed behaviors may be considered the idealistic epitome of scientific literacy, the two are often not correlated (Nisbet and Scheufele 2009).

Although education research has focused on the disparity between scientific literacy and scientifically informed decision-making, this area is understudied in the field of citizen science. A recent report by NASEM (2018) revealed that there has been little to no research on how the design of citizen science projects may shape learning outcomes. In fact, there is little research on how scientists and project managers generally choose to design projects. This prompted our inductive investigation of how project managers and scientists describe their projects on an online platform for citizen science. Our objectives were to characterize 1) the tasks that volunteers were asked to perform and 2) who was described as benefitting from the tasks. Volunteer tasks are important because how citizen science volunteers are engaged in projects may affect opportunities for them to increase their own scientific literacy (Bonney et al. 2016). Furthermore, we analyzed described project outcomes because learning is more likely to occur when projects align with volunteer motivations and interests (NASEM 2018).

Theoretical Framework

Science literacy is a complex concept that describes people’s understanding and applications of scientific knowledge (Laugksch 2000). Although scientific literacy has historically been described as an endpoint of learning about and understanding scientific concepts, it is probably more accurately described as a continuum (Uno and Bybee 2016). A person becomes increasingly scientifically literate as they learn more about science content and the scientific process by which this information is generated (NRC 2012). The notion that scientific literacy is dependent not only on content knowledge but also critical thinking and scientific skills has a long history in education research (Sanderson and Kratochvil 1971). These components of scientific literacy have been classified as basic and integrated processes. Basic processes involve “observing, classifying, using numbers, measuring, using space/time relationships, communicating, predicting, inferring,” whereas integrated processes include “defining operationally, formulating hypotheses, interpreting data, controlling variables, experimenting” (Sanderson and Kratochvil 1971, p. 13). Engaging in basic processes has been linked to low-order thinking, and integrated processes are associated with high-order thinking (Lewis and Smith 1993). More recently, communicating results and educating others have been associated with demonstrating high-order thinking because these skills involve critically understanding and thinking about science content knowledge (NASEM 2018). The classification of thinking as low and high has been expanded to include medium-order thinking (Jensen et al. 2014).

Low-, medium-, and high-order thinking have been connected to Bloom’s taxonomy of cognition, a psychological framework used by educators to classify levels of thinking or learning objectives (Miri, David, and Uri 2007; Jensen et al. 2014). Bloom’s taxonomy classifies different cognitive skills that support the development of scientific literacy. On one end of the continuum is the ability to remember or memorize information. As scientific literacy increases, individuals demonstrate abilities to understand, apply, analyze, evaluate, and create knowledge (Bloom 1956). Jensen et al. (2014) defined low-order cognition as being able to remember and understand knowledge, medium-order cognition as the ability to apply knowledge, and high-order cognition as the ability to analyze, evaluate, and create knowledge. Because Bloom’s taxonomy is most often used in educational settings and not in citizen science contexts, we slightly modified the three levels, expanding the medium-order category to include understanding and analyzing because application of knowledge is uncommon in citizen science (Figure 1).

Figure 1 

Bloom’s Taxonomy: an educational framework used to classify the depth of knowledge as low, medium, and high. The words and phrases above represent the tasks of the citizen science projects on CitSci.org (modified from Bloom 1956).


In this study, we sought to investigate how project managers communicate about volunteer tasks on an open-access digital platform for citizen science projects called CitSci.org. We conducted a content analysis of the different tasks that volunteers are described as performing and considered how they relate to tasks associated with increased scientific literacy.

Positionality statements

Our team comprises three experts in environmental social science: a graduate student studying conservation social science, an ecologist who studies citizen science programs, and a social scientist who studies ecological and scientific literacy. Our team has collaborated for the past three years and knows that our individual perspectives have helped shape the analysis of this study. All three of us are formally trained in ecology and have worked at some point in our lives as research assistants (or technicians) and as citizen science volunteers. Our research experience in discourse analysis varies, but we have worked collaboratively throughout the research process to ensure that we adhere to the standards of research practices.

Research setting

CitSci.org is an online platform that supports citizen science projects (Newman et al. 2012). It is a free resource and therefore helps to alleviate some of the financial difficulties that citizen science projects often face by providing cost-effective data management, documentation, and sharing (Wang et al. 2015). It has been used by more than 750 projects since it was first developed in 2007. Project managers create project pages where they provide a general overview of the project and define goals and tasks. They can also add links to other webpages that often further describe projects. Project managers can recruit and communicate with volunteers on CitSci.org. There is also a means by which data collection protocols can be developed and shared with other citizen science projects. Once a volunteer joins a project, they upload data to CitSci.org using the protocols developed by the project managers. Project managers and volunteers can also perform preliminary analyses of their data on CitSci.org. Because CitSci.org has these different functions and governance capabilities, it can support a diverse array of projects (Lynn et al. 2019) and in turn can collect metadata across projects to conduct informative meta-analyses (Newman et al. 2012). CitSci.org is an appropriate platform for such a study because it can support studies across an array of projects with different types of volunteer tasks.

Data collection

In this study we included environmental and ecological projects on CitSci.org that were active between June 2016 and June 2018. Any project that temporarily ceased activity between these dates but resumed after data were extracted for the study was excluded. This resulted in 165 projects that were included in our analysis. Of these 165 projects, some were created under the same umbrella organization and shared the same descriptions. In these instances, they were counted as a single project. This reduced our total number of unique project descriptions to 152. Each project on CitSci.org has a profile page on which a primary project description is provided by the creators. Project managers also have the option to provide defined metadata attributes such as goals, tasks, external website, study extent, status, privacy, sampling design, QA/QC (quality assurance/quality control), QA/QC description, and organization. Project descriptions and described metadata attributes were extracted for analysis.

Some projects include hyperlinked external websites that further describe relevant metadata attributes of the project; these were included in our analysis. Only websites that were hyperlinked in the main project descriptions or within the metadata attribute fields described above were included. Websites often contain information other than what pertains to the project, so only information about the project within the hyperlinked domain was included. Although some websites provide links to other website domains with more information on the project, these were excluded from analysis. Within the website domain, specific data collection protocol documents and data portals that require a login were also excluded. Of the 165 projects in the study, 34 had website links that met the criteria for additional inclusion. Again, the projects that shared the same website were analyzed as a single project. This resulted in a total of 23 unique external websites that were included in our analysis.
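The inclusion and de-duplication steps above can be sketched as a simple filter. This is an illustrative sketch only: the record fields (`description`, `active`) and the sample projects are our invention for demonstration, not CitSci.org’s actual data model.

```python
from datetime import date

# Hypothetical project records; field names are illustrative only.
projects = [
    {"id": 1, "description": "Monitor stream channels", "active": (date(2016, 7, 1), date(2018, 5, 1))},
    {"id": 2, "description": "Monitor stream channels", "active": (date(2017, 1, 1), date(2018, 1, 1))},
    {"id": 3, "description": "Record flora", "active": (date(2014, 1, 1), date(2015, 1, 1))},
]

# Study window: June 2016 to June 2018.
WINDOW = (date(2016, 6, 1), date(2018, 6, 30))

def in_window(active, window=WINDOW):
    """True if the project's active period overlaps the study window."""
    start, end = active
    return start <= window[1] and end >= window[0]

# Keep projects active in the window, then collapse projects that share
# a description (e.g., created under the same umbrella organization).
included = [p for p in projects if in_window(p["active"])]
unique = {p["description"]: p for p in included}  # one project per shared description
print(len(included), len(unique))  # prints: 2 1
```

In the study itself this filtering reduced 165 active projects to 152 unique project descriptions; the sketch only shows the shape of the two rules.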

Data analysis

We used an inductive coding process to identify emergent themes, while using Bloom’s taxonomy as a sensitizing concept (Bowen 2006; Charmaz 2006). In other words, we recognized that citizen science tasks could be classified by levels of cognition using Bloom’s taxonomy; however, we did not anticipate all the tasks that would be described, nor how often they would appear in our analysis. We started by identifying potential codes for random subsets of 10% of the project descriptions. As we considered these data, it became clear that tasks for volunteers varied across levels of cognitive expectations, warranting the use of Bloom’s taxonomy to inform further coding (Bloom 1956). Project descriptions on CitSci.org and the text of hyperlinked websites were analyzed through qualitative content analysis (Elo and Kyngäs 2008).

Volunteer tasks, as described by project managers, were coded by cognitive level as low-, medium-, and high-order tasks (Figure 1). High-order tasks were those that were intended to benefit or enact change on the environment (e.g., to take actions to reduce erosion or to help something). Medium-order tasks were those that intended to help volunteers understand the environment (e.g., understand or question). Finally, low-order tasks were intended to ask volunteers to describe or otherwise measure aspects of the environment (e.g., upload a photo or measure the pH of water). Some projects asked for public opinion only, rather than for any other task to be performed. We restricted our coding of each project description to a single code consisting of either low, medium, or high. If project descriptions described multiple levels of tasks, we coded them to be at their highest level of Bloom’s taxonomy. Therefore, a project description that described both low and medium tasks was coded only as medium.
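As a rough illustration of the highest-level rule above, task verbs can be mapped to cognitive orders and a description assigned the maximum order it contains. The verb lexicon below is abbreviated and hypothetical; in the study, coding was performed by trained human raters, not automatically.

```python
# Ranking of cognitive orders (low < medium < high).
ORDER = {"low": 1, "medium": 2, "high": 3}

# Hypothetical, abbreviated verb lexicon drawn from the kinds of task
# words described in the text; not the study's actual codebook.
VERB_LEVEL = {
    "observe": "low", "record": "low", "upload": "low", "measure": "low",
    "understand": "medium", "assess": "medium", "compare": "medium",
    "help": "high", "improve": "high", "restore": "high",
}

def code_description(text):
    """Return the highest-order task level found in a description, or None."""
    levels = [lvl for verb, lvl in VERB_LEVEL.items() if verb in text.lower()]
    if not levels:
        return None
    # A description with tasks at multiple levels takes its highest level.
    return max(levels, key=ORDER.get)

print(code_description("Help restore the creek by uploading photos"))  # prints: high
print(code_description("Upload a photo and measure pH"))               # prints: low
```

The first description mixes low-order ("uploading") and high-order ("help", "restore") tasks and is coded only at its highest level, mirroring the rule that a description with both low and medium tasks would be coded as medium.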

Coding also distinguished between an intended task and a performed task. Often, project descriptions included related actions for which the intended task had a larger scope with fewer explicit actions. The performed task was the means by which the intended task could be accomplished; this usually included specific action steps. In other words, the volunteer was expected to accomplish the intended task by doing the more specific performed task. For example, a volunteer might help by collecting water-quality data. In this case, it is intended that the volunteer helps. The specific task that they are expected to perform to accomplish this is collecting water quality data. For our analysis, the intended task was labeled as helping, and the performed task was labeled as collecting. Although the distinctions between intended and performed tasks may at first appear to be subtle, they are important because some tasks are related to others. This coding process was necessary to distinguish between what project managers expect citizen science volunteers to accomplish (the intended task) and how they are expected to accomplish it (the performed task). This distinction is rooted in educational psychology research that considers both intended learning outcomes and performed tasks (Anderson and Krathwohl 2000).

Volunteer tasks are meant to help the project, but all projects differ in their intended outcomes. We therefore considered who or what was benefitting from the indicated volunteer tasks. Each time a new beneficiary code was identified, all projects and website materials were re-coded. Benefits identified included: no described benefit, the environment, social ecological systems, volunteer awareness or knowledge, volunteer empowerment, the citizen science project itself, the community, and the scientific community. The environment and social ecological systems were considered separately because in the citizen science literature, the focus is often on management and policy outcomes rather than on direct environmental benefits (e.g., Shirk et al. 2012; McKinley et al. 2017). Furthermore, social ecological benefits related to management (Shirk et al. 2012) differ from community benefits that can increase local social capital (Conrad and Hilchey 2011) or promote community health (Den Broeder et al. 2017). By separating the benefits to volunteer awareness/knowledge from the benefits to volunteer empowerment, we sought to tease apart the degree of participation described by volunteer tasks. This distinction is important in the environmental education literature in which empowerment through skills and meaningful participation are viewed as the highest level of environmental education, whereas increasing awareness and knowledge constitute the lowest levels (UNESCO 1978).

To ensure trustworthiness of our analysis, we engaged in iterative coding and peer and expert debriefing (Creswell and Miller 2000; Nowell et al. 2017; O’Connor and Joffe 2020). The first author developed the initial codes and codebook, and engaged in inter-rater coding with the third author, who was trained in how to use the codebook, and independently coded a randomly selected 20% of the extracted content (as suggested by O’Connor and Joffe [2020]). The initial inter-rater coding reliability was 89%, but after expert debriefing, the codebook was revised (Appendix 1), and 100% agreement was obtained between two coders. Three months after the initial round of coding took place, the first author re-coded the project descriptions and websites to ensure the reliability of the codes through intra-rater coding, and 93% agreement was found. Each discrepancy was double-checked and evaluated through peer debriefing until our team reached consensus.
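The percent-agreement figures reported above can be computed as the share of identically coded items between two raters; a minimal sketch, in which the two coders' code lists are invented for illustration:

```python
def percent_agreement(coder_a, coder_b):
    """Share (as a percentage) of items on which two coders assigned the same code."""
    assert len(coder_a) == len(coder_b), "coders must rate the same items"
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100 * matches / len(coder_a)

# Invented example: two coders disagree on 2 of 10 items.
a = ["low", "low", "high", "medium", "low", "high", "low", "low", "low", "low"]
b = ["low", "low", "high", "low",    "low", "high", "low", "low", "low", "high"]
print(percent_agreement(a, b))  # prints: 80.0
```

Note that simple percent agreement, unlike chance-corrected statistics such as Cohen’s kappa, does not account for agreement expected by chance, which is one reason debriefing to full consensus is also valuable.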


Results

We found that most projects on the platform described volunteers as performing tasks that we classified as low order and described benefits to the citizen science projects themselves. Our analysis suggests that project managers describe citizen science activities as those that engage volunteers in limited participation in the scientific process.

Volunteer tasks

Descriptions of volunteer tasks were categorized as intended and performed and were tallied across tasks found in project descriptions (n = 152) and hyperlinked website materials (n = 23; Table 1; Figure 2). Low-, medium-, and high-order tasks were intended respectively in 74, 17, and 52 of the 152 projects on CitSci.org. Two projects solicited volunteer opinions. When performed tasks were considered, low-, medium-, and high-order tasks were found in 136, 1, and 5 of the 152 projects on CitSci.org. Two of the projects were coded as a solicitation of volunteers’ opinions. No intended tasks were found in seven of the projects, and no performed tasks were found in eight of the projects. One project suggested that volunteers could help the project, a high-order intended task, but did not explain how this should be done; hence the project was coded as having a high-order intended task but no performed task.

Table 1

Intended and performed task codes. The number of times and the percentage that each task code was found on CitSci.org (out of 152) and on hyperlinked websites (out of 23) are presented, along with textual examples. Identifying information was omitted to maintain the anonymity of the projects.

Low order
  CitSci.org: intended 74 (48.7%), performed 136 (89.5%)
  Websites: intended 5 (21.7%), performed 15 (65.2%)
  Examples: “Observe and record flora”; “Take a photo of the landscape”; “Monitor local intermittent stream channels”

Medium order
  CitSci.org: intended 17 (11.2%), performed 1 (0.7%)
  Websites: intended 0 (0.0%), performed 0 (0.0%)
  Examples: “Evaluate the status of the species”; “Assess the success of restoration efforts”; “Compare creek flow season to season”

High order
  CitSci.org: intended 52 (34.2%), performed 5 (3.3%)
  Websites: intended 17 (73.9%), performed 7 (30.4%)
  Examples: “Help inform future tree planting”; “To clean up [location] and all streams that flow through [location]”; “Improve sections of wetlands, rivers, lakes, or estuaries”

Opinion
  CitSci.org: intended 2 (1.3%), performed 2 (1.3%)
  Websites: intended 0 (0.0%), performed 0 (0.0%)
  Examples: “Fill out our survey”; “Gathering citizen science inputs on what ecosystem attributes are valued”

No task
  CitSci.org: intended 7 (4.6%), performed 8 (5.3%)
  Websites: intended 1 (4.3%), performed 1 (4.3%)
  Example: “[Project name], [location]”
Figure 2 

Intended and performed volunteer tasks. Presented according to cognitive level (low-, medium-, and high-order tasks), the solicitation of an opinion, or no task expectations.

Intended and performed tasks were also identified in websites that were hyperlinked on project pages. Low-, medium-, and high-order tasks were intended respectively in 5, 0, and 17 of the 23 project websites; none of the websites asked volunteers to record their opinions. Low-, medium-, and high-order performed tasks were found in 15, 0, and 7 of the 23 project websites, respectively. Of the 23 hyperlinked website materials, only one had no discernible task described.

We analyzed the breakdown of intended tasks by performed tasks on CitSci.org and hyperlinked websites according to cognitive level, the solicitation of just an opinion, and no task expectation (Figure 3). On CitSci.org, all the project descriptions that were coded as intending low- or medium-order tasks were coded as performing low-order tasks. Of the 52 intended high-order tasks, 45 were coded as low-order performed tasks, 1 as a medium-order performed task, and 5 as high-order performed tasks; 1 had no performed task. The two projects that requested opinions from volunteers and the seven projects that had no intended task had alignment between intended and performed tasks.
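The intended-by-performed breakdown is, in effect, a cross-tabulation over two codes per project. A minimal sketch (the (intended, performed) pairs below are invented for illustration, not the study data):

```python
from collections import Counter

# Each project contributes one (intended, performed) pair of task codes.
# These pairs are invented for illustration.
pairs = [
    ("high", "low"), ("high", "low"), ("high", "high"),
    ("low", "low"), ("medium", "low"), ("opinion", "opinion"),
]

# Count how often each combination occurs.
crosstab = Counter(pairs)
for (intended, performed), n in sorted(crosstab.items()):
    print(f"intended={intended:7s} performed={performed:7s} n={n}")
```

Cells where the intended order exceeds the performed order (e.g., high intended but low performed) correspond to the misalignment shown in Figure 3.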

Figure 3 

Breakdown of intended tasks by performed task order. Misalignment of intended and performed tasks expected of citizen science volunteers.

We conducted a similar analysis on the text obtained from hyperlinked websites. All five of the project websites that intended low order tasks were also coded as having low-order performed tasks. Of the 17 websites that intended high-order tasks, 10 were coded as having volunteers perform low-order tasks and 7 were coded as having volunteers perform high-order tasks. The one project that had no intended task was also coded as having no performed task. There were no intended or performed tasks coded as medium order or asking for an opinion on the websites.

Beneficiary of the tasks

We recorded who or what benefitted from the tasks that volunteers performed for each project using one of eight codes: the citizen science project, the environment, the social ecological system, the scientific community, volunteer awareness or knowledge, volunteer empowerment, the local community, or no described benefit (Table 2; Figure 4). Of the 152 project descriptions on CitSci.org, 80 are described as benefitting the citizen science project, 25 as benefitting the environment, and 24 as benefitting social ecological systems. Benefits to the scientific community are described in 17 project descriptions. Volunteer awareness or knowledge and volunteer empowerment are described as benefits in 11 and 7 projects, respectively. Benefits to communities are described in 7 projects, and 16 projects have no described benefit.

Table 2

Benefit codes. CitSci.org project descriptions and hyperlinked websites were evaluated for descriptions of the beneficiaries of the citizen science projects. Counts, percentages, and examples of these codes are presented, while maintaining project anonymity.

Citizen science project
  CitSci.org: 80 (52.6%); Websites: 8 (34.8%)
  Example: “You can help us understand whether this seaweed has a negative or positive impact on our [location] coastal ecosystems.”

Environment
  CitSci.org: 25 (16.4%); Websites: 8 (34.8%)
  Example: “This project supports aquifer recharge, rainwater harvesting, and salmon recovery in [watershed].”

Social ecological systems
  CitSci.org: 24 (15.8%); Websites: 9 (39.1%)
  Example: “Baseline studies, like [project name], of populations are crucial for management decisions.”

Scientific community
  CitSci.org: 17 (11.2%); Websites: 7 (30.4%)
  Example: “These samples will help small game biologists better understand age and sex demographics of game bird populations.”

Volunteer awareness or knowledge
  CitSci.org: 11 (7.2%); Websites: 5 (21.7%)
  Example: “Increase public awareness of ecological value of maintaining healthy vernal pools.”

Volunteer empowerment
  CitSci.org: 7 (4.6%); Websites: 3 (13.0%)
  Example: “[Project] is a water quality data collection and education program that seeks to increase awareness about the importance of water quality and promote stewardship of [location’s] aquatic resources.”

Community
  CitSci.org: 7 (4.6%); Websites: 2 (8.7%)
  Example: “They sought to … improve city planning, management, and human health.”

No described benefit
  CitSci.org: 16 (10.5%); Websites: 1 (4.3%)
  Example: “[Project name], [location]”
Figure 4 

Coded benefits on CitSci.org and project websites. A comparison of who or what benefits from volunteer tasks according to project descriptions on CitSci.org and associated project websites.

Project beneficiaries were identified and coded from hyperlinked websites. Eight of the 23 websites describe benefits to the citizen science project, eight describe benefits to the environment, and nine describe benefits to the social ecological systems. Seven websites describe the benefits of the project to the scientific community. Volunteer awareness or knowledge is described as a benefit in five projects, while three websites describe volunteer empowerment as a benefit. Two websites describe benefits to the community, and on one of the websites there are no described benefits.


Discussion

Our content analysis revealed that only a small proportion of the environmental and ecological projects on the platform expect volunteers to engage in higher-order tasks. Instead, most of the projects we examined describe low-order tasks for their volunteers to perform. We argue that our analysis provides an opportunity for citizen science managers and researchers to further examine claims about the impact of participation on volunteer scientific literacy.

Characteristics of projects with higher-order tasks

On CitSci.org, five of the 152 total projects (3%) expect volunteers to perform high-order tasks, compared with seven of the 23 websites (30%). Only one project is coded as describing high-order tasks on both its project description and its website; this project asks volunteers to educate the public about what they learned. Our findings are corroborated by two recent studies. A survey of 77 citizen science project managers reported that volunteers were mostly data collectors who rarely engaged in stewardship or communication activities (Wiggins and Crowston 2015). Another study of volunteers from six different citizen science projects revealed that while general communication about participating in the project was common, communication about findings of the research was uncommon (Phillips et al. 2019).

There were two noteworthy characteristics across the 11 projects (4 from CitSci.org, 6 from websites, and 1 from both) with expectations that volunteers perform a higher-order task. First, projects that engage volunteers in performing high-order tasks are more likely to describe volunteers as having a more direct sphere of influence on the discussed outcome. Eight describe benefits to the environment directly, four of which also describe benefits to social ecological systems. Citizen science is often described as benefitting social ecological systems like management and policy as opposed to benefitting the environment directly (Shirk et al. 2012; McKinley et al. 2017). When social ecological benefits are described, however, volunteers are described as one step removed from their actions benefitting the environment. Their actions might support management or policy, which then goes on to benefit the environment, but volunteers are not described as engaging in actions that directly benefit the environment. That said, benefitting the environment is a common motive for volunteers in environmental citizen science (Alender 2016; Geoghegan et al. 2016).

Furthermore, none of the 11 projects suggest benefits to the citizen science project itself, the most commonly described benefit. Benefits to the citizen science project may have an eventual impact on other outcomes, but the language used to describe volunteer tasks does not directly suggest these outcomes. When projects describe volunteers as indirectly imparting outcomes, they maintain the primary discourse related to public participation in environmental issues: green governmentality (Lassen et al. 2011). Green governmentality maintains top-down structures in that the public provides help to experts, who then go on to enact change. However, civic environmentalism is a less common discourse in which members of the public drive decision-making and enact change on their own (Bäckstrand and Lövbrand 2006). Although project managers assigning tasks for volunteers will naturally maintain top-down green governmentality, when direct benefits are described, discourse shifts toward civic environmentalism because volunteers more directly impart project outcomes.

Second, these projects describe not only higher-order tasks, but also higher-level benefits to volunteers. Four of the eleven projects describe benefits to volunteer empowerment, while none of them describe increasing volunteer awareness or knowledge. This distinction is rooted in environmental education literature. The Tbilisi Declaration was drafted at the first Intergovernmental Conference on Environmental Education to outline the role and importance of environmental education for sustainability. Increasing awareness and gaining knowledge were viewed as the lowest levels of environmental literacy. However, learning the skills to solve environmental problems and thorough participation in environmental problem solving have been, and still are, viewed as the highest level of environmental literacy (UNESCO 1978; McBride et al. 2013). Therefore, several of the projects that engage volunteers in higher-order tasks also, whether purposefully or inadvertently, describe activities that are in alignment with more holistic and comprehensive forms of environmental education.

The 11 projects that ask volunteers to perform higher-order tasks may intend to empower volunteers, yet this cannot be claimed without follow-up interviews with the project leaders. However, a cross-case analysis of five volunteer biological monitoring projects revealed that level of participation and empowerment were not connected (Lawrence 2006). Volunteers participating in contributory projects were empowered to collect data more independently and more rigorously, and participants in a contractual project were empowered to protect local economically and culturally important resources. While projects on CitSci.org that intend higher-order tasks may involve volunteers in diverse participation and use language that more directly describes benefits, we cannot speculate about whether this is purposeful or whether empowerment of volunteers actually occurs. Further research is needed to understand how language may affect volunteer empowerment.

Low-order tasks in citizen science

Low-order tasks are those that ask volunteers to describe or otherwise measure the environment, often through activities such as monitoring the presence or absence of a species, collecting samples, or uploading a picture. Because each of these tasks involves data collection, most of the projects on the platform are considered contributory (Shirk et al. 2012). These findings are supported by other studies indicating that citizen science is primarily contributory (e.g., Bela et al. 2016; Pocock et al. 2017) and focuses on data collection (Phillips et al. 2019). It is important to note the distinction between task complexity and task order as per Bloom’s taxonomy (Wiggins and Crowston 2015; NASEM 2018). Performed tasks may be complex (e.g., identifying difficult-to-distinguish species or taxa, measuring transects or nested plots, and measuring and collecting verification samples for subsequent laboratory analysis) yet still be coded as low order, given that they require volunteers to describe or measure the environment rather than analyze data or otherwise critically reflect on the meanings or interpretations of results.
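
To make this coding distinction concrete, the sketch below sorts task descriptions by cognitive order. This is not the study's codebook: the verb lists and the keyword-matching rule are hypothetical simplifications, illustrating only the idea that a task is coded by what volunteers are asked to do with the environment or the data, not by how difficult the task is.

```python
# Toy illustration only, NOT the authors' actual codebook.
# The verb lists below are hypothetical examples of low- and high-order
# task language in the sense of Bloom's taxonomy.
LOW_ORDER_VERBS = {"observe", "record", "collect", "measure", "upload", "monitor"}
HIGH_ORDER_VERBS = {"analyze", "interpret", "evaluate", "design", "hypothesize"}

def bloom_order(task_description: str) -> str:
    """Code a task description as 'high order' if it contains any
    higher-order verb, 'low order' if it contains only lower-order
    verbs, and 'uncoded' otherwise."""
    words = {w.strip(".,;").lower() for w in task_description.split()}
    if words & HIGH_ORDER_VERBS:
        return "high order"
    if words & LOW_ORDER_VERBS:
        return "low order"
    return "uncoded"

# A complex task can still be low order: it asks volunteers to measure
# and collect, not to analyze or interpret.
print(bloom_order("Measure nested plots and collect verification samples"))  # low order
print(bloom_order("Analyze trends in the data you collected"))               # high order
```

Note how the first example, although logistically demanding, is still coded low order because the volunteer only measures and collects, mirroring the complexity-versus-order distinction above.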

Previous research on contributory citizen science suggests that these projects may not increase public scientific literacy compared with those projects that engage volunteers in deeper degrees of participation (Bonney et al. 2016). In fact, a content analysis of 327 project websites revealed that if volunteer learning objectives were defined, they were usually low order (Phillips et al. 2018). This is important because environmental education research indicates that participation in low-order tasks alone is insufficient to motivate pro-environmental behavior changes (Balgopal and Wallace 2009) that make up the highest level of scientific literacy (Bloom 1956; UNESCO 1978). Because many volunteers tend to be college educated, or even retired scientists (Geoghegan et al. 2016), they exhibit high levels of scientific literacy prior to participating (Martin 2017). There are different types of scientific literacy that can be improved upon; for example, one can have high ornithology literacy but low watershed literacy (Uno and Bybee 1994). Therefore, individuals who are generally scientifically literate can still benefit from engaging in citizen science activities. Consideration of how project design affects scientific literacy is also important because learning is another common motive for participation (Domroese and Johnson 2017).

Contributory citizen science and engagement in low-order tasks can result in positive outcomes for both volunteers and the scientific community (Shirk et al. 2012). For example, volunteers often report high satisfaction and learning as a result of participation (e.g., Trumbull et al. 2000; Wright et al. 2015). Volunteers sometimes do not want to engage in more than data collection, suggesting that they are satisfied with engaging in low-order tasks (Martin, Christidis, and Pecl 2016; Lewandowski et al. 2017; Phillips et al. 2018). Furthermore, contributory citizen science has benefited the scientific community through expanded temporal and geographic sampling scale (Cooper et al. 2007), cost-effective data collection and analysis (Dickinson et al. 2010), and peer-reviewed publications, whether explicitly mentioned (Kullenberg and Kasperowski 2016) or not (Cooper, Shirk, and Zuckerberg 2014). We by no means intend to undervalue the contributions of contributory citizen science or its dedicated volunteers.

We do, however, wish to push the field of citizen science to consider two critiques drawn from political theory and from participatory democracy. First, common discourse often serves to perpetuate current norms and values, meaning that how we communicate about a phenomenon today affects how the phenomenon will occur in the future (Young 2001). Therefore, it is possible that describing low-order tasks now might affect how future citizen science projects are planned and designed. We recognize that our analysis is limited in that we studied projects on only one platform and therefore cannot generalize to the greater state of communication regarding citizen science. However, this serves as a first step toward interesting questions for future citizen science and communication researchers to consider. Second, programs—in this case, citizen science projects—should be designed with non-dominant groups in mind (Sanders 1997; Kadlec and Friedman 2007). Therefore, even if most citizen science volunteers tend to be generally scientifically literate, project design should focus on how projects can benefit and be accessible to individuals with lower or different types of scientific literacy (NASEM 2018). Only then can we expand the range of stakeholders who benefit from engaging in citizen science.

Project managers and volunteers work together in citizen science communities of practice, and there are different models for evaluating the degree to which they do so (Shirk et al. 2012). In more traditional scientific communities of practice, scientists ask and answer research questions, while technicians perform tasks that help answer the scientists’ research questions and accomplish the scientists’ goals (Shapin 1989; Doing 2004). Hence, when project managers describe low-order tasks for volunteers to perform in contributory projects and define project benefits that may misalign with volunteer motives for participation, volunteers are positioned as citizen technicians rather than citizen scientists.


Limitations

First, we did not confirm the findings of this research with project managers through interviews; we relied on content analysis alone. It is possible that the citizen science projects in our analysis have volunteers perform higher-order tasks than were described. Second, although the coding in this content analysis was based on initial agreement using over 20% of the data, rather than the 100% suggested by Krippendorff (2004), we followed the protocols advocated by O’Connor and Joffe (2020). Third, our analysis assumed that the person who created a project page was the project manager; however, members of the public may have developed their own collegial projects on the platform, indicating that they are engaging in more parts of the scientific process than just data collection, as our results may suggest. Fourth, the platform does not vet projects, meaning that there are no minimum qualifications for people to create projects. While the lack of a vetting process is appealing to newcomers and is inclusive, especially to those involved in projects with few resources, it may also result in busy, time-constrained project managers paying less attention to the ways in which they describe their projects. Fifth, project managers may spend very little time describing their projects on a platform because they have already described their projects in detail elsewhere. Although we examined external website content descriptions to avoid this bias, detailed project descriptions external to those we examined may still exist that we were unable to obtain. Finally, we recognize that the citizen science projects used in our analysis may not be representative of all citizen science projects.
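
The reliability check mentioned above (initial agreement on a subset of the double-coded data, in the spirit of O’Connor and Joffe 2020) can be sketched minimally as follows. This is not the study’s actual procedure or data; the labels and coder outputs below are invented for illustration, and Cohen’s kappa is shown only as one common chance-corrected agreement statistic.

```python
# Illustrative sketch, not the study's actual analysis: simple intercoder
# reliability measures for a double-coded reliability subsample
# (e.g., ~20% of project descriptions). All labels below are hypothetical.
from collections import Counter

def percent_agreement(coder_a, coder_b):
    """Share of items on which the two coders assigned the same label."""
    assert len(coder_a) == len(coder_b) and coder_a
    return sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Agreement corrected for chance, using each coder's marginal label rates."""
    n = len(coder_a)
    p_o = percent_agreement(coder_a, coder_b)
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical double-coded subsample of five task descriptions.
a = ["low", "low", "high", "low", "low"]
b = ["low", "low", "high", "high", "low"]
print(percent_agreement(a, b))       # 0.8
print(round(cohens_kappa(a, b), 2))  # 0.55
```

The gap between raw agreement (0.8) and kappa (0.55) in this toy example shows why chance-corrected statistics are preferred when one label (here, "low") dominates the coding.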


Conclusion

Despite some limitations, our findings suggest that the ways citizen science projects on the platform communicate about the tasks their volunteers perform tend to describe citizen scientists as citizen technicians and to categorize most projects as contributory. Although those engaged in citizen science projects may informally discuss the limited role that volunteers play in scientific inquiry, we are unaware of other systematic analyses of online citizen science materials. We reiterate that the platform we studied is just one citizen science platform and is not representative of the entire citizen science community. Yet, this study is a first step in understanding how people communicate about citizen science. Further research is necessary to understand the broader scope of communication about citizen science and to support or refute these findings by validating them with project managers and volunteers. There is also a need to better understand how described benefits affect volunteer recruitment, retention, or future project design modalities (e.g., whether citizen science projects remain focused on a contributory model). These findings suggest that there may be a missed opportunity to increase volunteer scientific literacy in citizen science and that improved science communication skills among managers describing these endeavors may play an important role in shaping the citizen science community of practice. We conclude that communication on the platform largely describes projects as contributory, with low-order tasks for volunteers. How citizen science project managers communicate about their projects may have implications for current and future volunteer engagement, volunteer agency to enact environmental change, and perceptions of citizen science broadly. These implications can hinder public scientific literacy and therefore public engagement in behaviors that would ultimately benefit the environment.

Supplementary File

The supplementary file for this article can be found as follows:

Supplemental File 1

Appendix 1. DOI:


Acknowledgements

The authors acknowledge the members of the Balgopal lab for providing edits and participating in peer debriefing.

Funding Information

This study was supported, in part, by the National Science Foundation SI2-SSI program (Award # 1550463). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the view of the National Science Foundation.

Competing Interests

The authors have no competing interests to declare.

Author Contributions

All three authors worked collectively to conceptualize this study. The first author extracted the data, co-developed the codebook, and analyzed the data. The second author develops and maintains the platform and provided expertise on analyzing data from it. The third author co-developed the codebook and engaged as an inter-rater coder. All three authors were involved in peer debriefing, writing, and/or editing various drafts of this manuscript.


References

  1. Alender, B. 2016. Understanding volunteer motivations to participate in citizen science projects: a deeper look at water quality monitoring. Journal of Science Communication, 15(3): 1–19. DOI: 

  2. Anderson, LW and Krathwohl, DR. 2000. A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives. New York: Addison Wesley Longman. 

  3. Bäckstrand, K and Lövbrand, E. 2006. Planting trees to mitigate climate change: Contested discourses of ecological modernization, green governmentality and civic environmentalism. Global Environmental Politics, 6(1): 50–75. DOI: 

  4. Balgopal, M and Wallace, A. 2009. Decisions and dilemmas: Using writing to learn activities to increase ecological literacy. Journal of Environmental Education, 40(3): 13–26. DOI: 

  5. Bela, G, Peltola, T, Young, JC, Balazs, B, Arpin, I, Pataki, G, Hauck, J, Kelemen, E, Kopperoinen, L, Van, A, Keune, H, Hecker, S, Suškevičs, M, Roy, HE, Itkonen, P, Kulvik, M, Laszlo, M, Basnou, C, Pino, J and Bonn, A. 2016. Learning and the transformative potential of citizen science. Conservation Biology, 30(5): 10. DOI: 

  6. Bloom, BS. 1956. Taxonomy of educational objectives: The classification of educational goals. New York: D. McKay Co., Inc. 

  7. Bonney, R, Ballard, H, Jordan, R, McCallie, E, Phillips, T, Shirk, J and Wilderman, CC. 2009a. Public participation in scientific research: Defining the field and assessing Its potential for informal science education. A CAISE inquiry group report. Washington, DC: Center for Advancement of Informal Science Education. 

  8. Bonney, R, Cooper, CB, Dickinson, J, Kelling, S, Phillips, T, Rosenberg, KV and Shirk, J. 2009b. Citizen science: A developing tool for expanding science knowledge and scientific literacy. BioScience, 59(11): 977–984. DOI: 

  9. Bonney, R, Phillips, TB, Ballard, HL and Enck, JW. 2016. Can citizen science enhance public understanding of science? Public Understanding of Science, 25(1): 2–16. DOI: 

  10. Bowen, GA. 2006. Grounded theory and sensitizing concepts. International journal of qualitative methods, 5(3), 12–23. DOI: 

  11. Charmaz, K. 2006. Constructing grounded theory: a practical guide through qualitative analysis Vol. 10. New Brunswick, NJ: Rutgers. DOI: 

  12. Conrad, C and Hilchey, KG. 2011. A review of citizen science and community-based environmental monitoring: Issues and opportunities. Environmental Monitoring and Assessment, 176(1–4): 273–291. DOI: 

  13. Cooper, CB, Dickinson, J, Phillips, T and Bonney, R. 2007. Citizen science as a tool for conservation in residential ecosystems. Ecology and Society, 12(2). DOI: 

  14. Cooper, CB, Shirk, J and Zuckerberg, B. 2014. The invisible prevalence of citizen science in global research: migratory birds and climate change. PLoS ONE, 9(9): e106508. DOI: 

  15. Creswell, JW and Miller, DL. 2000. Determining validity in qualitative inquiry. Theory into Practice, 39(3): 124–130. DOI: 

  16. Den Broeder, L, Lemmens, L, Uysal, S, Kauw, K, Weekenborg, J, Schönenberger, M, Klooster-Kwakkels, S, Schoenmakers, M, Scharwächter, W, Van de Weerd, A, El Baouchi, S, Schuit, AJ and Wagemakers, A. 2017. Public health citizen science; perceived impacts on citizen scientists: A case study in a low-income neighbourhood in the Netherlands. Citizen Science: Theory and Practice, 2(1). DOI: 

  17. Doing, P. 2004. ‘Lab hands’ and the ‘Scarlet O’: Epistemic politics and (scientific) labor. Social Studies of Science, 34(3): 299–323. DOI: 

  18. Domroese, MC and Johnson, EA. 2017. Why watch bees? Motivations of citizen science volunteers in the Great Pollinator Project. Biological Conservation, 208: 40–47. DOI: 

  19. Elo, S and Kyngäs, H. 2008. The qualitative content analysis process. Journal of Advanced Nursing, 62(1): 107–115. DOI: 

  20. Geoghegan, H, Dyke, A, Pateman, R, West, S and Everett, G. 2016. Understanding motivations for citizen science. Wiltshire, UK: UKEOF. 

  21. Groulx, M, Brisbois, MC, Lemieux, CJ, Winegardner, A and Fishback, L. 2017. A Role for nature-based citizen science in promoting individual and collective climate change action? A systematic review of learning outcomes. Science Communication, 39(1): 45–76. DOI: 

  22. Jensen, JL, McDaniel, MA, Woodard, SM and Kummer, TA. 2014. Teaching to the test…or testing to teach: Exams requiring higher order thinking skills encourage greater conceptual understanding. Educational Psychology Review, 26(2): 307–329. DOI: 

  23. Jordan, RC, Gray, SA, Howe, DV, Brooks, WR and Ehrenfeld, JG. 2011. Knowledge gain and behavioral change in citizen-science programs. Conservation Biology, 25(6): 1148–1154. DOI: 

  24. Kadlec, A and Friedman, W. 2007. Deliberative Democracy and the Problem of Power. Journal of Public Deliberation, 3(1): 8. DOI: 

  25. Krippendorff, K. 2004. Reliability in content analysis: Some common misconceptions and recommendations. Human Communication Research, 30(3): 411–433. DOI: 

  26. Kullenberg, C and Kasperowski, D. 2016. What Is citizen science? – A scientometric meta-analysis. PLOS ONE, 11(1): e0147152. DOI: 

  27. Lassen, I, Horsbøl, A, Bonnen, K and Pedersen, AGJ. 2011. Climate change discourses and citizen participation: A case study of the discursive construction of citizenship in two public events. Environmental Communication, 5(4): 411–427. DOI: 

  28. Lawrence, A. 2006. “No personal motive?” Volunteers, Biodiversity, and the false dichotomies of participation. Ethics, Place and Environment, 9(3): 279–298. DOI: 

  29. Lewandowski, E, Caldwell, W, Elmquist, D and Oberhauser, K. 2017. Public perceptions of citizen science. Citizen Science: Theory and Practice, 2(1). DOI: 

  30. Lewis, A and Smith, D. 1993. Defining higher order thinking. Theory Into Practice, 32(3): 131–137. DOI: 

  31. Lynn, SJ, Kaplan, N, Newman, S, Scarpino, R and Newman, G. 2019. Designing a platform for ethical citizen science: A case study of Citizen Science: Theory and Practice, 4(1). DOI: 

  32. Martin, VY. 2017. Citizen science as a means for increasing public engagement in science: Presumption or possibility? Science Communication, 39(2): 142–168. DOI: 

  33. Martin, VY, Christidis, L and Pecl, GT. 2016. Public interest in marine citizen science: is there potential for growth?. BioScience, 66(8): 683–692. DOI: 

  34. McBride, BB, Brewer, CA, Berkowitz, AR and Borrie, WT. 2013. Environmental literacy, ecological literacy, ecoliteracy: What do we mean and how did we get here?. Ecosphere, 4(5): 1–20. DOI: 

  35. McKinley, DC, Miller-Rushing, AJ, Ballard, HL, Bonney, R, Brown, H, Cook-Patton, SC, Evans, DM, French, RA, Parrish, JK, Phillips, TB, Ryan, SF, Shanley, LA, Shirk, JL, Stepenuck, KF, Weltzin, JF, Wiggins, A, Boyle, OD, Briggs, RD, Chapin, SF, Hewitt, DA, Preuss, PW and Soukup, MA. 2017. Citizen science can improve conservation science, natural resource management, and environmental protection. Biological Conservation, 208: 15–28. DOI: 

  36. Miri, B, David, BC and Uri, Z. 2007. Purposely teaching for the promotion of higher-order thinking skills: A case of critical thinking. Research in Science Education, 37(4): 353–369. DOI: 

  37. National Academies of Science, Engineering, and Medicine (NASEM). 2018. Learning through citizen science: Enhancing opportunities by design. Washington, DC: National Academies Press. DOI: 

  38. National Research Council (NRC). 2000. How people learn: Brain, mind, experience, and school. Washington, DC: National Academies Press. DOI: 

  39. National Research Council (NRC). 2012. A Framework for K-12 Science Education: Practices, Crosscutting Concepts, and Core Ideas. Washington, DC: The National Academies Press. DOI: 

  40. Newman, G, Wiggins, A, Crall, A, Graham, E, Newman, S and Crowston, K. 2012. The future of citizen science: emerging technologies and shifting paradigms. Frontiers in Ecology and the Environment, 10(6): 298–304. DOI: 

  41. Nisbet, MC and Scheufele, DA. 2009. What’s next for science communication? Promising directions and lingering distractions. American Journal of Botany, 96(10): 1767–1778. DOI: 

  42. Nowell, LS, Norris, JM, White, DE and Moules, NJ. 2017. Thematic analysis: Striving to meet the trustworthiness criteria. International Journal of Qualitative Methods, 16(1): 1–13. DOI: 

  43. O’Connor, C and Joffe, H. 2020. Intercoder reliability in qualitative research: Debates and practical guidelines. International Journal of Qualitative Methods, 19: 1–13. DOI: 

  44. Phillips, TB, Ballard, HL, Lewenstein, BV and Bonney, R. 2019. Engagement in science through citizen science: Moving beyond data collection. Science Education, 103(3): 665–690. DOI: 

  45. Phillips, T, Porticella, N, Constas, M and Bonney, R. 2018. A framework for articulating and measuring individual learning outcomes from participation in citizen science. Citizen Science: Theory and Practice, 3(2): 3. DOI: 

  46. Pocock, MJO, Tweddle, JC, Savage, J, Robinson, LD and Roy, HE. 2017. The diversity and evolution of ecological and environmental citizen science. Plos One, 12(4): 1–17. DOI: 

  47. Raddick, JM, Bracey, G, Gay, PL, Lintott, CJ, Cardamone, C, Murray, P, Schawinski, K, Szalay, AS and Vandenberg, J. 2013. Galaxy zoo: Motivations of citizen scientists. Cornell University Library, 12(1): 1–41. DOI: 

  48. Sanders, LM. 1997. Against deliberation. Political Theory, 25(3): 347–376. DOI: 

  49. Sanderson, BA and Kratochvil, DW. 1971. Science—A process approach, Product Development Report No. 8. Palo Alto: American Institute for Research in the Behavioral Sciences. 

  50. Shapin, S. 1989. The invisible technician. American scientist, 77(6): 554–563. 

  51. Shirk, JJL, Ballard, HHL, Wilderman, CC, Phillips, T, Wiggins, A, Jordan, R, McCallie, E, Minarchek, M, Lewenstein, BC, Krasny, ME and Bonney, R. 2012. Public participation in scientific research: a framework for intentional design. Ecology and Society, 17(2): 29–29. DOI: 

  52. Toomey, AH and Domroese, MC. 2013. Can citizen science lead to positive conservation attitudes and behaviors? Human Ecology Review, 20(1): 15. 

  53. Trumbull, DJ, Bonney, R, Bascom, D and Cabral, A. 2000. Thinking scientifically during participation in a citizen-science project. Science Education, 84(2): 265–275. DOI: 

  54. United Nations Educational, Scientific and Cultural Organization (UNESCO). 1978. The Tbilisi Declaration. UNESCO/UNEP Environmental Education Newsletter, 3(1): 1–8. 

  55. Uno, GE and Bybee, RW. 1994. Understanding the dimensions of biological literacy. BioScience, 44(8): 553–557. DOI: 

  56. Wang, Y, Kaplan, N, Newman, G and Scarpino, R. 2015. A new model for managing, documenting, and sharing citizen science data. PLOS Biology, 13(10): e1002280. DOI: 

  57. Wiggins, A and Crowston, K. 2015. Surveying the citizen science landscape. First Monday, 20: 1–15. DOI: 

  58. Wright, DR, Underhill, LG, Keene, M and Knight, AT. 2015. Understanding the motivations and satisfactions of volunteers to Improve the effectiveness of citizen science programs. Society & Natural Resources, 28(9): 1013–1029. DOI: 

  59. Young, IM. 2001. Activist challenges to deliberative democracy. Political Theory, 29(5): 670–690. DOI: