Introduction

Citizen science (CS) programs provide a critical opportunity for researchers and practitioners in STEM fields to connect to the general public for mutual benefit. Generally, to succeed, CS programs need to recruit and, crucially, maintain or replace a pool of enthusiastic participants. This paper presents a framework for understanding volunteer participation dynamics that elucidates the varied ways in which volunteers try out CS programs (nibble) and recognizes that for many participants, there might be a natural end to any particular volunteer activity (drop). The framework thus distinguishes a natural end to participant engagement from a premature one on the basis of perceived challenges, flaws, or problems that emerge from the characteristics of the CS project/program itself. The Nibble-and-Drop Framework provides an overall language and structure for describing participation dynamics, and allows researchers and practitioners to develop reasonable expectations for a particular CS program’s participation dynamics.

Benefits of citizen science

CS programs create a bidirectional flow of benefits between scientists and volunteers, democratizing the practice of science (; ; ). In the majority of CS projects, scientists provide opportunities and structures within which volunteers can engage in the scientific process, strengthen connections to their local community, become more scientifically literate, increase self-efficacy for doing science, and increase their social well-being (; ). Scientists benefit as participants contribute to their research or monitoring programs through data collection, analysis, and at times program design and research-question development, supporting positive scientific outcomes in crucial ways (; ; ; ; ; ; ).

Benefits for program participants vary tremendously and depend on the type of CS activity and on the agendas, needs, and backgrounds of participants. Participants may gain an understanding of something as granular as the habitat and range of specific bird species, and as broad as the process of science and scientific thinking (; ). In turn, CS programs may promote improved attitudes toward science, positive behavior changes, and increased motivation for action and skill development (; ; , ). Skill development is closely connected to self-efficacy toward scientific work—participants’ belief in their ability to do science by actively and appropriately contributing to scientific research (; ). Many CS programs enable communities to monitor local environmental, public health, or other community issues through collaborative data collection, thereby contributing to social well-being at the community level (, ).

Structure of citizen science programs

Participants in CS can be involved in programs from initial project design and development to data analysis, interpretation, and dissemination of results (). Most commonly, participants contribute through data collection for research and monitoring (), referred to as contributory projects in Shirk et al. (). However, the degree of participant agency and involvement ranges from simple contribution to co-creation and autonomous design and implementation (). Consequently, depending on the program’s nature, volunteer contributions may be as simple as taking a photo using a mobile application or as complex as highly technical and skilled observations, analyses that require significant investment in training, and commitments to civic action, participatory democracy, and community science.

While CS programs vary in scale from local to global, the scientific outcomes of CS are often based on harnessing the power of a volunteer collective; indeed, the clearest differentiator between CS and conventional science is the involvement of (usually) unpaid non-professional individuals who volunteer. With their help, research projects can amass datasets of greater spatial and temporal extent and resolution than would be possible with paid researchers alone (; ). Scientific success depends on participants’ level of engagement and their commitment to specified protocols and procedures. Research on CS participants has shown that volunteer factors such as demographics, training level, and familiarity with the methods and scientific concepts affect a program’s scientific outcomes (; ). Thus, programs must consider various factors when implementing recruitment and retention strategies to optimize their science outcomes.

Participant retention and participation factors

Aside from improving scientific outcomes in CS, retaining participants is also cited as important for maintaining institutional knowledge and overall morale among participants (; ). Previous research on participation has elucidated the relationship between retention rates and various participation factors, including mode of recruitment/initial awareness of the program, motivations to participate, and demographics (; ; ; ; ; ; ). Participants hear about and are recruited into CS programs in various ways—through flyers, a booth or presentation at a science-related event, promotion through a science museum, public libraries, or nature centers, and now commonly through social media and other online platforms ()—and recruitment strategies per se do not appear to be systematically tied to later retention.

More important are the relationships between participant motivations, recruitment, and retention, a subject that has received considerable research attention (; ; ; ). Studies that have examined participant motivations have found consistently shared features in reported motivations among participants of various programs. These include an interest in the research topic, a desire to learn new information or to contribute to scientific research (; ), a desire to engage in the program task (i.e., how what one does fits one’s life circumstances []), a communitarian agenda as it relates to the topic, interest in research or science in general, and ultimately, the degree to which personal factors sufficiently align with a particular program to make it the choice for spare-time activity over the many other options that might present themselves.

Some studies have examined how initial motivations to engage in a CS project may change over the course of participation, and how motivations may differ between highly active participants and those who are less active or who might drop out. For instance, Cox et al. () found that motivations relating to supporting scientific research became less important as a participant continued engagement. In contrast, motivations related to personal benefits, such as a perception of ongoing learning, continued to drive participation from initial engagement to long-term commitment and sustained engagement. Eveleigh et al. () found differences in motivations between different categories of participation: High contributors, for instance, had a strong personal interest in the subject matter of the program. Tiago et al. () studied drop-outs using a motivations survey of registered participants of Biodiversity4all. The participants were divided into four categories: never participating, participating occasionally, participating regularly, and participating a lot. They found significant differences across groups in the initial motivations to register: Motivations related to gaining incentives and rewards were high among those who participated the least. In contrast, those who participated more were likely to be motivated by their personal interest in the program and desire to contribute to science. The results suggest that extrinsic motivators might help CS projects succeed initially in recruitment but might matter less for sustaining engagement over time.

Alas, general interest in the topic of a particular program is not by itself sufficient enticement for participation: Many participants who express interest in a program never become strong contributors, or never contribute at all (; ). Other factors need to be in place (or barriers lowered or removed) to turn interested members of the public into participants in predictable ways. For example, Martin et al. () found that barriers to participation include technology access and design; clearly, without computers at home or easy and convenient access to the internet, individuals will not likely engage in projects that require both, nor will they engage if they do not know how to use the technology (because of poor design or lack of technical savvy). However, once individuals manage to engage in a project (and presumably overcome structural or technical barriers), lack of available time and waning interest in the topic become the main reasons for dropping out (; ).

Participant demographics such as age, employment status, educational attainment, and gender have previously been shown to correlate with degrees of CS participation (; ; ). However, gathering data about inactive participants is a challenge for many programs (i.e., ; ; ); thus, much of the demographic information on participants comes from fairly engaged and active volunteers. We know less about who is not engaged in CS in the first place or who is not being retained (; ).

The typical demographic composition of CS programs has resulted in a narrative that describes the archetypal citizen scientist as white, highly educated, likely retired, and female (). The National Academies of Sciences, Engineering, and Medicine (NASEM) conducted a literature review on CS participant demographics for their 2018 consensus report, “Learning Through Citizen Science: Enhancing Opportunities by Design.” This review of 32 sources and 68 programs challenged that narrative in some respects: It found a slight male bias but confirmed an overwhelmingly white and well-educated group of participants. Age and employment status (whether participants are retired or working) varied widely among the programs NASEM reviewed and seemed to depend largely on the nature or type of program. For instance (and not surprisingly), online programs appealed more to younger volunteers.

The Nibble-and-Drop Framework

We built the Nibble-and-Drop Framework to capture the full range of participation degrees, from those exploring a program to those who are highly active, and inclusive of prior volunteers who are no longer active. This framework gives practitioners, evaluators, and researchers the language to describe the dynamics of participation. It can serve as a scaffold to elucidate the relationship between participant factors (such as demographics and motivations) and participant retention. Additionally, the framework provides language to describe reasonable expectations for recruitment and retention in CS programs by acknowledging a somewhat natural arc of volunteer engagement. For instance, do those who drop do so because the project ran its course in fulfilling personal agendas or because the project created too many obstacles, barriers, or inconveniences? Does ease of nibbling provide incentive to try a program before ultimately committing (thereby potentially creating a false sense of participant interest among researchers)? Do higher hurdles on entry (e.g., by requiring initial training) lead to higher levels of retention, presumably by weeding out mildly interested persons?

Structure of the Nibble-and-Drop Framework

The degree of participation in the Nibble-and-Drop Framework is divided into five categories (see Figure 1). 1) Initial-droppers are people who initiate participation in a program but do not contribute. 2) Nibblers are those who continue to participate but are not high contributors. 3) Nibble-droppers are people who participated for a short time, made some contribution, and then left the program. 4) Hooked participants are high and sustained contributors. 5) Hooked-droppers are former high contributors who are no longer participating. Together, these categories capture two key aspects of the dynamic engagement of a CS participant: the temporal aspect of their participation and their level of contribution to a program. The angling metaphor used in this paper is deliberate, meant to evoke the image of a fish that nibbles at a worm but never bites.

Figure 1 

A grid outlining the categories within the Nibble-and-Drop Framework. What we call initial droppers are people who make no contributions to a program after signing up to participate. The other four categories are at the various intersections of participation rates and recency. Participants who remain active are either hooked or nibbling, depending on how much they contribute. Participants who are no longer active are either hooked-droppers or nibble-droppers, depending on how much they contributed.

Figure 2 shows one possible engagement pathway for CS participants. Initial exploration of a program may involve a putative citizen scientist asking the question, “Given all else I can do, will this be something for me?” In this initial exploration, participants may be initial-droppers and decide early on, after gaining a sense of the program’s mechanics, to discontinue participation. They do not need to fully engage in the program’s entire life cycle to understand whether it is for them. Alternatively, nibble-droppers experience the full extent of the program before they discontinue. Nibble-droppers may also be those who engaged for a particular, limited-time-only effort (e.g., contributing to a one-time event like a data challenge or Bioblitz), after which they may see their commitment to a program fulfilled. Participants who continue fall into two categories: Nibblers continue their involvement, but at a low or inconsistent engagement level, and hooked participants become fully committed and engage at a high rate. The differences between nibblers and hooked participants might lie in motivation and opportunity, but both may appreciate the program and its value. We added the hooked-dropper category to indicate that retirement is an essential element in the life cycle of a program (Figure 2).

Figure 2 

A model showing the possible pathway of a volunteer’s participation in a citizen science program over time, demonstrating how their participation may fall into different categories of our Nibble-and-Drop Framework, depending on how they contribute and if and when they choose to stop participating.

Validating the framework

We tested the usefulness of the Nibble-and-Drop Framework by using it to analyze the participation dynamics of a cohort of NASA GLOBE Observer (GO) volunteers. GO is a CS program in which participants use a mobile app (https://observer.globe.gov) to record observations of various earth science–related phenomena and thereby contribute to Earth system research. Based on Shirk et al.’s () classification of CS project types, GO is a contributory project: It is designed by researchers, and members of the public primarily contribute data. Because app users are required to register for an account using an email address, we were able to survey not only active contributors to the program but also those who had ceased participation.

Contributory projects are relatively well represented in the CS literature, and much is known about the way individuals make contributions in these types of CS programs. Wald, Longo, and Dobell (), for instance, noted the “long-tail phenomenon,” whereby the majority of observations in a contributory CS program come from a small number of participants, referred to by Eveleigh et al. () as “super users.” In contrast to super users, Eveleigh et al. describe users who are driven by curiosity and provide intermittent, small-scale contributions to a CS program as “dabblers or low contributors.” The contribution rates for forms of participation other than data contribution (such as data analysis or research-question development) and for project models other than contributory ones are less well documented in the literature. However, the Nibble-and-Drop Framework could easily be applied to other program models because, at its foundation, it does not specify any particular type of CS project. It would predict different participation dynamics between project types insofar as those systematically differ in how they onboard volunteers, provide incentives for continued engagement, or include natural offramps for those who wish to cease their engagement.

Unlike quantity metrics for participation, the temporal dimension of participation has not yet been comprehensively captured by researchers (). However, understanding the changes in a volunteer’s participation in CS over time is an important prerequisite for assessing the degree to which a CS program may meet participant-related goals, such as fostering scientific literacy or supporting science identity development. One example of a framework that captures the progression of voluntary participation over time is Preece and Shneiderman’s Reader-to-Leader Framework (). By describing how people contribute voluntarily to online platforms, such as social media, their framework captures a volunteer’s evolution from simple contributor to collaborator and, eventually, leader of other contributors. Another study, by West and Pateman (), introduced the idea of a four-stage participant journey, beginning with awareness of the opportunity and ending with “finished participation.” Although the West and Pateman stage model is a valuable way to understand participants’ varying motivations at different points in their journeys, it implies a linear trajectory with a finite end. This does not fully reflect participants’ experience in many CS programs, where there is no finite end to the program, and many potential exit points or offramps exist.

Building on previous research on the relationship between recruitment and retention rates in CS (e.g., ; ), our framework was designed as a tool for examining the relationship between degrees of participation and participation factors. To validate the framework, we chose to focus on three commonly cited factors of CS participation: 1) demographics (specifically age, education, gender, and employment); 2) initial awareness (i.e., how a participant finds out about the program or what mode of communication is used to recruit participants); and 3) motivations to initiate, continue, or stop participation.

We introduce the Nibble-and-Drop Framework and test its usefulness as a model to study the dynamics of participation in CS programs. Specifically, we ask two questions: 1) How is this framework useful for understanding the degrees of participation in a CS program? And 2) How can this framework be used to understand the relationship between the degrees of participation and participant factors in a CS program? We tested the framework by examining the rate and frequency of contribution to the GO program against participant factors such as demographics, mode of initial awareness, and motivations for participation in the program.

Methods used to validate the framework

We tested the usability of the framework by using it to classify the degrees of participation for GO, and we examined the relationships between the degrees of participation of GO volunteers and the three participant factors described above—demographics, initial awareness, and motivations.

The GO mobile app allows participants to make observations of clouds, mosquito habitat, land cover, or tree height; for a limited time in 2017, the mobile application also featured an eclipse tool to record observations during the 2017 North American solar eclipse event (Figure 3). This paper focuses on three of the protocols that volunteers can contribute to in the app: Clouds, Mosquito Habitat Mapper, and Land Cover. The Trees protocol was not released to the public at the time of data collection. Use of the mobile app and participation in GO are promoted by data collection challenges (such as seasonal cloud challenges and the 2017 eclipse event), social media advertising campaigns, and partnerships with organizations like the National Park Service and the Girl Scouts. These outlets help promote the app to a broad and diverse audience. The GO program is a fairly low-barrier-to-entry CS program, meaning that the initial time and material investment is minimal.

Figure 3 

GLOBE Observer (GO) mobile application interface and photos of volunteers collecting observations. Clockwise from the top: GO volunteers observing clouds for the GLOBE Clouds protocol, a GO volunteer counting mosquito larvae for the Mosquito Habitat Mapper protocol, and a GO volunteer beginning to make an observation with the Land Cover protocol tool. The simplest and most popular protocol, Clouds, asks volunteers to identify clouds and take photos of the sky. In contrast, the Mosquito Habitat Mapper protocol is more labor-intensive: It asks volunteers to identify mosquito larvae and take photos of them, with the option to use additional equipment (a magnifier zoom lens and mosquito trap). The Land Cover protocol asks volunteers to take pictures of their surrounding land cover and optionally classify the types of land cover they have captured. Photos courtesy of Autumn Burdick, NASA GLOBE Observer Communications Director.

To join the GO program, participants download the app to their mobile devices, then provide an email address and a password to log in. To make observations, volunteers work through a simple training routine embedded in the app, specific to each protocol. At the time of survey distribution, there were more than 160,000 volunteers in the user base worldwide and more than 75,000 in North America (Canada and the United States). We chose to survey a random sample of 4,000 North American GO participants (we limited responses to North American participants because of the scope of a larger evaluation project of the GO program).

Since our framework focuses on the dynamics of engagement in CS, our sampling protocol was designed to recruit GO participants in all five categories of our framework to respond to a survey regarding the participation factors we examined. With the existing API (application programming interface) used to retrieve GO user account information, we could determine the number of observations each volunteer had made. The 4,000 GO volunteers sampled for the study comprised a group of 2,000 above-average observers (meaning they made more than the average number of observations in the app, which, at the time of data collection, was 7.8) and a group of 2,000 below-average contributors who made fewer than the average number of observations. Estimating a response rate of 15%–25%, a random sample of 4,000 GO participants would provide an acceptable margin of error of 3%–4% at a 95% confidence level.
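As an illustration of this estimate, the following minimal sketch in R (our analysis environment; see below) computes the expected margin of error for a simple random sample with a finite population correction. The figures are assumptions drawn from the text rather than our actual analysis code: a North American user base of roughly 75,000, a 20% response rate on the 4,000-person sample, and the conservative proportion p = 0.5.

```r
# A minimal sketch (not the program's code) of the expected margin of error
# for a simple random sample, with a finite population correction (FPC).
N <- 75000         # approximate North American GO user base (from the text)
n <- 4000 * 0.20   # assumed 20% response rate on the 4,000-person sample
p <- 0.5           # conservative response proportion
z <- 1.96          # z-score for a 95% confidence level

se  <- sqrt(p * (1 - p) / n) * sqrt((N - n) / (N - 1))  # standard error with FPC
moe <- z * se
round(moe, 3)      # ~0.034, i.e., within the reported 3%-4% range
```

Rerunning the same calculation with the lower assumed response rate of 15% (n = 600) yields a margin of error of roughly 4.0%, bracketing the reported range.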

NASA GLOBE Observer (GO) participant survey

The survey was released to the 4,000 GO participants on January 31st, 2019, and closed on March 31st, 2019 (see Supplemental File 1: GO Participant Survey). Potential respondents were sent a personalized email invitation to participate in the survey with an individualized link to take the survey via the Qualtrics online platform. Given the nature of the survey items, expert review was used to establish validity, and we piloted the survey with reliability and validity testing. The study was approved for exempt status by the Oregon State University Institutional Review Board (OSU IRB study #IRB-2019-0175). All study participants consented to participate in the study and were aware that their responses would be used in peer-reviewed publications and project reports.

In addition to basic demographic information (e.g., age, gender, and education level), we asked volunteers about their initial motivations for downloading the app and, if they had ceased participation, their reasons for leaving. We also asked how they initially learned about the GO app (Table 1). Finally, respondents had the opportunity to provide feedback on the app’s usability, including the training protocol and the ease of submitting their observations. This set of questions was excluded from the present study because it was part of a larger evaluation of the app itself and was not captured in a way that could be incorporated into our case study. The survey questions and question choices were influenced by similar CS participant surveys such as Raddick (2009), Tiago et al. (), Crall (2013), Fischer and Wentz (), and Domroese and Johnson (), and by the Volunteer Functions Inventory developed by Clary et al. (). The questions and choices were adapted to fit the context and nature of the GO program.

Table 1

Survey question options.


QUESTION TOPIC | SURVEY QUESTION(S) | ANSWER CHOICES

Mode of recruitment | How did you initially find out about the GLOBE Observer App? (Select all that apply)
* Answered by all respondents
  • Social media
  • Online news article
  • Public event
  • At a science museum, nature center, or state/national park
  • School or after school program
  • Conference or workshop
  • Colleagues
  • Friend or Family
  • GLOBE website
  • SciStarter.com
  • Other websites
  • Other

Motivations to participate | What initially motivated you to participate in GLOBE Observer? (Select all that apply)
* Answered by all respondents
  • I wanted to contribute to NASA research
  • I wanted to contribute to science
  • I wanted to learn more about clouds
  • I wanted to learn more about mosquitoes
  • I wanted to learn more about land cover
  • I was looking for a hobby
  • I felt that it is important to volunteer for a worthy cause
  • I wanted to spend time outdoors
  • I wanted to spend quality time with friends or family
  • I wanted to do something productive with my spare time
  • Other

What motivates you to continue to participate? (Select all that apply)
*Answered by nibblers and hooked respondents
  • I feel my contributions are helpful
  • I like spending time outside
  • I like to do this activity with friends and family
  • I feel appreciated for my contributions
  • It is easy to participate
  • I feel that it is meaningful
  • It is my duty to contribute
  • It is easy to do
  • Other

Why did you stop participating? (Select all that apply)
*Answered by nibble-droppers and hooked-droppers
  • I don’t have enough time
  • I felt like I wasn’t making a contribution
  • It was too complicated
  • I encountered technical problems
  • I did not get what I wanted out of the app
  • I am no longer interested
  • I signed up to participate at a one-time event
  • It wasn’t what I expected
  • It was not engaging
  • I found some other CS project more fitting for me
  • Other

Why did you not continue to participate in GLOBE Observer? (Select all that apply)
*Answered by initial-droppers
  • I was not interested
  • I did not find time
  • I felt it was too complicated
  • I had technical issues
  • I do not feel like my observations would matter
  • It wasn’t what I expected
  • Other

We used the framework to categorize participants based on the quantity and recency of their observations. We tied participant contributions to the number and timeliness of observations because the science goal of GO is to amass a large-scale, global Earth-observation dataset; for other types of programs, other measures of the degree and recency of participation may be appropriate. For this validation, participants’ categories were determined by their responses to the questions, “How many total observations have you made so far?” and “When was the last observation you made?” Initial-droppers are people who created a GO account and downloaded the app but did not make any observations with it. Nibble-droppers made fewer than five observations (five represents the bottom quartile of observation frequency for all GO participants) and did not make an observation in the six months before the survey response date. Hooked-droppers made five or more observations but likewise made none in the six months before the survey response date. Nibblers were active at the time of the survey (i.e., made an observation in the six months prior) but had made fewer than five observations. Hooked participants were active at the time of the survey and had made five or more observations.
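To make these rules concrete, the sketch below expresses them as a small R function. The function and argument names are hypothetical (they are not drawn from the GO codebase); the thresholds of five observations and six months of inactivity follow the definitions above, and the default survey date is the survey’s release date reported earlier.

```r
# A minimal sketch of the categorization rules described above. Function and
# argument names are hypothetical; the thresholds (five observations, six
# months of inactivity) follow the text.
classify_participant <- function(n_obs, last_obs_date,
                                 survey_date = as.Date("2019-01-31")) {
  cutoff <- seq(survey_date, by = "-6 months", length.out = 2)[2]
  active <- !is.na(last_obs_date) && last_obs_date >= cutoff
  if (n_obs == 0) return("initial-dropper")
  if (n_obs < 5) {
    if (active) "nibbler" else "nibble-dropper"
  } else {
    if (active) "hooked" else "hooked-dropper"
  }
}

classify_participant(0,  as.Date(NA))           # "initial-dropper"
classify_participant(3,  as.Date("2018-03-01")) # "nibble-dropper"
classify_participant(12, as.Date("2018-12-15")) # "hooked"
```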

We performed a quantitative analysis of survey questions that asked participants to estimate when they were last active and how many observations they had made at the time of the survey. We used chi-square tests to examine the relationship between degrees of participation and participation factors, and calculated standardized residuals to identify which specific variables of those participation factors drove significant results (). Descriptive statistics were used to assess motivations for continuing to participate or dropping out. All chi-square tests, residual calculations, and descriptive statistics were run in R version 4.0.2 with the car and gmodels packages. Many of the questions included an “other” answer option with a subsequent opportunity to fill in a text response; these responses were coded by analytically processing the text into themes and categories () so that they could be used in quantitative analyses. Many of the “other” responses were thematically similar to the multiple-choice options in the question; respondents generally used the “other” text box to add detail to their initial response. The Dedoose platform was used for analyzing text responses.
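The sketch below illustrates the shape of this analysis rather than reproducing our scripts. The data frame `survey` and its columns `age_group` and `category` are hypothetical stand-ins for our survey data; `chisq.test()` is base R, and `CrossTable()` comes from the gmodels package named above.

```r
# A minimal sketch of the chi-square analysis described above, assuming a
# hypothetical data frame `survey` with one row per respondent, a column
# `category` holding the five framework categories, and a column `age_group`.
library(gmodels)  # provides CrossTable()

tab <- table(survey$age_group, survey$category)

chisq.test(tab)         # overall test of independence
chisq.test(tab)$stdres  # adjusted standardized residuals; cells beyond roughly
                        # +/-2 occur more or less often than expected by chance

# One-call alternative that prints the table, expected counts, chi-square
# test, and standardized residuals together
CrossTable(survey$age_group, survey$category,
           expected = TRUE, chisq = TRUE, sresid = TRUE)
```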

Survey Results in Light of the Framework

In total, 1,051 individuals responded to the survey, an initial response rate of 26%. Of these responses, 782 were considered suitable for data analysis. Respondents who did not complete the first section of questions (asking about their recent participation in GO) were omitted because they did not contribute essential data, lowering the functional response rate to about 20%. The final sample is considered representative of North American GO participants, with a confidence level of 95% and a 3.5% margin of error. Because less than 10% of the data was missing from any given survey question, we refrained from imputing missing data to increase statistical power ().

To address our first question on how this framework might help understand the degrees of participation in a CS program, we used the framework to sort participants into the five categories of degrees of participation (Figure 4). Those who made no observations whatsoever are initial-droppers: They comprised 22% of our total respondents. Nibble-droppers, who made fewer than five observations and were not active in the six months prior to the survey, were 21% of our sample. The smallest category in our sample was nibblers (4%), i.e., those who were active in the six months prior but had made fewer than five observations. Hooked participants represented 22% of the sample. Hooked-droppers, those who had not been active in those six months but had made five or more observations before becoming inactive, were the largest category at 31%.

Figure 4 

A grid showing how GLOBE Observer participants were grouped into Nibble-and-Drop Framework categories. Sample sizes indicate the number of survey participants who fell into each category according to the metrics chosen—fewer than five observations versus five or more observations, and activity within the past six months.

To address the second question of how this framework might be used to understand the relationship between the degrees of participation and participant factors in a CS program, we analyzed the responses to three sections of the survey: demographics, initial awareness of the program, and motivations (for initial participation, for continued participation, and for dropping out). The most typical respondent was between 35 and 44 years old, female, and employed full time, with a college degree or higher (Table 2). Overall, about 56.5% of contributors to GO were women, more than 60% of all contributors were between the ages of 35 and 64, about three-quarters of contributors had a college degree or more (and more than 36% held graduate degrees, which is about 3.5 times the national average []), and 55% were employed full time (slightly above the national average of around 50%).

Table 2

Demographics of survey respondents. Percentages shown represent the percentages of individuals who are in a particular demographic, per total number of individuals in each category of participation, excepting the % of total column. Sample sizes are slightly smaller for Gender and Employment because some participants chose not to report responses to those categories.


DEMOGRAPHIC | | INITIAL-DROPPERS | NIBBLE-DROPPERS | NIBBLERS | HOOKED-DROPPERS | HOOKED | % OF TOTAL

Age | Under 24 | 6.16% | 12.90% | 26.47% | 12.02% | 12.20% | 11.7%
| 25–34 | 11.64% | 16.77% | 8.82% | 15.88% | 16.46% | 15.0%
| 35–44 | 21.23% | 25.16% | 29.41% | 22.75% | 15.24% | 21.6%
| 45–54 | 18.49% | 18.71% | 14.71% | 20.17% | 21.95% | 19.7%
| 55–64 | 18.49% | 16.77% | 14.71% | 21.46% | 21.95% | 19.7%
| 65–74 | 21.92% | 8.39% | 5.88% | 7.30% | 9.76% | 10.9%
| 75 or older | 2.05% | 1.29% | 0.00% | 0.43% | 2.44% | 1.4%

Gender | Female | 55.86% | 68.42% | 45.45% | 56.22% | 48.77% | 56.5%
| Male | 44.14% | 31.58% | 54.55% | 43.78% | 51.23% | 43.5%

Education | Some high school | 1.38% | 3.23% | 5.88% | 5.98% | 6.10% | 4.5%
| High school graduate | 3.45% | 3.23% | 5.88% | 2.99% | 4.27% | 3.6%
| Some college | 18.62% | 18.71% | 20.59% | 16.24% | 12.80% | 16.7%
| Associate’s/Bachelor’s | 33.10% | 39.35% | 47.06% | 36.32% | 45.73% | 38.9%
| Graduate degree | 43.45% | 35.48% | 20.59% | 38.46% | 31.10% | 36.3%

Employment | Student | 6.29% | 9.40% | 18.18% | 9.50% | 15.29% | 10.5%
| Employed part-time | 7.69% | 15.44% | 15.15% | 9.50% | 14.65% | 11.8%
| Employed full time | 53.85% | 53.69% | 54.55% | 61.99% | 48.41% | 55.2%
| Unemployed | 6.99% | 7.38% | 6.06% | 7.69% | 4.46% | 6.7%
| Retired | 25.17% | 14.09% | 6.06% | 11.31% | 17.20% | 15.8%

Results from chi-square analysis of these demographic factors against our five participation categories show that age (χ2(28) = 48.8, n = 782, p = 0.009) and employment (χ2(28) = 43.5, n = 782, p = 0.03) were statistically significant factors associated with participation in GO, whereas education and gender were not. Adjusted standardized residuals show that respondents aged 65 to 74 and retired respondents were more likely than expected to be initial-droppers (standardized residuals = 4.025 for respondents ages 65–74 in the initial-dropper category and 2.954 for retired respondents in that category).

Respondents were asked to select how they found out about GO. Several groups became aware of GO through social media posts, face-to-face interactions during science festival–type events, science museums, word of mouth from family or friends, or an educational experience. Social media and online news sites were the most commonly selected responses. However, chi-square analysis showed no statistically significant relationships between the degree of participation and how volunteers initially became aware of GO.

Survey respondents were also asked to select their initial motivations for joining GO. They selected all the answers that applied from a provided list, with the option to write in additional motivations (Figure 5). The most frequently selected motivations across participation categories were “I wanted to contribute to science,” “I wanted to contribute to NASA research,” and “I wanted to volunteer for a worthy cause” (i.e., altruistic reasons dominated motivations to initially engage).

Figure 5 

The distribution of the top three initial motivations of all participants who joined GLOBE Observer, by framework category. (These were also the top three reasons for all participant categories except for nibble-droppers, for whom the top three were contributing to NASA science, contributing to science, and learning about clouds, and the fourth was volunteering for a worthy cause).

Chi-square analysis shows that the relationship between degree of participation and the motivation to contribute to NASA research is statistically significant (χ2(4) = 57.1, p < 0.001), as is the motivation to contribute to science in general (χ2(4) = 35.9, p < 0.001). Analysis of chi-square residuals showed that initial-droppers were less likely than any other participation group to select either “contribute to science” (standardized residual = 3.297 for initial-droppers not choosing this motivation) or “contribute to NASA research” (standardized residual = 4.898 for initial-droppers not choosing this motivation). Descriptive statistics, however, suggest that “contribute to science” and “contribute to NASA research” were still top motivations for initial-droppers; other motivations, such as doing something productive with spare time and volunteering for a worthy cause, were simply also frequently chosen.

Nibblers and hooked survey respondents were asked about the factors that motivate them to continue participating. We analyzed the data for both groups separately and found similar responses. To avoid textual redundancy, the results from both groups are reported together. For both nibblers and hooked respondents, the top three factors were the same: “I feel my contributions are helpful” (75% of hooked respondents, 71% of nibblers), “It is easy to participate” (68% of hooked respondents, 51% of nibblers), and “I feel it is meaningful” (58% of hooked respondents, 43% of nibblers).

Survey respondents who reported that they no longer contributed to GO (hooked-droppers and nibble-droppers) were asked about their reasons for dropping. Both groups reported the same top three factors influencing their decisions to stop participating: “I signed up to participate at a one-time event” (35% of hooked-droppers and 34% of nibble-droppers), “I don’t have enough time” (26% of hooked-droppers and 22% of nibble-droppers), and “I encountered technical problems” (14% of hooked-droppers and 12% of nibble-droppers). Most of the respondents who noted that they signed up to participate at a one-time event cited the total solar eclipse that traveled over much of the United States in August of 2017 as the initiating event for contributing to GO: A special solar eclipse protocol was developed for the app and promoted by NASA around this time. Respondents who were considered initial-droppers were asked why they did not make any observations. The top three reasons cited were not having enough time (30%), having signed up for just a one-time event (19%), and finding the process too complicated (11%).

Discussion of Framework Testing

This paper presents a new framework that provides a language and structure for describing participation dynamics and allows researchers and practitioners to develop reasonable expectations for a particular CS program’s participation dynamics.

We used responses from a survey given to GO volunteers to validate the framework and to answer two research questions about the framework’s capabilities:

  1. How is this framework useful for understanding the degrees of participation in a CS program?
  2. How can this framework be used to understand the relationship between the degrees of participation and participant factors in a CS program?

This validation process successfully helped us categorize GO participants based on degrees of participation and understand the relationship between the degree of participation and demographics, mode of recruitment, and motivations to participate or drop out. Therefore, we posit that this framework promises to help other researchers and evaluators examine program retention and recruitment and help program practitioners develop targeted strategies for recruiting and retaining volunteers.

To answer the first question, we examined whether the framework could be used to categorize the GO program’s participation into five degrees of participation. The Nibble-and-Drop Framework indeed captured the different degrees, and we were able to use it further to describe the dynamics of engagement in the GO program. For a global-scale, app-based program, it is important not only to capture variations in program participation among those who chose to contribute but also to understand the group of potential contributors who, after initiating participation, decided that the program was not for them.

Because GO is a low-barrier-to-entry program, it is relatively easy for a participant to drop out at any point, even after being highly engaged. This framework helped us capture that dynamic by separating the initial-droppers from the nibble-droppers and the hooked-droppers. We were able to discern that the largest category of participants in our sample of GO was hooked-droppers, who were at one time contributing large numbers of observations to the program, but at the time of our study were no longer active. Incidentally, the framework allowed us to understand that retirement from the program rather than problems with the program might drive previously active participants to stop volunteering: The initial motivation to contribute as part of a one-time event was mentioned almost three times more frequently than technical difficulties with the program.

The framework also provides us with a baseline participation rate across the different degrees of participation we identified. We plan to continue to track participation using the framework to create a longitudinal dataset of retention and recruitment in the GO program. This categorization also allows us to compare these different groups of participants and understand how factors such as the ones we focused on—demographics, recruitment, and motivation—may affect degree of participation and retention rates.

To address the second question of how this framework might be used to understand the relationship between the degrees of participation and participant factors in a CS program, we examined the relationship between degrees of participation in the GO program and participation factors. The framework helped us distinguish between those who initiated participation (downloading the app and registering) not because they wanted to make contributions but because they were still assessing the fit between the user experience and their needs and interests, and those who wanted to make contributions but encountered personal or technical hurdles.

The participant demographics in our study sample were mostly in line with those of other CS programs: Most of the respondents were adults (above the age of 35), and the vast majority were highly educated (; ). Like researchers in other studies (; Crall et al. 2011), we saw a slight majority of respondents identifying as female. Our chi-squared analysis revealed that gender and education were not significantly correlated with our five participation categories, whereas age and employment were. The age effect was driven by the overrepresentation of 65-to-74-year-olds in the initial-dropper category, possibly reflecting a higher rate of discomfort with smartphone-based apps in this age group. This is contrary to Brouwer and Hessels (), who found that younger participants show a greater probability of dropping out. Because GO is a mobile app, technological challenges may indeed have created barriers for older adults who attempted to engage with the program. For instance, research on online CS programs shows they are more popular with younger volunteers (), indicating that CS programs enabled by or focused on computers or mobile technology might favor younger participants.

As a large-scale, global program, GO strives to be accessible to a wide range of participants, including older adults. The insights derived from our framework analysis prompted the GO team to develop specialized programming for older adults. In the summer of 2019, the GO team initiated a partnership with the Osher Lifelong Learning Institute (OLLI) and piloted a course for older participants to learn about the GO program at the University of Hawaii at Manoa OLLI (UHM OLLI). The course was titled “Be a NASA GLOBE Observer: Join OLLI-UHM’s Inaugural Citizen Science Group.” The program was designed to show participants how to use the app to make observations and to explain the science that motivated the GLOBE Observer program, demonstrating how observations matter. Users were encouraged to make observations with family members, creating a multigenerational CS experience. Initial evaluation results from this program are promising and show that the participants are interested and eager to contribute to NASA science. Plans to expand and continue this program were temporarily halted because of the global COVID-19 pandemic, which began in early 2020.

Our framework allowed us to distinguish between the unique needs of different categories of active and inactive participants. By recognizing that many initial-droppers were older adults, we were able to help the program find ways to address this attrition in a targeted way. While not every CS program needs to seek ways to engage every participant, using this framework can help programs understand who they are not engaging and why. This would help practitioners make programmatic decisions about how they could increase the contributions made by certain participants.

We also looked at the different ways participants initially learned about the program, or the mode of recruitment. We did not find any statistically significant relationship between the degree of participation and recruitment mode. Social media was the most prominent venue for informing potential participants about the program’s existence across all participation levels. This is similar to Crall’s () results, which showed that social media was the primary source for participant recruitment. Brouwer and Hessels () found that older people preferred a more direct and personal recruitment mode, whereas younger people were recruited via social media. However, although social media interaction is relatively passive, there is no indication that a more in-depth initial face-to-face interaction results in more active contributors to this program. For instance, Andow et al. () found that retention rates were the same regardless of whether participants were sent a recruitment letter (passive recruitment) or engaged directly with educational materials (active recruitment). These results tell us that social media is a useful recruiting tool for GO across participation degrees. Crucially, it does not affect whether a participant is likely to be an initial- or nibble-dropper. This might call into question the efforts made by the GO team and other CS programs to prioritize face-to-face interactions with potential participants as a key recruitment strategy, especially in light of the COVID-19 pandemic in 2020, which in many areas of the world limited the ability of CS programs to recruit and engage volunteers through in-person events.

Finally, we used the framework to look at degree of participation and motivations of participants at three different points: their initial motivations when they downloaded the app, their motivations for continued participation, and their reasons for ceasing participation (either after making some contributions or after making none at all). The majority of participants were initially motivated by a desire to contribute to NASA research. Hooked volunteers were statistically more likely to cite contributing to NASA research and science as their primary motivations for joining the program. This is in line with previous research by Geoghegan et al. (), Domroese and Johnson (), and Crall et al. (), who also found contributing to scientific knowledge to be a primary motivation.

GO recruitment efforts focus firmly on the link to NASA and the contribution to scientific research. There are some concerns within the program that GO may appeal less to the kind of underlying hobbyism that drives participation in other CS fields, such as birding, where a passion for the activity may constitute the primary motivation and the appreciation for research contribution follows. However, our findings do not bear out that concern; scientific research is a strong initial motivation across all participation categories. These results support the GO team’s efforts to focus on the science and the contribution to NASA in their communications and recruitment efforts, but they suggest that this may not be the most effective lever for retention strategies, as the motivation to contribute to science and NASA is not differentiated across the framework’s participation categories.

GO volunteers retain altruistic motivations as they continue to contribute. The hooked and nibbler participants had the same motivations for continuing to participate: They feel their contributions matter, and they have reached the point in the program where contributing has become routine. Other studies have found that motivations shift over time from more egocentric motivations (self-interest) to more altruistic ones (; ). We did not see this shift from egocentric to altruistic motivations, which is more in line with what Geoghegan et al. () found, namely that most participants did not feel their motivations shifted over time. Our findings might be explained by the nature of the CS program studied here and the nature of GO’s activities.

The top reason that initial-droppers, nibble-droppers, and hooked-droppers gave for ceasing contribution was that they signed up for a one-time event and did not feel there was a need to continue. Although one might expect a one-time event, such as the 2017 solar eclipse or the data challenges hosted by GO, to lead to an increase in nibblers who eventually become hooked, this was not the case. Consistent with Crall’s () results, these volunteers stopped contributing after participating in their event of interest: The key experience was the event (the 2017 eclipse), and GO was mainly a vehicle for engagement with it. These results have prompted the GO team to consider different tactics for keeping participants engaged after these events are over. Many of these ideas center on improved and increased communication with participants. However, our framework suggests another strategy: re-recruitment. If the event was the motivation, and the CS program was merely a way for participants to engage in the event, then those participants were never fully recruited into the CS program. This suggests that after an event of this nature, a secondary round of (re-)recruitment of those participants may help retention. It also suggests that practitioners may need to accept considerable attrition after events and acknowledge that event-based recruitment might yield a limited number of hooked participants.

Another common reason for nibble-droppers and hooked-droppers to cease participation in GO was that they felt they were not making a meaningful contribution. In the survey’s open comment section, some hooked participants indicated that they did not receive feedback and felt they were not doing a good job. Other studies have found that low self-efficacy (the feeling of not doing a good job of contributing to a program) made participants more likely to drop out (). Feedback and communication about participant performance were found to be positive motivating factors in other programs as well (). Participants were motivated to continue participating in programs in which scientists regularly offered feedback, provided research progress reports, and thanked them for their participation (; ; ; ). However, self-efficacy is only part of the sense of accomplishment; the other part is whether the CS program is actually meaningful from a research perspective. Reinforcing the program’s significance to scientific knowledge-building is, therefore, another mechanism for ensuring that participants know they are making a meaningful contribution to the program and to science.

In response to this finding, the GO team is considering various tactics to leverage social media and other online communication platforms to communicate with participants, provide feedback, and encourage them to continue contributing to the program after an event like a data challenge. This can also help people who sign up for a one-time event stay informed and motivated, believing that their contributions are still needed. Currently, the GO program provides feedback to the Cloud protocol participants through the satellite-matching emails, which send a message when a GLOBE Clouds observation is taken within 15 minutes of a satellite overpass. The team is actively exploring how to provide similar feedback for the other protocols.

By applying this framework to the GO program, we were able to capture the full dynamics of participation in the program—from people who never truly participated (initial-droppers) to those who are highly engaged (hooked). Using this framework allowed us to identify the distinct exit points from the program and to critically consider different ways of defining meaningful participation. The framework provided a structure to better understand the baseline rate of participation across the five participant segments, which can be used to track the program’s retention and recruitment trends longitudinally across the participation categories. Using the framework to explore the relationship between degrees of participation and different participant factors, we could also see how participants’ demographics may influence their participation. The framework also suggested new avenues for capitalizing on participant motivations in recruitment and retention initiatives within the GO program. This approach might be useful for other researchers and evaluators of CS programs and would then provide a functional basis for comparing participation dynamics across various CS programs.

Conclusion

The Nibble-and-Drop Framework was developed to provide a common language around typical recruitment and retention issues in volunteerism, translated to CS. Applying the framework to the GO program helped us assess whether the framework can capture the degrees of participation in a global, mobile, app-based CS program and help us understand the relationship between the dynamics of engagement and participation factors. Although we focused on categorizing participants based on their contribution of data to the program, we believe the framework can help programs categorize their participants based on other forms of contribution and systematically evaluate the dynamics and how those relate to different aspects of participation. This can help program practitioners to target their recruitment and retention initiatives to the specific needs of different participants, depending on which category of participation they would like to address. Over time, the framework will help programs develop reasonable expectations for a particular CS program’s participation dynamics, and track recruitment, retention, and retirement trends in more granular detail.

This framework also allows programs to categorize their participants and see where different demographics of participants drop out of a program or continue participating. Although we focused on age, education, and employment status, other demographics can be examined with this framework, such as race, ethnicity, and sexual orientation. As STEM disciplines look to a more diverse future, CS programs need to connect to a broader pool of participants. To do so, those in CS responsible for recruitment and retention need to understand participants on the long tail, i.e., participants who make minimal contributions (), to find ways to target these participants for retention. Understanding how participant demographics might systematically influence CS programs’ participation dynamics is an important step toward more effective strategies for building greater diversity in CS and STEM more broadly.

Contributory CS programs may define successful contributions differently. Some may opt to quantify scientific products (). Others may focus on how participants benefit through their engagement (). We chose to focus on contributed scientific data as a key program outcome because it is an integral element of GO. Many CS programs define active participation based on the quantity and distribution of data generated from the program. However, the applicability of the framework is not limited by how success is defined. It could be used with different metrics for contribution, for example, other scientific products such as data analysis (see ) or data quality. If the geographic distribution of data is vital to a program, researchers could quantify data collected in the desired area. It could also be applied to participant outcomes such as knowledge acquisition, community involvement, and/or interaction with scientists (), personal scientific research, learning outcomes, or taking on other roles in the program ().

In essence, the framework concerns itself with the arc of volunteering, and outcomes of volunteering are relevant only insofar as they influence that arc. For example, participants may cease their engagement in a CS program once they fulfill their personal objectives, irrespective of whether further volunteering may be fulfilling or seen as useful or relevant. Participants who engaged through an educational program may see their social contract with GO fulfilled when the educational program ends. Similarly, GO makes it easy for curious individuals to test whether the activities associated with the program appeal to them. Low barriers to entry may create a false impression that GO generates a high rate of dropping from the program, when in fact many who downloaded the app, or even made a few contributions, probably never really committed to the program in the first place. Trying it once was the equivalent of window shopping. We would expect a different finding from CS programs that require extensive training; window shopping, in that case, does not yet involve making a first contribution but more likely is associated with reading promotional material, watching or reading testimonials, or talking to active participants. The Nibble-and-Drop Framework’s flexible nature thus encourages practitioners to think broadly and inclusively about their program’s goals and objectives, nudges them to consider prioritizing outcomes beyond amassing a large dataset, and encourages them to define the arc of volunteering in alignment with the nature of the experience their CS program affords.

To ensure sufficient numbers of participants, programs are incentivized to market widely and create low barriers to entry. Although it is important to get the word out and to make a program look attractive, the side effect of effective marketing and low barriers is a high rate of nibblers and nibble-droppers. At the other end of the program participation life cycle, practitioners need to understand better how active participants become inactive, accept natural retirement from a program, and distinguish this phenomenon from participants who drop out for reasons that can be addressed through quality control and incentive systems. We hope the framework will prove to be a dynamic and robust tool that can be applied across many different types of CS programs and can facilitate innovative research on program participants and ultimately help programs improve retention rates—or at least make practitioners less worried about the constant need to replenish their volunteer base by recognizing it as a natural phenomenon of volunteerism itself (; ; ).

Supplementary File

The Supplementary File for this article can be found as follows:

Supplemental File 1

GO Participant Survey. DOI: https://doi.org/10.5334/cstp.350.s1