Introduction

While government-based natural resources monitoring is notoriously hard to fund and implement owing to a variety of political and practical challenges (), recent reviews suggest that citizen science has great potential to meet monitoring needs cost effectively (; ; ; ; ). Consequently, increasing numbers of citizen science programs and projects are striving to meet agency needs for monitoring data. For example, the US National Oceanic and Atmospheric Administration has formed a citizen science community of practice (), and the Scottish Environmental Protection Agency has published a guide describing when and how to use citizen science for environmental monitoring (). A recent survey of 83 citizen science projects found that more than 40% had generated data that were used by natural resource managers (). Yet, while citizen science is gaining legitimacy in decision making and in mainstream science, questions about the credibility of citizen science results are still common (e.g., ; Nature 2015; ).

Potential audiences such as scientists and resource managers must recognize the credibility of citizen science in order to use its data. But citizen science is tremendously diverse, and credibility standards vary by project. Thornton and Leahy () suggest that the typical academic signals of trust, peer review, and quality assurance are not sufficient for building trust in citizen science data. However, while citizen science groups have been urged to explicitly consider credibility (), practical guidance for what such consideration should entail is still limited.

Credibility is broadly defined as the quality of being believable or worthy of trust. For science (citizen or otherwise), credibility has both technical and social components. The field of science and technology studies (STS) consistently highlights the socially and politically contextualized nature of any scientific endeavor (e.g., ; ). New forms of science, including many kinds of citizen science, are not judged simply on technical attributes. They also require new social practices encompassing the governance, execution, vetting, and use of new science (). Therefore, a citizen science group may require many pathways to achieve the necessary credibility to imbue trust in its data (). The ways in which citizen science groups, like any scientists, perceive the social and technical challenge of credibility will impact the data-quality practices they put in place.

Data users within a regulatory or management setting look for signals of credibility accompanying traditional academic science, such as peer review or investigator reputation (). While rarely explicitly discussed, these signals are an important element in arenas where legal or constituent defensibility is a requirement (). It remains unclear whether, when presented with citizen-generated data, resource managers should seek the same, similar, or different signals of credibility, and the answer is probably context-dependent.

The credibility challenge specific to citizen science

While all science can face challenges to its credibility, the specific context of citizen science, along with external assumptions about citizen science, can make establishment of its credibility particularly difficult. Implementing science projects with volunteers, often outside traditional science institutions and typically with limited resources, contributes to the credibility challenge. Preconceptions about citizen science can also be an issue. For example, citizen science data can be distributed, hard to access, and may have incomplete metadata, presenting obstacles for their use (; ). However, many of these issues can be overcome with sophisticated methods of data analysis ().

Another common assumption is that volunteers collecting data have an agenda that will impact their ability to record objective scientific observations (; Nature 2015). While this is a valid concern for any type of science (; ), volunteers can learn to eliminate bias through training and experience (). A related assumption is that volunteers collecting data are untrained or unable to follow complex scientific protocols (). However, comparative studies have demonstrated that volunteer capabilities can equal those of professionals (e.g., ; ). In addition, some citizen science projects are designed specifically to tap into expertise outside of professional science, as with collaborative fisheries research (; ) or ethnobotanical studies (; ).

Precedents for credibility in citizen science

Some national-level communities of practice have set precedents for expectations of credibility-building activities in citizen science. For example, the water quality community long ago embraced citizen science as an effective means to keep track of the many water bodies important to local communities and the national water supply. In response, the US Environmental Protection Agency provides methods manuals, official protocols, and protocol certifications, and it endorses groups that follow its guidance (). Statewide networks have followed suit, such as the Surface Water Ambient Monitoring Program (SWAMP) in California, which provides technical assistance, training, quality assurance support, equipment loans, and communications help ().

America’s longest-running citizen science project, the Cooperative Weather Observer Program, was established in 1890, before the distinction between volunteer and professional scientist was commonly made (). Volunteer weather observer data are used by current weather forecasting models and help to calibrate technologies such as remote sensing and automatic weather stations. The high quality of these data is consistently demonstrated by the accuracy of weather predictions at the local scale, and the data allow detection of more localized microclimates than other technological approaches ().

The Data Observation Network for Earth (DataONE) includes citizen science as one of its main stakeholders when developing data management plans. Its work addresses common problems for all types of science: Dealing with different types of data, standards for format and metadata, and provisions for archiving, access policies, and eventual transition or termination of data. DataONE describes data management as a process that must occur at each stage of inquiry throughout the lifecycle of a scientific project. The DataONE citizen science working group agreed that the staged approach is appropriate for citizen science but has added some special prescriptions about the particularities of involving volunteers regarding data quality, data usability, and data accessibility (). Adherence to this guidance is one way to gain credibility.

The birding community–in particular, the data platform eBird–is often held up as the gold standard in citizen science credibility (; ). eBird’s enormous data set allows statistical methods to detect and accommodate volunteer error. For example, when spatial or temporal bias is detected, the project may direct volunteers to cover less popular times or places, or its analysts may statistically weight observations (). Large data sets also allow a high degree of filtering focused on suspect data–i.e., data from first-year participants, people who submit data erratically, or participants who are known to submit erroneous reports (). Big-data metrics also allow spatiotemporal modeling to flag and verify observations that fall outside expected patterns ().
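To make this kind of filtering concrete, the sketch below flags a submitted count that departs sharply from historical reports for the same species, site, and week of year using a robust z-score. This is an illustration only, with hypothetical function names and thresholds; it is not eBird’s actual quality-control pipeline.

```python
# Illustrative sketch only: a simplified spatiotemporal outlier flag in the
# spirit of the checks described above. Not eBird's actual pipeline; the
# function name and threshold are hypothetical.
from statistics import median

def flag_unusual_count(new_count, historical_counts, threshold=3.5):
    """Flag a count that deviates strongly from past reports for the same
    species, site, and week of year, using a robust (median/MAD) z-score."""
    if len(historical_counts) < 5:
        return False  # too little history to judge; route to expert review instead
    med = median(historical_counts)
    mad = median(abs(c - med) for c in historical_counts) or 1.0
    robust_z = 0.6745 * (new_count - med) / mad
    return abs(robust_z) > threshold

# A report of 40 birds where weekly counts have historically been 2-6 gets flagged:
print(flag_unusual_count(40, [2, 3, 5, 4, 6, 3, 2]))  # True
```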

Finally, a growing number of cases show that citizen science volunteers can maintain community data standards through self-policing. This is especially true in the open source software community, which has demonstrated that the benefits of crowdsourcing to science can far outweigh the costs. For example, the crowdsourced mapping platform OpenStreetMap is stronger than professional platforms and can address more kinds of geographic questions, largely owing to interactions among volunteers and a culture of correcting one another (). These large-scale projects showcase a set of well-respected strategies for gaining credibility within citizen science.

Here we examine credibility-building strategies across citizen science projects with different structures, priorities, and volunteer bases to illuminate what it means to balance a need for credibility with the realities of a volunteer-based program. By organizing practical approaches into a framework, we show pathways that projects can follow to achieve greater credibility and to combat monolithic assumptions about the nature of citizen science (). An understanding of how citizen science projects explicitly pursue the goal of credibility can be useful to both producers and users of citizen science products.

Methods

Our study focuses on state waters in the central coast region of California, stretching from Pigeon Point to Point Conception up to three nautical miles offshore, a region that is home to a network of 29 marine protected areas. The Marine Life Protection Act (MLPA) mandates monitoring to inform adaptive management of the network, and the state is meeting this mandate through a public-private partnership. The monitoring framework, adopted as state policy (), explicitly acknowledges the potential role for citizen science. Several citizen science groups participated as partners in early monitoring efforts, but many others are operating in the region. Understanding the credibility-building strategies of these citizen science projects is one piece of a broader effort to understand the region’s citizen science capacity and its potential to inform decisions about California’s oceans ().

To inventory project strategies, we first compiled a roster of citizen science groups through a comprehensive Google search followed by referrals from our initial contacts (snowball-style), yielding a total of 30 groups. Once new contacts stopped yielding additional suggestions, we considered the roster complete. However, we recognize that citizen science groups, and projects within groups, may come and go over time.

We then conducted semi-structured phone interviews, each lasting between 30 and 90 minutes, with project coordinators from each of the 30 groups. We also held seven focus groups with ten of the groups that were willing and able to organize in-person meetings (the Elkhorn Slough groups participated in a combined focus group). Interview and focus group protocols addressed program goals and the ways that leaders guide their programs to meet those goals, including tradeoffs, challenges, and priorities. (Our interview protocols are available in Supplementary Materials.)

We transcribed and coded the interview and focus group data using the Dedoose software package. We used a grounded theory analysis, which relies on several rounds of coding and annotation to identify key themes emerging from the data (). This process highlighted data credibility–with related concepts of rigor and trust–as an important challenge that citizen science groups face in developing partnerships with managers, scientists, and other potential users of their data. Although every project coordinator referenced the concept of credibility during initial conversations, we asked several coordinators additional questions about data quality to provide greater detail for our analysis of credibility strategies.

Next we held a two-day workshop with representatives of 18 of the 30 identified citizen science groups and six resource managers identified through interviews as key data users. Many workshop sessions directly focused on credibility and built on results from the interviews and focus groups. Each discussion group had at least two note takers, and notes were later coded and analyzed along with our interview and focus group results. Workshop discussions reinforced the importance of credibility strategies and helped to build our credibility strategy framework.

Finally, the authors inductively developed a credibility strategies framework. We did not seek universal agreement on the categories used in the framework; rather, the categories were developed internally as a result of our grounded theory analysis. However, while opinions about appropriate categories of credibility strategies may vary, we are confident that our data accurately reflect practices of the citizen science projects that we examined. Results (see Table 1) were vetted by workshop participants and by other project leaders who were not able to be present.

Table 1

Summary of credibility-building strategies and related context of 30 citizen science groups working in the Central Coast of California. Symbols in each column are explained in detail in the text, but each activity column was either Y/N for yes or no regarding whether the activity exists within the project or H/M/L/N for high/medium/low/no indicating the level of the activity. Each context column is Y/N for yes or no in answer to the question, S/M/L for small/medium/large depicting the size of a program component, or G/I for group or individual activity.

Credibility-building strategy columns are grouped by research stage – early actions (prior expertise, training, science advising), in the field (ranking system, in-person oversight, re-training, technological aids), and in the office (validation of observations, cross comparison, publication, management use, quality assurance) – followed by a total column and the context-for-strategies columns.

Group | Prior expertise | Training | Science advising | Ranking system | In-person oversight | Re-training | Technological aids | Validation of observations | Cross comparison | Publication | Management use | Quality assurance | Total | Sole source of data? | Institutional affiliation | Size of volunteer pool | Group vs. individual | Time commitment
Beach Watch | N | H | Y | N | N | optional | N | Y | N | N | Y | N | 5 | Y | Y | L | G | M
BeachCOMBERS | N | H | Y | N | N | optional | N | Y | Y | Y | Y | N | 6 | Y | Y | M | G | M
Beachkeepers | N | N | N | N | N | N | N | N | N | N | Y | N | 1 | Y | N | L | G | L
Black Oystercatcher Monitoring | Y | N | Y | N | N | N | N | N | N | Y | Y | N | 4 | Y | N | M | G | M
Blue Water Task Force | N | L | Y | N | Y | N | N | N | Y | Y | Y | Y | 7 | N | N | L | G | M
CA King Tides | N | N | N | N | N | N | Y | N | N | N | N | N | 1 | Y | N | L | I | L
CCFRP | N | L | Y | N | Y | N | N | Y | N | Y | Y | N | 6 | N | Y | L | G | M
CWC (First Flush) | N | M | Y | N | N | N | Y | N | N | N | Y | Y | 5 | N | N | M | G | L
CWC (Urban Watch) | N | M | Y | N | maybe | optional | Y | N | N | N | Y | Y | 7 | N | N | S | G | L
Elkhorn Slough (otters) | Y | L | Y | N | N | N | Y | N | Y | Y | Y | N | 7 | N | Y | S | G | M
Elkhorn Slough (algae) | Y | L | Y | N | Y | N | N | N | N | Y | N | N | 5 | N | Y | S | G | M
Elkhorn Slough (nestboxes) | N | M | Y | N | N | N | N | N | N | Y | Y | N | 3 | N | Y | S | I | H
Elkhorn Slough (shorebirds) | Y | L | Y | N | Y | N | N | N | Y | Y | Y | N | 6 | N | Y | M | G | L
Grunion Greeters | N | M | Y | N | maybe | N | N | N | Y | Y | Y | N | 5 | N | Y | L | G | M
iNaturalist | N | N | N | Y | N | N | Y | Y | N | Y | Y | N | 5 | N | Y | L | I | L
Jellywatch | N | N | Y | Y | N | N | Y | Y | n/a | Y | Y | N | 6 | Y | Y | L | I | L
Leatherback Watch | N | N | Y | Y | N | N | Y | Y | N | Y | N | Y | 6 | N | N | S | I | L
Lighthawk | Y | N | Y | n/a | n/a | n/a | Y | n/a | n/a | Y | Y | n/a | 5 | Y | N | S | I | M
LiMPETS | N | M | Y | Y | Y | Y | N | Y | Y | Y | Y | N | 8 | N | Y | M | G | M
Marine Debris Action Teams | N | M | N | N | Y | N | N | Y | Y | Y | Y | N | 6 | N | Y | M | G | M
Marine Debris Tracker | N | N | N | N | N | N | Y | Y | N | Y | Y | N | 4 | N | Y | L | I | L
Monterey Bay NMS VMP | N | M | Y | N | maybe | N | Y | N | N | Y | Y | Y | 7 | N | Y | L | G | H
Morro Bay NEP VMP | N | L | Y | N | Y | N | Y | N | Y | Y | Y | N | 6 | N | Y | M | I | M
MPA Watch | N | L | Y | N | N | N | Y | Y | n/a | N | N | N | 4 | Y | N | M | I | H
Phytoplankton Monitoring Program | N | M | Y | N | N | N | N | Y | Y | Y | Y | N | 6 | N | Y | M | I | L
REEF | N | L | Y | Y | N | optional | N | Y | Y | Y | Y | N | 8 | N | N | L | I | L
Reef Check CA | Y | M | Y | Y | Y | Y | N | Y | Y | Y | Y | N | 10 | N | N | L | G | H
Seabird Protection Network | N | L | Y | Y | N | N | N | Y | N | Y | Y | N | 6 | N | N | L | G | M
Shorebird Monitoring (Morro Bay) | N | L | Y | N | Y | N | N | N | N | Y | Y | N | 5 | N | N | M | G | L
SPLASH | N | L | Y | N | N | N | Y | Y | N | Y | Y | N | 6 | Y | N | L | I | L
% employing strategy | 20 | 73 | 83 | 23 | 40 | 20 | 43 | 50 | 37 | 80 | 87 | 17 | 5.5 (average) | | | | |
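As an illustration of how the “% employing strategy” row can be reproduced from Table 1, the short sketch below counts the share of projects with any value other than “N” or “n/a” in a strategy column. That counting convention (which treats “optional” and “maybe” as employing a strategy) is our assumption for the example, and only two of the 30 rows are shown.

```python
# Minimal sketch for reproducing the Table 1 summary row. The counting
# convention (anything other than "N"/"n/a" counts, including "optional"
# and "maybe") is assumed for illustration; only two example rows are shown.

strategies = ["prior expertise", "training", "science advising", "ranking system",
              "in-person oversight", "re-training", "technological aids",
              "validation", "cross comparison", "publication",
              "management use", "quality assurance"]

projects = {
    "Beach Watch": dict(zip(strategies, "N H Y N N optional N Y N N Y N".split())),
    "iNaturalist": dict(zip(strategies, "N N N Y N N Y Y N Y Y N".split())),
}

def percent_employing(strategy, rows):
    employing = sum(1 for row in rows.values() if row[strategy] not in ("N", "n/a"))
    return round(100 * employing / len(rows))

for s in strategies:
    print(f"{s}: {percent_employing(s, projects)}%")
```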

Results

From our survey of 30 citizen science groups we identified 12 distinct strategies for demonstrating credibility. We included only activities that constitute a formal component of a project and which project leaders reported built credibility (future studies will have to verify with data users how much the strategies actually worked). We did not evaluate or include in our framework such informal factors as developing close personal ties with data users or personal reputation of project staff. While factors such as these may be important in developing credibility, they were not amenable to analysis by our methods and would require further study.

We grouped credibility strategies around three stages of a research project: Planning (early actions), data collection (in the field), and data analysis and interpretation (in the office–see Table 1). Explanations of each strategy shown in the table are provided in the following section. We also identify and discuss five contextual and programmatic factors that may be related to patterns observed in the credibility strategies of each of the projects. Note that group leaders did not feel that employing more strategies was in itself a good way to increase credibility (no one, for example, strove to achieve all 12 options). Therefore, the table should not serve as a checklist of prescribed credibility-building strategies that a project “must” follow.

Strategies for demonstrating credibility

Our framework for establishing credibility follows the suggestion from the DataONE Data Lifecycle concept () to separate strategies by stage of the research process in which they occur. We identified three important stages that should be addressed and then identified common credibility-building strategies within each stage.

Early actions

Early actions are steps taken to increase credibility before any data are collected. They include working with volunteers to ensure that they can successfully complete required tasks and structuring methods to best answer the scientific questions at hand. Decisions about how to apply these strategies have direct consequences for the kinds of science that volunteers can do and the availability of qualified volunteers. For example, more stringent expectations of volunteers allow for more complex protocols but decrease the size of the potential volunteer pool.

  1. Prior expertise – The expectations that project leaders have of volunteers—in terms of skills or knowledge—when they join the program might set a barrier to entry. In the table, “yes” indicates that a project has a formalized minimum standard that volunteers must meet in order to participate.
  2. Training – Projects often train their volunteers on the project protocol and logistics and sometimes offer substantial training in the form of classes, readings, and online materials. In the table, an “H” designation represents required training with a substantial time investment (a course or apprenticeship stage), while “none” indicates that training is not required. “L” and “M” fall in between.
  3. Science advising – Scientific advice during the project development stage can help strengthen methods, tailor data to their intended use, and ensure standard practices in the field. Advice may come in the form of a partnership with a university lab, a science advisory team, or other formal arrangement. A “yes” in the table refers to any of these options.

In the field

Credibility strategies implemented while volunteers are collecting data tend to focus on individual data point quality. Two of these strategies, in-person oversight and technological aids, involve real-time mediation of data collection. These strategies can take advantage of a diverse volunteer pool by leveraging skills from long-term volunteers, local experts, and those savvy with technology to assist the new or less-skilled volunteers in learning. Other strategies in this category track volunteer learning and give formal credit to volunteers as they improve the quality of their data.

  1. Ranking system – Volunteers join projects at different levels and add skill through experience in a project. Increased numbers of designated “experts” can instill trust in the project’s data. Projects designate volunteers as experts once they cross hurdles such as tests, tenure in the program, or trainings attended (any of which are indicated as a “yes” in the table).
  2. In-person oversight – Many data errors happen in the field. To address this, some projects designate staff, science partners, or “expert” volunteers to directly oversee data collection (indicated as a “yes” in the table).
  3. Retraining – Opportunities for continuing education can advance volunteer skill through classes, online trainings, readings, and other training opportunities. In the table, “yes” indicates that continuing education is required, “optional” indicates that it is available but not required, and “no” indicates that no further training is available.
  4. Technological aids – Challenging forms of data collection can be simplified and streamlined using technology. For example, technology can automate location recording, photo-based validation, or water quality sensing. “Yes” in the table indicates that technology simplifies data collection in some way.

In the office

We observed the highest number of credibility strategies in the later stages of project implementation. Most of these strategies are designed to improve the reputation and therefore the credibility of the project as a whole. Here, opportunity for outside review plays a key role in contextualizing citizen science among a community of scientific peers through publishing peer-reviewed articles, enrolling data users, and employing disciplinary best practices.

  1. Validation of observations – Many projects conduct checks for human error and answer questions about species identification or other difficult evaluations. Validation can range from ensuring that data sheets are complete and legible to statistics-driven flagging of possibly incorrect data and expert identification of voucher samples (all indicated as “yes” in the table).
  2. Cross-comparison – Side-by-side comparisons of citizen science data with data collected by trusted professionals can document credibility of methods and data while demonstrating that volunteers can collect data accurately. Cross-comparison requires an existing data source with which to compare. “Yes” in the table indicates that a program participated in such a comparison.
  3. Publication – Academic peer review puts the research through the same gauntlet of critique as research conducted by professional scientists. Publications may be written by project staff or by other scientists using the data. A less common, but growing, strategy is peer-reviewed and published data sets. Any variety of publication is indicated as “yes” in the table.
  4. Management use – Managers who use citizen science data to inform their decision-making are expressing trust in the data. “Yes” in the table indicates that managers used data produced by the citizen science group.
  5. Quality assurance protocol – For some topics, standard quality assurance protocols are a required part of scientific practice in order to calibrate methods, technology, and practice over time. For citizen science, these protocols also certify volunteer capability in addition to the methods. “Yes” in the table indicates that a QA protocol is required.

Program structure and context

Columns at the far right of the table show some factors that project staff suggested might be related to a project’s choice in employing a particular mix of credibility strategies.

Sole source of data

The “sole source of data?” column in the table refers to whether data provided by the group are available through any other source. If a group produces the only data on a given subject, then the acceptable standard of quality may be lower than if there are many established groups in the field (). For instance, sightings data from JellyWatch are often the only indication of ephemeral jelly blooms that otherwise receive little research attention or funding but are of high concern for ocean health (). In another example, MPA Watch tracks recreational uses of marine protected areas, a topic for which there are no standard quality assurance protocols, so in-house data quality checks are the best they can do. Most of the projects in our census that utilize few credibility strategies are also the sole source of the kind of data they are producing. These projects are trying something new, and therefore certain strategies for credibility that require scientific partnerships or comparisons are not yet available to them. Therefore, their data may have great value by virtue of the fact that they exist – in the future such projects may increase their data quality with greater time and resources, but for potential data users today, it’s these data or nothing.

Institutional affiliation

The “institutional affiliation” column in the table refers to whether a group is officially affiliated with a larger institution such as a university, government agency, or museum, and therefore may benefit from institutional support. According to participants at our workshop, affiliation often comes with support for grant writing and management from a budget office, statistics software and expertise from partner departments, and communications support from a news office. They also suggested that larger institutions may have an established reputation in the scientific community which furthers the credibility of any citizen science taking place under the umbrella of the larger institution.

The projects with the most credibility-building activities were not affiliated with a university, government agency, or museum (most notably REEF and ReefCheck, both diver-volunteer programs). Some leaders of these groups reported feeling the need to strengthen their program in response to external assumptions of questionable credibility. Some leaders for projects with an intermediate number of credibility activities felt that institutional support alleviated the pressure to add more.

Size of volunteer pool

The “size of volunteer pool” column in the table characterizes the number of volunteers in three categories (Small: < 20 people; Medium: 20–100 people; Large: > 100 people). This number matters especially for credibility strategies focused on individual volunteers, such as training or in-person oversight, because more volunteers increase the resources needed to implement these strategies.

The three projects in our census with the most credibility-building activities have large volunteer pools. During focus groups, volunteers frequently discussed the need to accommodate a volunteer pool with a range of capabilities. A larger volunteer pool likely includes a wider range of volunteer capabilities, and therefore demands more credibility strategies to accommodate the lower-performing volunteers.

Group versus individual

The “group vs. individual” column in the table refers to whether volunteers collect data in organized groups or as a solo activity. Solo activities are often designed to take advantage of a volunteer’s normal routines, such as taking walks on the beach or SCUBA diving on a family vacation. In-person oversight would fundamentally alter the data-collection activities of these projects, and solo formats also make group activities such as trainings more difficult to coordinate. In these cases, credibility must come from strategies focused on the project as a whole more than from strategies focused on individual volunteers.

All but two of the projects in our sample that are designed for solo data collection are app-based. With the exception of REEF, these projects also employ few credibility-building activities. They generally prioritize fun in order to recruit and retain volunteers, so they minimize the number of tasks that are tedious, often by building them into an app. The app-based platforms help with data management and ensure that more complex data are consistently measured (like GPS coordinates), demonstrating the important intersection between technology and citizen science outcomes.

Time commitment

Finally, “time commitment” in the table refers to the time required of the typical volunteer: “L” refers to one-time events; “M” refers to a medium time-commitment by volunteers such as a short season; and “H” refers to a large commitment such as year-round data collection. One might expect that programs requiring only a low time commitment would avoid resource-intensive credibility strategies focused on individual volunteers, as the expected return on that investment would be low.

Time commitment expected of volunteers varies widely, sometimes within a single project. Some participants might come to all possible events while others pick and choose depending on their schedule. Our census did not reveal a solid connection between average time commitment and the number of credibility-building activities; variation within a single project might explain why.

Two contrasting examples

To further illustrate credibility-building strategies we next describe two projects with a similar number of strategies but very different approaches based on the focus of their work and the volunteer experiences that they strive to create. This comparison helps to demonstrate the importance of scientific context and project structure in understanding how citizen science groups approach the challenge of signaling credibility. The examples also highlight some of the tradeoffs involved in deciding how many and which kinds of credibility-building strategies to employ.

iNaturalist

iNaturalist, based at the California Academy of Sciences, relies upon broad participation from a wide variety of people to create a global database of biodiversity observations. Through its web- and app-based platform, anyone can sign up, take a picture of any organism, and report it, regardless of expertise. Most contributions come from smartphones, whose photos automate and standardize location and time records. Users can also set up projects within iNaturalist, asking additional questions about observations such as weather conditions or animal behavior.

iNaturalist uses five strategies to demonstrate credibility: Ranking system, technological aids, validated observations, publication, and management use. The large number of observations contributed by a large volunteer base helps tease the signal from the noise, and data from different volunteers at the same location can be compared. Verification also happens through discussion of observations among users. When no designated expert is available to definitively identify an observation, consensus among at least two-thirds of identifiers can elevate an observation to “research grade.” All such data are published to the Global Biodiversity Information Facility (GBIF), from which researchers around the world frequently download data for biodiversity analyses.
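The consensus rule described above can be sketched as follows. This is a simplified illustration of the two-thirds agreement step only; iNaturalist’s actual “research grade” criteria involve additional checks (such as photo, date, and location), and the function name here is hypothetical.

```python
# Simplified sketch of the two-thirds consensus rule described in the text.
# iNaturalist's real "research grade" criteria include further checks; this
# shows only the agreement step, with a hypothetical function name.
from collections import Counter

def research_grade_taxon(identifications):
    """Return the agreed taxon if at least 2/3 of identifiers agree, else None."""
    if len(identifications) < 2:
        return None  # a single identification is not enough for consensus
    taxon, votes = Counter(identifications).most_common(1)[0]
    return taxon if votes / len(identifications) >= 2 / 3 else None

# Two of three identifiers agree, so the observation reaches consensus:
print(research_grade_taxon(["Pisaster ochraceus", "Pisaster ochraceus",
                            "Pisaster giganteus"]))  # Pisaster ochraceus
```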

Observational data are where much of ecological science begins, and volunteers gain skill through participation. The leaders of iNaturalist are open about the limitations of opportunistic observations submitted by users but maintain their importance in global ecology. Volunteers are encouraged to be honest about their capabilities – i.e., to report observations as unknown rather than guessing at identifications. In addition, the social nature of the platform allows users with more expertise to lend credibility to the data by identifying unknown observations and verifying others. Developers are currently working on features that will support additional kinds of data collection such as transects or effort reporting.

Because iNaturalist relies on a large number of volunteers distributed throughout the world, checks on credibility are loaded at the end of the data-collection process. Rather than ensuring that all observations are accurate, the system works to identify high quality data from within the larger pool of contributions. This means that many observations will never be used. However, it also means that anyone with access to the requisite technology can participate regardless of prior knowledge, and that anyone can take advantage of the educational opportunities that iNaturalist provides through access to a community of online experts. The program requires very little commitment from volunteers and does not need to invest heavily in individual volunteers to sustain the model.

BeachCOMBERS

Conversely, BeachCOMBERS begins to build credibility at the front end of the project by investing heavily in individuals through training and one-on-one attention, relying on a carefully structured protocol and experimental design to ensure success. BeachCOMBERS volunteers walk designated beaches and record detailed information on beach-cast birds, mammals, and tar balls. They also help local researchers with short-term projects, recording other kinds of data or taking samples from the birds. The bird data are used primarily by the project to write scientific publications focusing on birds as an indicator of ocean health.

BeachCOMBERS employs six credibility strategies: Training, science advising, validation, cross-comparison, publication, and management use, plus optional retraining. Joining BeachCOMBERS requires first attending an 80-hour course including classroom lessons and in-the-field practice sessions. Volunteers must commit to participating for at least two years, a barrier that significantly limits the number of potential volunteers. However, volunteers tend to stay longer than their initial commitment and report a sense of camaraderie with their training cohort.

BeachCOMBERS doesn’t rely solely on strict training of participants, however; it also includes a series of checks on data once they’ve been collected. Volunteers have close connections to local experts and program leaders, so they can send pictures of their observations when they have questions. The science advisory team and neighboring beach-cast bird citizen science programs stay in close contact to maintain a high level of review of the program, its methods, and future opportunities. This broader community has helped to pioneer the technique of using beach-cast birds as indicators of ocean health, and the project has thereby gained credibility by creating useful, novel methods for understanding and monitoring the Pacific Ocean within a highly structured and rigorous program.

Discussion

DataONE recommends that all research should address credibility at each stage of the scientific process (). Examining our table of findings indicates that this advice is both tractable and actually heeded across a broad range of project structures: 87% of projects do at least one credibility-building activity in the “early actions” category; 87% do at least one “in the field,” and 97% do at least one “in the office.” However, some stages are easier than others for implementing credibility strategies, as reflected in the popularity of strategies in “early actions” and “in the office.”
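The stage-coverage figures above can be computed from Table 1 by grouping the twelve strategies by stage and counting a project as covered if any strategy in that stage has a value other than “N” or “n/a.” A brief sketch, under the same assumed conventions and row format as the earlier summary example:

```python
# Sketch of the stage-coverage calculation: the share of projects with at
# least one credibility activity per stage. Uses the same row format and
# counting convention assumed in the earlier Table 1 example.

stages = {
    "early actions": ["prior expertise", "training", "science advising"],
    "in the field": ["ranking system", "in-person oversight", "re-training",
                     "technological aids"],
    "in the office": ["validation", "cross comparison", "publication",
                      "management use", "quality assurance"],
}

def stage_coverage(rows):
    """rows: dict mapping project name -> dict of strategy -> Table 1 value."""
    coverage = {}
    for stage, columns in stages.items():
        covered = sum(1 for row in rows.values()
                      if any(row[col] not in ("N", "n/a") for col in columns))
        coverage[stage] = round(100 * covered / len(rows))
    return coverage
```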

Indeed, few citizen science groups in our sample have implemented significant strategies for credibility in the field. Programs that do so commented on the commitment involved in sending staff out with each group of volunteers. Often projects do not have sufficient staff to go around. To address this problem, one project leader emphasized “I’m only a phone call away” in case volunteers at different sites have questions at the same time. Another group considered transitioning to app-based data reporting to improve reporting accuracy, but concluded that the cost of the technology and the investment in smartphone training for its mostly older volunteers would be prohibitive.

One set of credibility activities seems to be favored by a majority of programs: Training, scientific advising, publication, and management use. These are likely the most logistically feasible strategies, and some offer benefits in addition to credibility. For example, scientific publications and use of data for management motivate volunteers who can see the results being used in a tangible way.

These additional benefits also demonstrate the potential for feedback loops linking credibility strategies. For example, if managers use the data, thus raising the profile and impact of the program, more volunteers may be motivated to join. With increased volunteer demand, the program can institute stronger training or requirements for joining. These same managers might also motivate, or directly request, more directed scientific review of the methods or data interpretation to better fit their agency’s needs. Increased credibility through all of these activities might then enroll more managers to use the data, completing the feedback loop.

In short, the number and types of credibility-building activities both influence and are influenced by program structure. This constitutes a tradeoff in decision-making and program planning that project leaders must balance.

Conclusions

Our investigation identified twelve strategies to ensure and/or demonstrate the credibility of citizen science. According to our interviews, some of these strategies mirror the precedents set by established, high-profile programs such as eBird and the Cooperative Weather Observer Program; some mirror industry standards; and others work with existing resources to create uniquely tailored credibility-building systems.

While it is important to pursue standards related to credibility in citizen science, it is crucial that we avoid monolithic thinking in this endeavor. Each group should consider its context and goals in deciding what strategies to employ. In particular the number of volunteers, the group nature of the activity, and the time commitment expected of participants in the program each play an important role. For example, a training requirement would drastically alter the iNaturalist program. For BeachCOMBERS, however, training is essential for development of adequate expertise and the program’s culture. Building credibility, which has both technical and social components, is a dynamic process with built-in feedback loops.

Just as citizen science projects must balance priorities internally, they also must work with potential users to establish shared expectations around credibility. Ultimately credibility is in the eyes of the data user, and regular communication as part of a relationship is critical in navigating the tradeoffs associated with employing credibility strategies. Explicitly planning credibility expectations and performance helps move past the simplistic question of whether or not citizen science is credible into how it can be credible and for what purpose.

In our workshop, citizen science leaders expressed a desire to be held to the same credibility standards as academic science. Thus the practices and expectations described will become increasingly important as the popularity of citizen science continues to grow and citizen science becomes a more common approach to understanding the world around us.