Introduction

On July 17–18, 2017, the National Science Foundation funded a workshop in Raleigh, North Carolina, on the topic of “Filling the ‘Ethics Gap’ in Citizen Science.” The workshop was motivated by the recognition that citizen science has become increasingly prevalent across a wide range of fields, yet relatively little work has engaged with its ethical dimensions. The workshop explored a variety of ethical issues related to citizen science, including concerns about treating citizen research participants with appropriate respect; avoiding unintended harms; creating appropriate oversight mechanisms; navigating concerns about data quality, privacy, and ownership; and creating equitable opportunities for a wide range of people to participate.

In addition to these narrowly defined ethical issues, however, several recurring themes at the workshop involved broader conceptual concerns about whether citizen science has features that make it unlikely to meet standards of sound scientific practice. Specifically, academic scientists and community members involved in citizen science projects felt that their work was hampered by widespread perceptions that it could be lacking in rigor or quality. These concerns are significant because it is ethically problematic when the quality of scientific work is severely misjudged. On the one hand, when low-quality research is accorded too much respect, one result can be poor-quality decision making and a waste of scarce resources that could have been used to support better studies. On the other hand, when high-quality research is inappropriately dismissed or prevented from taking place, decision makers can be deprived of valuable information that could benefit society.

The philosophy of science has long been concerned with questions about what distinguishes science from non-science or pseudoscience and what qualities are essential for good scientific practice. Thus, scholarship in this field has the potential to shed light on these concerns about citizen science. In this essay, we examine three prominent concerns that were raised at the workshop, and we show how the philosophy of science can provide resources for addressing them:

  1. The worry that, because citizen science often focuses on collecting large quantities of data, it can become an uninteresting exercise in “stamp collecting” rather than a properly scientific, hypothesis-driven enterprise;
  2. The worry that the data collected by citizens and the methods used in citizen science may not be of high quality;
  3. The worry that citizen science participants may fall into advocacy rather than maintaining a properly disinterested approach to their work.

While one or more of these concerns may pose a genuine problem in particular cases, this paper argues that none of them provides an in-principle reason for questioning the quality of citizen science in general.

Concern #1: Data-Intensive versus Hypothesis-Driven Science

One worry is that citizen science tends to focus on the collection of data, and in many cases, these data are not associated with a particular scientific hypothesis. In some cases, citizen science does incorporate specific hypotheses, such as when professional scientists collaborate with citizens to perform epidemiological studies of the effects of pollutants in their communities (e.g., ). In other cases, however, citizen scientists collect information without a specific hypothesis (or even a clear research question) in mind. Instead, they collect information about specific phenomena of interest to them or to regulators or monitoring agencies. For example, they might collect data about the presence of particular species in specific locations at particular points in time (; ), or they might collect information about the environmental quality of the air or water near them (; ), or they might monitor local temperatures or precipitation (). Critics sometimes dismiss these kinds of studies, whether they are performed by citizens or professional scientists, as uninteresting “fishing expeditions” or exercises in “stamp collecting” (; ).

A recent essay by two program officers at the National Institute of Environmental Health Sciences (NIEHS), Liam O’Fallon and Symma Finn (), helps to sharpen these concerns. They identify important differences that they perceive between community-engaged research (which they view as being driven by academic scientists in most cases) and citizen science (which they view as being driven by citizens). They argue that community-engaged research is typically driven by a scientific question associated with the principal investigator’s expertise and interests. In contrast, they observe that citizen science is more likely to be driven by practical concerns, such as the desire to collect information about harmful environmental exposures. While O’Fallon and Finn do not use this observation to denigrate the importance of citizen science, it is easy to see how one could conclude that citizen science fails to meet standards of good scientific practice. Given that many accounts of scientific method revolve around the proposal and testing of hypotheses, the potential for citizen science to focus on collecting information without a clear hypothesis or research question seems to relegate it to a lesser status.

Despite the initial plausibility of this perspective, recent literature in both the history and philosophy of science challenges the idea that hypothesis-driven science is always of higher value than other approaches to science. First, the history of science illustrates that very important scientific advances were made by investigators who did not see themselves as functioning in a straightforward mode of proposing and testing hypotheses. For example, Isaac Newton insisted that his methodology was not based on the proposal of hypotheses but rather on inductive generalizations from the available evidence (; ). Newton’s suspicion of proposing and testing bold hypotheses was typical of most scientists throughout the eighteenth century. Natural philosopher Émilie du Châtelet stood out as an exception to most eighteenth-century thinkers when she defended the use of hypotheses and contended that Newton himself actually formulated hypotheses (). It was not until the late nineteenth century that the “method of hypothesis” gained dominance, and then it received further support from the ideas of twentieth-century thinkers like Karl Popper (; ; ).

A further problem with the notion that hypothesis-driven science is more valuable than other approaches to science is that, according to a number of contemporary philosophers, science is best seen as an iterative endeavor that moves back and forth between multiple activities or “modes” (e.g., ; ; ; ). On this view, scientists engage in multiple activities that are typically not arranged in a linear fashion (). According to one philosophical account, scientists engage in at least four different “modes” of research: (1) exploratory inquiry; (2) technology-oriented research; (3) question-driven research; and (4) hypothesis-driven research (). Thus, scientists move iteratively between different practices as they seek to understand phenomena. Initially, they may have general questions that result in exploratory efforts to look for important patterns, regularities, or structures. To engage in this exploratory inquiry, they may have to develop new technologies (e.g., tools or methods). Over time, they are often able to narrow their questions, propose underlying mechanisms responsible for the patterns that they observe, and develop precise hypotheses that can be tested. As they study these mechanisms and test these hypotheses, they are often launched into new exploratory inquiries and succeeding rounds of questioning, technology development, and hypothesis generation ().

The upshot of these philosophical accounts of scientific practice is that one cannot evaluate the quality or appropriateness of specific scientific practices without considering how they are situated within a broader network of research activities (). Thus, even if most citizen science is not explicitly designed for testing hypotheses, it can generate questions and facilitate exploratory inquiries that turn out to be very valuable scientifically. For example, ecologists are becoming increasingly interested in studying the changes that occur at large spatial and temporal scales in response to development, pollution, habitat loss, and climate change (; ). Unfortunately, professional scientists often have difficulty finding the time and money to engage in the amount of data collection needed to answer these questions. Faced with this challenge, ecologists are finding monitoring data collected partly or wholly by citizen scientists to be an extremely valuable source of information ().

Concern #2: Quality of Data and Methods

Another recurring concern about citizen science projects is that the quality or integrity of their data and the sufficiency of their methodologies could be compromised by the involvement of non-professional scientists (). From a certain perspective, concerns about data quality and methodology are reasonable. Professionalization and long-term education in particular disciplinary frameworks are important processes through which scientists develop expertise for producing high-quality knowledge. Moreover, institutional incentives and disincentives (such as pressures to maintain one’s funding sources, employment, and reputation) provide important checks on the behavior of professional scientists. Relying on the data and work of non-professional scientists bypasses a large part of our institutional systems for quality assurance. Thus, a certain amount of care and attention to the quality of citizen science is appropriate.

Nevertheless, an important body of philosophical literature suggests that the quality of scientific work (including citizen science) cannot be adequately evaluated without considering the aims for which it is produced (e.g., ; ; ; ). For example, when deciding how to model complex phenomena such as climate change, researchers must consider what questions they are trying to answer, at what levels of spatial and temporal resolution, and with what levels of accuracy (). Similarly, when risk assessments are produced for the purposes of making regulatory or policy decisions, scientists have to balance the goal of producing reliable results against the goal of obtaining results in a timely fashion. In some cases, social costs could be minimized if scientists used less accurate assessment methods that generated results much more quickly (and thus facilitated faster regulatory decision making) but with a modest increase in false positives or false negatives (). In the same way, when research is geared toward identifying potential public-health threats faced by communities, it might be appropriate to accept data or methods that are more prone to false positive errors in order to minimize the potential for false negatives (; ). The approach of using data and methods that are appropriate for the specific purposes for which they will be used is especially important when scientists or citizens are engaging in exploratory studies that can provide impetus for subsequent investigations.
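To make the trade-off between false positives and false negatives concrete, consider a toy illustration (the readings, thresholds, and hazard labels below are invented for exposition, not drawn from any cited study). In this minimal Python sketch, lowering the level at which a site is flagged as potentially hazardous reduces false negatives at the cost of additional false positives:

```python
# Hypothetical pollutant readings (in ppm) from ten sites, paired with
# whether each site is in fact hazardous. Purely illustrative data.
readings = [
    (2.1, False), (3.8, False), (4.4, False), (2.9, False), (3.1, False),
    (4.8, True), (5.6, True), (4.2, True), (6.1, True), (3.3, True),
]


def error_rates(threshold):
    """Count false positives and false negatives for a given flagging threshold."""
    false_pos = sum(1 for value, hazardous in readings
                    if value >= threshold and not hazardous)
    false_neg = sum(1 for value, hazardous in readings
                    if value < threshold and hazardous)
    return false_pos, false_neg


# A strict threshold misses more real hazards (false negatives), whereas a
# precautionary threshold flags more harmless sites (false positives).
for threshold in (5.0, 3.5):
    fp, fn = error_rates(threshold)
    print(f"threshold={threshold}: false positives={fp}, false negatives={fn}")
# threshold=5.0: false positives=0, false negatives=3
# threshold=3.5: false positives=2, false negatives=1
```

Which balance is appropriate depends on the purpose at hand: a community screening effort aimed at catching potential hazards early may reasonably tolerate false positives that a regulatory determination would not.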

Given this philosophical perspective (i.e., that data and methods must be evaluated based on the purposes for which they will be used), empirical evidence suggests that the quality of citizen science data has often been sufficient for the projects being pursued. For example, although some evidence suggests that volunteers engaged in reef monitoring consistently overestimate fish density, their data appear to accurately track long-term changes in coral reef ecology (; ). Similarly, although volunteer contributions to European Union biodiversity monitoring appear to be less complete than expert-collected data, they also appear to be fairly accurate in general (). Citizen volunteers have also been found to provide data equivalent in quality to those of professional scientists and experts when identifying marine debris, human impacts on the environment, and land cover (; ). In other studies, volunteers with appropriate training were able to accurately identify 94% or more of the insects that visited specific flowers (), and aggregated citizen-generated classifications of animals in photos agreed with expert classifications 98% of the time ().

When weaknesses in citizen science data have been identified, researchers have sometimes been able to tailor their investigations and hypotheses so that they take account of these limitations. For example, volunteers taking part in urban tree inventories are at least 90% consistent with expert identification across a number of areas, including site type and genus identification (). However, they occasionally miss trees or count extra ones (~1%), and they lag behind experts in the correct identification of tree species, agreeing with expert identification only 84% of the time. Yet, as Kallimanis et al. () point out, that fact need only discourage researchers from using citizen data for projects that require fine-grained identification of trees; their data can be fruitfully used for a range of other projects.

In other cases, steps can be taken to strengthen the quality of data so that they meet the needs of particular scientific investigations (). For example, the quality of data provided by volunteers can often be improved by increasing or modifying the training given to volunteers (; ; ; ). If training is either too costly or too difficult, a variety of other methods can be used to increase the quality of data. Given that new volunteers often improve over time even without training, one can simply discard results from new or inexperienced volunteers to improve the overall quality of data. If available data are so scarce that results cannot be discarded, or if discarding the data would be unethical for some reason, then unlikely results can be flagged for investigation by an expert (). Further, even when the reliability of particular volunteers is low, the data can often be aggregated in ways that result in expert-quality data (). In the absence of particular data validation or quality assurance procedures, if enough data can be generated by volunteers – which is one of the primary selling points of citizen science – the absolute volume of data can sometimes limit the influence of low-quality contributions by volunteers (). In some cases, this alone may be sufficient to allow confidence in citizen science results.
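To illustrate how some of these procedures fit together, the following minimal sketch (in Python, with entirely hypothetical photo classifications and an assumed agreement cutoff) shows one simple way that volunteer contributions might be aggregated by majority vote, with low-agreement records flagged for expert review rather than discarded. Real projects often use more elaborate schemes, such as weighting volunteers by their track records, but the basic logic is similar.

```python
from collections import Counter

# Hypothetical volunteer classifications: record ID -> labels submitted by
# different volunteers. Purely illustrative data.
classifications = {
    "photo_001": ["deer", "deer", "deer", "elk"],
    "photo_002": ["fox", "coyote", "fox", "coyote"],
    "photo_003": ["deer", "deer", "deer", "deer"],
}

AGREEMENT_THRESHOLD = 0.75  # assumed cutoff; a real project would calibrate this


def aggregate(labels):
    """Return the majority label and the fraction of volunteers who chose it."""
    label, votes = Counter(labels).most_common(1)[0]
    return label, votes / len(labels)


consensus = {}
needs_expert_review = []

for record_id, labels in classifications.items():
    label, agreement = aggregate(labels)
    consensus[record_id] = label
    # Low agreement suggests an ambiguous or unlikely result; flag it for
    # expert inspection instead of discarding it.
    if agreement < AGREEMENT_THRESHOLD:
        needs_expert_review.append(record_id)

print(consensus)            # {'photo_001': 'deer', 'photo_002': 'fox', 'photo_003': 'deer'}
print(needs_expert_review)  # ['photo_002']
```

The point of the sketch is not any particular threshold or voting rule but the general strategy: aggregation can yield reliable consensus results from individually noisy contributions, while flagging routes the genuinely uncertain cases to expert attention.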

It would be a mistake, however, to focus solely on the extent to which citizen-generated data are compatible with professional scientific practices. To do so would be to ignore important distinctions between different kinds of citizen science. For example, Ottinger () compares scientific authority-driven citizen science, which strives to emulate the protocols and practices of professional scientists, with social movement-based citizen science, which is sometimes designed to challenge traditional scientific approaches (see also ; ; ; ). Social movement-based citizen science is frequently driven by communities that are striving to address public-health concerns related to environmental pollution. In some cases, these communities argue that professional scientists demand too much evidence before they are willing to draw linkages between environmental pollution and health effects (; ). They also sometimes challenge assumptions involved in expert risk assessments (; ). Moreover, in some cases they are more concerned with using the data they collect to create effective narratives that facilitate political activism than with subjecting the data to more traditional forms of scientific analysis (). As discussed in the following section, this does not imply that the quality of citizens’ scientific work is threatened when they engage in political advocacy; rather, it illustrates that different kinds of data are needed for different sorts of projects and analyses.

When evaluating the legitimacy of these activities, it is once again helpful to appeal to the philosophical literature that emphasizes the importance of evaluating whether particular scientific activities serve the aims of inquiry in specific contexts. For example, if citizens are trying to identify potential health threats for the purposes of motivating further scientific, legal, or political action, it might be entirely appropriate for them to accept lower standards of evidence than scientists would typically require before drawing causal inferences in scholarly publications (; ; ; ). Similarly, if they are striving to challenge the assumptions involved in expert risk assessments, it might make sense to perform alternative assessments in order to determine whether they yield different results. For example, Ottinger () has reported on efforts by citizen groups in Louisiana to collect information about air quality near industrial facilities using low-cost sampling equipment colloquially known as “buckets.” Regulators have pointed out that the buckets provide very short-term measurements of air quality, whereas Louisiana air-quality standards focus on long-term (24-hour or annual) average exposures to air pollutants. However, this departure from standard approaches makes sense, given that one of the goals of the citizens is to challenge current regulatory policy by exploring connections between their health problems and short-term spikes in air pollution that are not addressed within the current regulatory framework ().

It is noteworthy that the citizen science community is already sensitive to the fact that quality standards for scientific data may appropriately vary depending on the nature of the question being asked. Rather than trying to address general questions about the quality of citizen science data, practitioners argue that it is more fruitful to understand and define quality in terms of data’s “fitness for intended use” or “fitness for intended purpose” (; ). For example, in response to a survey of professional scientists who had been involved in citizen science projects, one respondent noted, “there is no such thing as quality of data, it’s what you use the data for. Different uses will require different quality” (). Fitness for use has even been recommended as a guiding principle for U.S. federal agencies that incorporate citizen science. John Holdren, former director of the U.S. Office of Science and Technology Policy, warned that quality assurance for citizen science cannot be thought of as “one-size-fits-all”; rather, oversight and quality assurance must ensure that “data have the appropriate level of quality for the purposes of a particular project” ().

More recently, the citizen science community also has recognized that the concept of fitness for use requires further elaboration and specification. Clarifying this concept would be an important step toward standardizing citizen science projects, as well as toward providing practitioners with the vocabulary necessary to articulate the particular data quality needs of their projects. It would also facilitate further collaboration and data sharing between projects and help generate a framework for discussing questions about data quality (; ).

Thus, the philosophical literature on the aims of science, the empirical literature on data quality in citizen science, and the citizen science community’s own reflections on how to define data quality all converge on the conclusion that it is unfruitful to pose concerns about the quality of data and methodology as a universal critique of citizen science. Rather than voicing general concerns about the quality of citizen-generated data, we should direct our attention toward particular projects aimed at particular results. In those specific contexts, we can ask whether the data provided by particular citizen groups and the methods that they have employed are sufficient for addressing the epistemic or practical task at hand. We can also explore whether there are ways of altering the questions being asked or improving the data being collected so that the two are better matched. In some cases, citizen science projects may even challenge professional scientists to reconsider their assumptions about what kinds of data or methods are most helpful and what kinds of questions ought to be investigated.

Concern #3: Inappropriate Advocacy

Participants at the July 2017 workshop also reported the persistent concern that their work is challenged for being too focused on advocacy. This concern is particularly relevant in the context of the “social movement-based” citizen science projects discussed in the last section (). Worries about inappropriate advocacy can take a number of different forms, some of which have already been addressed in the preceding sections of this paper. For example, one might worry that social movement-based projects tend to be focused on monitoring potential hazards rather than on testing scientific hypotheses. One might also worry that these projects tend to employ methods that are not sufficiently rigorous. A different worry, which we have not explored in the preceding sections, is that good science ought to be “value free,” in the sense that those conducting it should be motivated by the disinterested search for truth rather than by a social or political agenda (). Those who promote “value free” science are likely to worry that citizens or academic researchers working on social movement-based research projects might not maintain an appropriate level of objectivity when analyzing results, and that those collecting data for these projects could be inappropriately influenced by ideological conflicts of interest ().

In recent decades, philosophers of science have written a great deal about whether good science can or should be value free (; ; ; ). One important contribution of this literature has been to clarify a variety of different senses in which science can be value-laden. For example, it is now widely recognized that scientists unavoidably appeal to epistemic (i.e., truth-oriented) values in their work, including values such as explanatory power, empirical accuracy, scope, and consistency (). Moreover, ethical values play important roles in guiding the kinds of inquiries that are allowed to proceed (). For example, prohibitions against subjecting human or animal research subjects to undue suffering are based on ethical values. In addition, social and ethical values often have legitimate roles to play in determining what areas of research deserve the most funding and support (). Thus, it seems clear that values have many important roles to play in scientific research.

Nevertheless, one could acknowledge all these legitimate roles for values in science and still worry that the kind of advocacy associated with many citizen science projects goes too far. For example, one might worry that both professional scientists and citizens involved in these projects could be so concerned about promoting environmental health that they are no longer objective when designing, interpreting, or communicating their work. This is an important concern, but recent work in the philosophy of science suggests that scientists can be influenced by values like the concern for environmental and public health without inappropriately jeopardizing scientific objectivity ().

Although philosophers do not speak with a unified voice on this issue, most argue that scientists can legitimately be influenced by values under at least some circumstances (e.g., ; ; ). Indeed, it is psychologically unrealistic to think that scientists who perform research on policy-relevant topics could prevent their personal, social, and political values from influencing their work in a variety of subtle ways (). Some philosophers argue that the key to maintaining objectivity is for scientists to be as transparent as possible about the roles that values might have played in their reasoning (). Others argue that scientists should allow values to influence some aspects of their reasoning (such as the standards of evidence they demand for drawing conclusions) while doing their best to prevent values from playing other roles in their reasoning (e.g., supplanting empirical evidence; ). Still others argue that the secret to maintaining scientific objectivity is to focus on the scientific community rather than on individual scientists (; ). On this view, we should be striving to promote scientific communities with diverse perspectives and with procedures in place to uncover and critically evaluate value influences.

This philosophical literature suggests that it is not inherently problematic for those engaged in citizen science to bring a perspective of advocacy to their work. What is more important is that the values they bring are made sufficiently transparent and scrutinized appropriately. Consider the case of the Louisiana Bucket Brigade and their use of special “buckets” to collect air samples near polluting facilities (). As mentioned in the previous section, these buckets meet some scientific and regulatory standards, but they depart from typical standards in other respects; for example, they collect data over short periods of time rather than over extended periods. This has created controversies among community members, regulators, and scientists about the propriety of departing from typical approaches for measuring air pollution (). From a philosophical perspective, what is important in a case like this one is to be explicit about employing methods or assumptions that depart from standard scientific norms (). Such transparency allows all parties to the dispute to decide whether they are comfortable with the findings.

Admittedly, it is unrealistic to think that those engaged in citizen science can be fully aware of (and transparent about) all the subtle ways in which their values might have influenced their work. But concerns about the failure to clarify important methodological and interpretive choices are not unique to citizen science. A growing body of empirical evidence suggests that some areas of industry-funded research are biased in favor of the funders (; ). There are also growing concerns that a great deal of academic research is not reproducible (; ; ). Of course, one might worry that citizen science can be prone to particularly severe conflicts of interest and that citizens may be less likely than professional scientists to have training or institutional constraints that could mitigate these conflicts. But the seriousness of these concerns varies dramatically from case to case, depending on the kinds of conflicts of interest that are present and the steps taken to alleviate those conflicts. In citizen science as well as industrial and academic science, procedural steps can be put in place to help bring implicit methodological choices and value judgments to light. These steps include systematically publishing scientific results, providing open access to data and methods, using quality-control procedures, following reporting guidelines, and creating venues for critically evaluating reported findings (; ; ; ; ; ).

Furthermore, citizen science projects can potentially play an important role in bringing implicit methodological choices and value judgments in scientific research to light, thereby increasing objectivity and reproducibility. Philosophers of science have pointed out that when values are shared across a research community they can easily go unrecognized; thus, one of the best ways to draw attention to implicit values and subject them to critical scrutiny is to incorporate investigators or reviewers with diverse perspectives (; ). Non-professional scientists who participate in research projects are often able to bring unique perspectives because of their social situation or life experiences. For example, citizens affected by environmental pollution might value precaution to a greater extent than many professional scientists and therefore challenge the standards of proof typically required for drawing conclusions about public-health hazards (; ). Citizens might also recognize important knowledge gaps or questions that have received inadequate attention from professional scientists (). For example, two scientists involved in a citizen group called the Madison Environmental Justice Organization (MEJO) in Madison, Wisconsin, have reported how citizens helped to challenge a variety of questionable assumptions held by professional risk assessors, including assumptions about the level of pollution in nearby lakes, the number of fish being consumed by subsistence anglers, and the parts of the fish being consumed (). Thus, not only is it possible for those engaged in citizen science to take a perspective of advocacy without destroying scientific objectivity, but sometimes their value-laden perspective can actually increase scientific objectivity by uncovering values or assumptions in traditional scientific work that would otherwise have escaped critical scrutiny.

Conclusion

As citizen science has become more widespread, questions have arisen about its bona fides as a legitimate form of scientific inquiry. These questions are ethically significant, because wasting scarce resources on low-quality research is problematic, as is dismissing valuable research that can contribute to better social decision making. We have shown that the philosophy of science provides helpful resources for addressing three concerns about research quality: (1) that citizen science is not sufficiently hypothesis-driven; (2) that citizen science does not generate sufficiently high-quality data or use sufficiently rigorous methods; and (3) that citizen science is tainted by advocacy and therefore is not sufficiently disinterested. The message of this paper is that these broad concerns are something of a red herring. From a philosophical perspective, one cannot evaluate the overall quality of citizen science—or any form of science—without considering the details of specific research contexts. In many cases, citizen science provides one of the best avenues for achieving scientific goals and moving scientific research forward.