So, you suspect that someone participating in a citizen science project has committed research misconduct. What do you do?1
Consider the following scenarios:
Scenario 1: A small non-profit group is conducting a citizen science data-gathering project in which users download an app to facilitate data collection. To encourage data gathering and repeat engagement with the project, the non-profit has “gamified” the app, holding occasional real-time competitions. The project yields the common result that a small percentage of users are very engaged and supply the bulk of the data, especially during competitions.2
After analyzing and disseminating their findings and corresponding open-access datasets, the group is contacted by someone who noticed some statistically unusual data. Upon further examination, they realize that one user was responsible for nearly 10% of all the data collected by the project, and only these data are statistically aberrant. However, when they remove those data to correct the problem, the results of the project fall below the threshold of statistical significance. They realize that this user’s data have compromised their results and undermined their credibility. After this episode is widely publicized and criticized on social media, some of the non-profit’s donors decide to withdraw their funding.
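The audit described above can be sketched as a simple data-quality check: flag any contributor who supplies an outsized share of the data and whose values are statistically aberrant relative to everyone else's. The sketch below is a minimal illustration of that idea, not anything the scenario prescribes; the function name, the share threshold, and the z-score cutoff are all assumptions chosen for the example.

```python
# Illustrative sketch only: flag contributors whose submissions are both
# voluminous and statistically aberrant relative to all other contributors.
# The thresholds below are assumed values, not a recommended standard.
from collections import defaultdict
from statistics import mean, stdev

def flag_aberrant_contributors(records, share_threshold=0.05, z_threshold=3.0):
    """records: iterable of (user_id, measurement) pairs.

    Returns user_ids who supplied more than share_threshold of all data
    AND whose mean measurement lies more than z_threshold standard
    deviations from the mean of everyone else's measurements."""
    by_user = defaultdict(list)
    for user, value in records:
        by_user[user].append(value)
    total = sum(len(values) for values in by_user.values())
    flagged = []
    for user, values in by_user.items():
        # Only heavy contributors can single-handedly skew the results.
        if len(values) / total <= share_threshold:
            continue
        others = [v for u, vs in by_user.items() if u != user for v in vs]
        if len(others) < 2:
            continue  # cannot estimate spread from fewer than two points
        mu, sigma = mean(others), stdev(others)
        if sigma > 0 and abs(mean(values) - mu) / sigma > z_threshold:
            flagged.append(user)
    return flagged
```

Re-running the analysis with a flagged user's records removed would mirror the correction step in the scenario, which is exactly what revealed that the project's results depended on that one contributor.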
Version A: When the non-profit contacts the user, he is embarrassed and tells them that he just got caught up in the competition and recorded random data in order to win.
Version B: When the non-profit examines the problematic data, they notice that some of them seem to result from a bug confined to that user’s phone and were not likely to have been noticed by the user.
Version C: When the non-profit tries to contact the user, they discover that he is actually a member of a group opposing the non-profit’s work, and that he has planted fraudulent data in its work and subsequently published a blog post about his “sting” operation.
Scenario 2: A small, private group of researchers conducts a study using common citizen science tools and led by trained scientists. The results are published on the group’s website, and they urge policy makers to take the information into account. Multiple news and social media outlets with the same commitments and values as the group publicize the study and cite its results in various opinion pieces. Not long after the results are released and publicized, the entire project is found to be fraudulent and designed to seed the scientific literature with work that can be cited in support of a pre-determined goal.
Version A: The project is designed so that although data contributors collect information in good faith, the study results are a foregone conclusion.
Version B: The contributors are aware of the project’s intention and contribute false data.
Version C: Contributors collect data in good faith, but a trained scientist on the project later changes some of the data.
These scenarios are meant to illustrate a few ways in which citizen science results could end up being accused of, or genuinely tainted by, research misconduct.3 Traditional science has already provided reason to be wary of research findings; with every new revelation of scientific misconduct, plagiarism, and irreproducibility, we become a little less inclined to trust initial findings and more inclined to establish mechanisms to aid verification and prevention. What does this imply for citizen science?
The scenarios are not meant to imply that citizen science is uniquely problematic, but it does have a unique problem. Traditional research can be at risk of fraud and abuse in the same kinds of ways, but precisely because it is at risk, regulations have been established to guard against and confront instances of misconduct.4 Citizen science, however, may lack an institutional framework for addressing research integrity, and that gap exposes the field to reputational risk. Citizen science research will become a force only if it is used, and it will be used only if it is trusted. As the field grows and its research findings contribute increasingly to the scientific literature and to policymaking, it is critical for citizen scientists to think deliberately about fostering trust in the results of citizen science.
We might be tempted to assume that because citizen science projects often do not rely on salaried workers or grants, there would be no reason for anyone to manipulate results. But such a perspective problematically reduces the possible motives for manipulation to merely financial ones. The fact that citizen scientists typically do not stand to gain financially from manipulating results does not mean that no other gains would be possible. Moreover, independent of that problem, the more that the public blindly trusts the results of citizen science, the more attractive a target the field might become to those who intend to sow misinformation for their own purposes by co-opting the field’s methods.5
In traditional research, the term “research integrity” is used to capture the wide array of factors that contribute to trust in research. As defined by the National Institutes of Health, for example, research integrity means “the use of honest and verifiable methods in proposing, performing, and evaluating research; reporting research results with particular attention to adherence to rules, regulations, [and] guidelines; and following commonly accepted professional codes or norms” (https://grants.nih.gov/grants/research_integrity/whatis.htm). This definition undergirds a general approach to ensuring that research is conducted ethically and is trustworthy.
One reason that citizen science research can be trusted is because it usually uses commonly accepted principles of data collection and processing.6 However, the field lacks an overall, widely accepted approach to research integrity, which would at least include training citizen scientists in concepts and methods of research integrity, establishing mechanisms to protect the reliability of research results, and instituting processes to address instances of research misconduct. Ironically, one reason for the lack of an overall approach to research integrity is also one of the distinctive features of citizen science: its decentralized, open-access ethos means fewer organizational “gates.” Citizen science is exciting because it embraces people from a wide variety of backgrounds, with a diversity of values and goals, and uses inexpensive, shared, and/or open-access technology to enable broader participation. But a lack of gates also might mean a dearth of gatekeeping, the traditional approach to quality assurance.7 Therefore, as citizen science creates new approaches to scientific discovery, it also must consider new approaches to ensuring research integrity. By establishing means of preventing and addressing misconduct and communicating them widely, citizen science advocates can convey their commitment to research integrity to the public, collaborators, and participants.
In considering mechanisms that might be adopted for citizen science, it may be helpful to build on or incorporate some aspects of the current approach to research misconduct. Below, I briefly outline such approaches in the United States, and describe ways in which citizen science research might fall outside the reach of these approaches.
If research is not trusted, it will not form a reliable basis for application, policy, or further research. Historically, researchers were trusted to produce reliable results, until a series of research fraud cases made it increasingly clear that such trust was not always warranted.8 As a result, research integrity and the prevention of research misconduct have become a priority for the governments of the most research-intensive countries around the world (Resnik, Rasmussen, and Kissling 2015). Research misconduct is also a problem from the standpoint of both government and private funding agencies because it squanders scarce resources and risks the erosion of “sustained public trust in the research enterprise” (which is often translated as support for federal funding of research) (https://ori.hhs.gov/federal-research-misconduct-policy). Thus both as trustees of taxpayer money and in the public interest, the United States and other governments have taken on the responsibility of establishing regulations to limit research misconduct and address it if it occurs.
The simple definition of research misconduct is research that involves intentional misrepresentation.9,10 In the United States and many other countries, the conventional regulatory interpretation of research misconduct is that it involves fabrication, falsification, or plagiarism (“FFP,” as this is known) (Resnik, Rasmussen, and Kissling 2015). This definition and its interpretation have varied somewhat over time (sometimes including sabotage, for example), and have sometimes been tendentious.11
In the United States, research misconduct regulations derive their regulatory force from conditions attached to federal funds received by individuals or institutions. Two of the most significant sources of federal academic research funding are the Department of Health and Human Services (DHHS) and the National Science Foundation (NSF).12 Although federal research funding fell below 50% of total research funding for the first time in 2017 (Mervis 2017), it does not follow that less than 50% of research is covered by federal research misconduct regulations, thanks to a condition in federal regulations known as an “assurance.” To receive federal funding, institutions must commit to establishing and following research misconduct policies for federally funded projects.13 Presumably for reasons of efficiency, institutions seem to use the federally required process for all research at their institution rather than setting up a second, completely distinct process for non-federally funded research.14 Thus, although the regulations attach to federal funding, by virtue of this assurance they cast a much longer regulatory shadow, covering most research conducted at academic institutions. The effect of the federal assurance and institutional policies means that traditional research misconduct investigations are best understood as they occur in academic institutions and as outlined in federal regulations.
Although the force of regulations is tied to funding, an accuser will not always be aware of the funding source of a project. This is an important way in which the regulatory purview is neither visible nor intuitive to potential accusers. As a result, research misconduct complaints can be reported in a variety of ways, including to a publication’s editorial office, a federal agency, or the employer of the accused. Because assessing such complaints often requires access to (and sometimes sequestration of) research records, initial consideration of research misconduct allegations is usually the responsibility of the institution in which it is alleged to have occurred, where a person accused is employed, or where the research records are stored. Institutional policies vary, but usually include a layer or two of assessment of the claims prior to official investigation in order to prevent malicious or baseless accusations, and require notification of federal agencies if federal funds are involved. Frequently, federal agencies accept the findings of institutional inquiries and investigations rather than conduct an inquiry themselves.
Consequences of research misconduct vary, but at the level of the United States Federal Government, cases can involve a debarment of the individual from receiving federal grant funding (for anywhere from 3–5 years to, in rare cases, life); an agreement to have one’s research overseen by others for a defined period of time; and/or an agreement to not serve in a consulting or advisory role for the funding agency.15 Consequences at the institutional level can include expulsion, termination, or degree revocation. Misconduct also can be tried in criminal court (e.g., for racketeering if it involves fraudulent use of federal funds) or civil court for restitution or repayment (AnnArbor.com staff 2010), and can involve the retraction of affected publications. These consequences can sometimes effectively end an individual’s academic research career (though not always; see Galbraith 2017).
Though there is no way to know for sure, a majority of citizen science research in the United States is likely covered by federal research misconduct regulations by virtue of including collaborators whose work is subject to them (for example, academic employees or employees of non-profits receiving federal funds). However, because of the way that citizen science can be organized, funded, conducted, and disseminated, it may increasingly fall outside of existing research misconduct regulations.16 For example, citizen science projects can be exclusively crowd-funded, or covered entirely by a small non-profit’s shoestring budget. If the institutions involved in a citizen science project do not receive federal funding, and no one subject to federal regulations is involved in the project’s planning, the project is not covered by the federal regulations. Similarly, if the results of the project are not published in a journal subscribing to standards articulated by organizations such as the Committee on Publication Ethics (COPE), an allegation of misconduct might not be possible at the editorial office (Wager and Kleinert 2012).
The limits of the regulatory approach quickly become clear: When research is not subject to conventional regulatory coverage, it is no more immune to misconduct than any other research, but it is immune to the typical research misconduct consequences of funding revocation, grant debarment, employment termination, and/or correction of the research record. In such cases, there is no regulatory entity positioned to investigate allegations or deliver consequences.
The very trait of decentralization in citizen science that makes it nimble, exciting, and open to new ideas and users is what makes addressing research integrity difficult: There is no single gate to keep (Figure 1). This problem requires serious consideration about how to approach the possibility of research misconduct in the absence of regulations and an entity that could enforce policies or deliver consequences.17
Citizen scientists have the same interest in ensuring integrity of their research as any other researcher. In fact, there may be even more reason for citizen scientists to focus on the integrity of their work: Its sheer novelty means that the products of its research may be met with more skepticism than identical research conducted under the purview of an institution with familiar policies and guarantees of quality.18 In addition, to the extent that citizen science informs policy decisions or contributes to the scientific basis of federally funded research, the public has a right to expect that citizen science research is conducted with integrity, and that there would be a mechanism to address allegations of fraud or misconduct.
Many citizen science research projects will likely continue to be covered by federal regulations due to one or more of the principal investigators being institutionally employed or receiving federal funding. In such cases, questions about the integrity of a project would use the policies of the Principal Investigator’s (or a Co-PI’s) home institution to investigate research misconduct allegations. However, even here there are potential problems. Jurisdiction is likely to be limited: If a project participant was alleged to be responsible for the misconduct, could the institution address the charge at all? The volunteer would have no obligation to participate in an inquiry; moreover, sequestration of records might be impossible, and no punishments or consequences delivered by the institution would be likely to significantly affect the volunteer. However, if the research were published, the institutionally employed researcher and coauthors might be required to retract a paper, which would have the benefit of cleaning up the scientific record and could help to maintain trust in citizen science.19
There are at least three ways in which, under current systems, research misconduct in a citizen science project not covered by federal regulations could potentially be addressed. First, publishers and editorial offices have the power to retract papers; second, licensure or society membership requirements can be revoked; and third, tort law could be invoked against someone committing research misconduct. Each, however, would offer only narrow, spotty coverage; lack the resources or authority to investigate such allegations or deliver consequences; and/or be impracticable. In the discussion below, I consider only citizen science research not subject to the United States’ federal research misconduct regulations.
After federal regulations and institutional policies, the second major bulwark against research misconduct stands in the offices of editors and publishers. It is often during peer review of a paper, or after publication, that allegations of research misconduct are made, and contact is often initiated at the publishing journal’s editorial office.20 If the issue concerns something that the journal’s editorial staff can verify themselves (for example, image manipulation identified with the help of a journal’s forensic software), and they receive no satisfactory explanation from authors, they may choose to issue a retraction unilaterally. However, when such allegations are made, journal offices often refer the allegations back to the authors’ home institution(s) for additional inquiry, for reasons as simple as lacking authority to sequester evidence, being physically distant, or lacking sufficient financial resources. If a scientist leading a citizen science project is not employed at an institution with policies regarding misconduct, there is simply no locus or authority for an investigation. In the absence of an institutional investigation, it is similarly difficult to imagine an editorial office developing and funding an investigatory process that would fairly and accurately distinguish between legitimate and baseless (or “trolling”) allegations. Even in the current for-profit publishing environment, “Journals don’t really like going back to investigate when things go wrong…. They complain that it’s time-consuming and laborious and difficult” (Kupferschmidt 2018).
Given that the central values of citizen science include the democratization and broad (and free) dissemination of science, it is not even clear that all citizen science results would eventually be “published” in conventional ways. A variety of channels – from conventional open-access journals, to new open-access, online-only journals, to blog posts and user groups – offer alternative routes for research dissemination. The ubiquity of these alternatives does not mean that material is necessarily read or trusted, of course, but it highlights the ways in which the reach of citizen science is not tied to conventional publication mechanisms. Thus, although journals can provide some help in setting the record straight after an instance of research misconduct, the reach of this tool is limited in important ways.
If a citizen scientist who needed to belong to a professional organization for licensure purposes committed research misconduct, the threat of expulsion from that organization could be a quite effective deterrent. Typically, such societies already have mechanisms for de-licensing members. Those mechanisms usually depend on criteria more general than research misconduct, but the concept of “conduct unbecoming,” for example, could encompass a charge of research misconduct. It bears noting, however, that making, defending, and investigating charges of research misconduct can be very demanding of both time and resources.21 With a few exceptions, it is difficult to imagine who might bring such a charge against a citizen scientist and be willing to see it through when not backed by the resources of an institution or government.
Society membership (rather than licensure) is another possible, but even weaker, option for addressing research misconduct. If a citizen scientist belongs to a society merely to participate in professional email lists and listservs, or to receive journal subscriptions and conference discounts, expulsion from that society would likely have little to no deterrent effect. Moreover, this approach also assumes that the scientific society in question has the resources to investigate such allegations, and the membership’s trust that it can do so in a fair and rigorous way. Although potentially useful in specific circumstances, the fact that citizen science is not typically associated with society or professional memberships makes this a weak tool for addressing the possibility of research misconduct.
If it could be shown to have caused harm or broken a law, research misconduct could conceivably be prosecuted in court, independent of where the responsible party is employed or conducted the research. A simple case example is plagiarism: The owner of a copyright could take a plagiarizer to court. Or, someone who suffered physical or financial harm due to use of a product based on fraudulent results may have a cause of action in court.
While these legal paths are theoretical possibilities, it is highly unlikely that they would actually be pursued in citizen science. The shoestring budgets of most citizen science projects are recognized deterrents to those in search of a payout. The activities typical of contemporary citizen science – counting animals, measuring environmental markers, interviewing people – are also much less likely to cause bodily harm (key to large legal awards) than are other kinds of research. This option seems only minimally promising as a mechanism for handling research misconduct in citizen science.
Though these three approaches could be helpful adjuncts for addressing research misconduct in citizen science, none is likely to play a central role in either confrontation of research misconduct or in reassuring the public that citizen science has integrity. What other options should citizen science pursue to protect the integrity of the field and its research results?
The lack of regulatory oversight for significant swaths of citizen science research means that the community needs to propose its own processes for preventing research misconduct, identifying it, rectifying the research record, and clearly communicating to stakeholders a commitment to the highest standards of research integrity. But the very values of the field of citizen science may pose a barrier in this regard. As some of the most important values of citizen science favor allowing as many groups as possible to contribute their research to the public domain, inevitably some less-rigorous or even fraudulent research will be disseminated. Conversely, as gatekeeping becomes more stringent, we may be able to depend increasingly on the integrity of the research, but fewer groups may be able to contribute their efforts – which would begin to reproduce the very structure of professionalism in science that some view as problematic. In this way, citizen science is similar to other areas in which a domain formerly controlled by experts becomes more open, with consequent changes in how we must gauge the reliability of newcomers’ work. In the absence of formally codified and enforced processes, citizen science must either articulate a process for helping to establish and ensure the integrity of research, or risk the marginalization of its contributions. What can and should citizen science be doing now to address this? There are two aspects to the answer: prevention and confrontation.
Most people do the right thing most of the time. Yet evidence suggests that, to some extent, an individual’s perception of how ethically those around them behave is correlated with their own self-reported level of ethical behavior (Martinson 2010). Thus, fostering a robust culture of integrity in citizen science may be one of the most important ways in which the field can confront the possibility of research misconduct.
There are multiple ways to do this. First, the more that ethics is an explicit focus in the field, the more that citizen scientists are reminded to consider ethical issues in their research. For example, journals or other venues for disseminating citizen science can include ethical criteria for submission. This could be mandatory (e.g., obtaining IRB approval for human subject research as a condition for publication) or voluntary (e.g., requesting authors to describe ethical issues in their projects and how they were addressed by the group; this might or might not be included in the actual publication). Conferences in the field could encourage submitters to reflect on ethical issues arising in their projects in their submissions and/or presentations. This need not consume a major part of a presentation, but engaging in common, public reflection on ethical issues could significantly help to foster an ethical culture.
Leaders in citizen science have a particular burden to ensure that they are educated about ethical challenges and standards, so that they can emphasize the importance of those issues with their collaborators. Project managers similarly have a duty to guide and instruct their collaborators, participants, and volunteers about particular ethical issues in their project. Groups or institutions with significant and ongoing citizen science projects may need to incorporate aspects of research integrity into their sustained training and education programs. Scholars in research ethics can help with this task, bringing the results of decades of study to bear on citizen science. More generally, greater collaboration between citizen scientists and scholars in the field of research ethics would allow for mutual benefit.
Under the auspices of the Citizen Science Association (CSA) and its peers in other geographic areas, such as the European Citizen Science Association (ECSA) and the Australian Citizen Science Association (ACSA), the field can offer considerable support to its practitioners by developing free, easily accessible materials and tools on ethics and integrity. For example, tutorials on ethical treatment of collaborators, animal species, and ecosystems could be compiled in webinars, conference workshops, consensus/best-practice panels, and white papers. Multiple versions of these could be submitted and peer-reviewed to allow both broad representation and quality. It is particularly important that such tools not appear behind subscription firewalls if the intent is to foster a collaborative and ethical culture. Fostering a culture of commitment to ethics in citizen science can prevent problems by sensitizing organizers and participants to ethical issues, educating them about solutions, and supporting them by removing obstacles and/or incentivizing doing the right thing. It emphasizes “upstream ethics,” tackling an issue or potential issue early in a project rather than waiting for it to arrive later as a problem.
Another effective approach to enhancing integrity in citizen science would be to make as much of the research as possible transparent to others. For example, recent United States federal regulations that direct agencies using citizen science to make citizen science data publicly available would, as Guerrini et al. point out, “[create] opportunities to investigate questionable or poor-quality data and assess fitness for use through independent examination” (2018: 135). Another possibility is providing for post-publication peer review by ethicists (see discussion of the conference organized by Vayena in Marcus 2014).
As helpful as all of this is, however, it will never be 100% effective in preventing research misconduct. There will be intentional disregard for the standards of research integrity, and there will be oversights. Thus, the field also must grapple with the question of whether to try to hold researchers and/or participants accountable for research integrity violations and, if so, how.
One approach that citizen science might take is to deliberately not establish accountability mechanisms for its research. This approach certainly has practical and in-principle benefits. Practically speaking, no sensational case of fraudulent citizen science has yet been reported, and in conventional research, a series of sensational cases was necessary to prompt regulatory action to hold scientists accountable. Thus it may seem alarmist to raise the specter of misconduct in citizen science. This is compounded by the fact that citizen science as a practice does not have significant resources with which to establish a robust accountability process, nor does it have authority to hold people accountable.
There also may be principled reasons for resisting anything resembling the conventional regulatory approach for assuring research integrity. For example, citizen or “DIY” scientists can be contemptuous of current regulatory mechanisms that seem ill-suited to new research approaches. When uBiome was criticized for failing to obtain IRB approval for its crowd-funded and volunteer microbiome sequencing project, the uBiome founders responded by arguing that “IRBs belonged to the ‘Old World of scientific inquiry’ and didn’t address the unique challenges of citizen science” (Marcus 2014; see also Richman and Apte 2013). One benefit of citizen science might be that it can produce research more quickly and in more targeted ways than conventional research precisely because it is not impeded by regulation. Moreover, one might argue that regulations actually outsource ethical accountability to “professionals” rather than assigning responsibility to the researcher herself.
However, the most important reason for establishing means of holding citizen scientists accountable for research misconduct is to reassure potential users of the work that it can be trusted. Because the public may not be moved by practical or in-principle arguments, particularly if citizen science experiences the kinds of abuse that prompted regulation in conventional science, it is worth beginning to consider what steps might be taken to hold citizen science research accountable.
One approach would be to federalize oversight of all scientific research in the United States. For example, in 2017, Denmark established a new “Board for the Prevention of Scientific Misconduct” (Retraction Watch 2017b). The press release announced that unlike the regulatory approach in the United States, which covers only certain research attached to federal funding or applications for federal approval, “Private research will also be included in the board’s supervisory area” (emphasis added).22,23 This is highly unlikely to occur in the United States for many reasons, not least because of the power of lobbying on behalf of private corporations, but also for principled reasons protecting the free association of individuals who might collaborate on a project.
It seems unlikely that, in the absence of federal or state laws mandating oversight of research or the creation of citizen science “licensing boards,” citizen science will ever be centralized enough to make research misconduct investigations involuntary. Its very nature, and much of its appeal, is that it transcends disciplinary, institutional, economic, and regulatory domains, so that even if some research projects are subject to some restrictions, no set of restrictions or standards will ever apply to all of them. Citizen science is not one thing; it is an umbrella term for a large set of diverse research practices whose uniting theme is involving the public in research more actively than in the past. The only alternative to formal, mandatory mechanisms is to consider creative, thoughtful, voluntary measures that could assure those using citizen science research that it can be trusted.
A very informal starting place would be for individual research projects to include descriptions of what was done to address ethical issues in the project. As the field develops standards, researchers could voluntarily declare that their projects adhere to them. A model for this might be the way in which individual academic journals state that they follow the Committee on Publication Ethics (COPE) or International Committee of Medical Journal Editors (ICMJE) standards for publication ethics. Although these voluntary measures would carry no formal force, the more widely they were embraced by practitioners in the field, the more effective peer pressure would become in ensuring research integrity.
However, a “voluntarily involuntary” arrangement may be particularly effective. Consider a “research integrity insurance” agreement. A research integrity board could be set up under the auspices of an organization such as the Citizen Science Association, and citizen science projects could pay a nominal fee to declare that they hold themselves voluntarily to the authority of the board. Fees could be small due to the low likelihood of misconduct, but accumulated fees could be used to support the costs of inquiry or investigation when necessary. (Alternatively, such fees could be put toward an insurance policy, if an insurer could be found, designed to pay for misconduct proceedings should the need arise.) Even if it were never used, such a mechanism could have a significant effect on the reliability of citizen science research, simply because it could be used if necessary. Thus, such a mechanism would need to be visible, located in an institution that could sustain an investigation even without the help of the researchers in question, and able to impose some kind of consequences.
It also would be important to consider the possible consequences of a finding of misconduct under this kind of voluntary arrangement. One possibility is publicity: If an investigation discovered that misconduct had been committed, the names of those involved could be posted on the organization’s website – a “naming and shaming” consequence. Alternatively, admission to an organization, its conference, or other membership benefits also could be rescinded – a “shunning” consequence.
Setting up such a mechanism would not be a small task. Among the many challenges, some of the most difficult to resolve would be definitional. For example, what would count as research misconduct vs. sloppiness or accident? What evidence would be required to deter trolling but allow laypeople to bring a claim forward? Would misconduct refer merely to fraudulent research, or would other ethical violations count as misconduct as well?24 Who should be held accountable – project managers? All collaborators? Just a “chief scientist”? There is much to be learned from the field of research ethics regarding these questions, but there are also important differences in citizen science that will require these guidelines to be tailored appropriately.
Another set of challenges involves harms that could potentially result from these processes. For example, what obligations does the field have to avoid reputational or career harms resulting from citizen science research misconduct processes? This might be a risk particularly when processes are insufficient to identify baseless claims, or when confidentiality is breached during the process. Good legal guidance will be required to ensure that such obligations are met. The possibility of legal consequences for initiating such mechanisms is daunting, yet so too are the possible consequences of failing to act altogether.
One of the main challenges in considering how to ensure the integrity of citizen science is that our conventional regulatory mechanisms track categories that are being rearranged or ignored altogether in this new field. This is a multi-tiered problem: The regulatory requirements often don’t map onto citizen scientists’ employment status; investigatory avenues are not clear; and when they are (e.g., pursuing retraction from a journal), the resources required might not be available. And none of this even begins to address the difficulty that someone alleging misconduct might have when trying to find a place to lodge a complaint against a group of private citizens acting together yet independent of an employer or governing body.
Citizen science will not be used if we cannot be confident in its findings. It can be undermined at its foundations if citizen scientists contribute fraudulent research that then becomes infamous. The only way to prevent such an undermining is for citizen science to commit itself publicly to rigorous standards of practice that ensure the integrity of research. The answer cannot be to assume that research misconduct will not happen in citizen science; this was precisely the assumption that preceded the widespread abuses of traditional science and the advent of research misconduct regulations. As this paper has described, it would also be wrong to assume that research misconduct in citizen science would be covered under existing mechanisms. In keeping with its ethos, citizen science must collaborate on new approaches to securing the integrity of the field and its research.
1For simplicity, this discussion focuses on United States policies regarding research misconduct. Other countries have similar policies, but due to the unique features of law within each country, it is impossible to generalize all of these points internationally. See Resnik, Rasmussen, and Kissling (2015) for a comparison between research misconduct policies in 40 countries.
2This is an instance of the general phenomenon known as the “Pareto Principle,” in which 80% of effects come from 20% of causes. For an account of how this manifests in citizen science, see Haklay (2016).
3The fact that good data and methodology practices could have avoided some of this is important, but insufficient for avoiding misconduct, because it is possible for citizen science data collected using bad methods to be propagated among those who lack the knowledge to discern bad practices.
4For lack of a widely accepted term to contrast with citizen science, I will adopt the term “traditional” science to refer to the typical way in which research has been conducted in recent decades (i.e., for the most part federally funded and/or occurring within institutions of higher education).
5This could happen in at least two ways. “Infiltration” might occur when enemies of a citizen science project are able to participate in it and sabotage it from the inside, and “fake science” might be designed from the outset, as is “fake news,” to sow manufactured “data” to support a particular viewpoint. As one article put the possibility, “If science really is a populist phenomenon, then aren’t we at risk from science demagogues?” (Engber 2017).
6Even when citizen science adopts existing data practices, however, it is not always a straightforward application of established disciplinary norms. Citizen science is often interdisciplinary, which among other things means that it might draw on multiple and conflicting standards of data collection, processing, etc. In this way, an important contribution that citizen science might make to science is to encourage greater integration of methods between disciplines that frequently operate independently.
7For example, a traditional gatekeeping mechanism is the requirement of scientific journals that the publication of research involving human subjects must be accompanied by an indication of research approval by a body charged with protecting human subjects of research. This has been fairly successful due to the fact that publication is the currency of conventional science and higher education practices, but as that currency becomes less valuable, its success as a gatekeeping function will also diminish.
8For a helpful summary of past cases of research abuse and the genesis of the Office of Research Integrity, see Price (2013). For a partial history of research misconduct cases internationally, see Lock (2001).
9The original practice was to call this “research fraud” rather than “research misconduct.” However, as Schachman notes in his discussion of the term’s definition, “The change to ‘misconduct’ instead of ‘fraud’ was initiated and effected by lawyers and not by scientists. It was because of the legal burden of having to prove intent and injury to persons relying on fraudulent research that counsels for NSF and PHS wanted the change to misconduct….” (1993: 148). The boundaries of what counts as research misconduct remain contentious; for example, the American Geophysical Union has recently added sexual harassment to its definition of research misconduct: “Scientific misconduct also includes unethical and biased treatment of people, in a professional setting and while participating in scientific programs, as identified in the Code of Conduct section of this Policy. Included are actions such as discrimination, harassment, and bullying.” See https://ethics.agu.org/ for an overview of the development of this policy, and https://ethics.agu.org/files/2013/03/Scientific-Integrity-and-Professional-Ethics.pdf for the full policy.
10Research involving human subjects or animals is covered under separate regulations (the Common Rule and the Animal Welfare Act), so mistreatment of humans or animals (what one might view as misconduct by another name) would be addressed under those regulations – though if the researchers also committed research fraud, they might additionally be subject to misconduct regulations.
11There is a history of significant dissent regarding what ought to count as research misconduct (Buzzelli 1993; Schachman 1993; Rasmussen 2014). Even now, many institutions go beyond the federal definition and include phrases like “other serious deviations” in their research misconduct policies.
12Although they are subject to the same federal research misconduct policy, the DHHS and NSF each have their own separate processes for investigation, and the demographics of the subjects of their investigations vary significantly (Parrish 2004).
13As this is summarized recently in a Notice from the National Institutes of Health, “To be eligible for PHS funding, domestic and foreign institutions must maintain an assurance on file with ORI. The assurance is the institution’s certification that it has developed and will comply with its written policies and procedures for responding to allegations of research misconduct in PHS-supported research that meets the requirements of 42 CFR 93 [the specific regulatory code where the federal research misconduct policy can be found]” (NIH 2018).
14However, according to Committee A of the American Association of University Professors, as of 2006, “Some institutions have evidently decided to make the effort [to set up a distinct review process]: to date, 164 have explicitly declined to commit themselves to imposing on research that is not federally funded the regulations that govern federally funded research….” (AAUP 2006).
15Consequences can even extend to not being able to use any lab containing any equipment purchased with federal funds (Stein 2015, #8).
16The very possibility of escaping regulations that can be onerous (particularly regarding human subject research) may in fact incentivize the use of citizen science methods to escape these perceived burdens.
17This problem is not unique to the United States. For example, during a retreat to discuss the role of journals in research misconduct in India, “a representative from the Indian Council of Medical Research … said that the council had authority only over research that it had funded” (Office of Research Integrity 2003: 3).
18For a response to such worries, see Elliott and Rosenberg’s paper in this issue. It is worth noting that traditional scientists probably also have a stake in ensuring the integrity of citizen science. At some level, very few people will make a distinction between citizen science and other types of science, so what is seen to be true in one area of science will likely be seen by an average layperson to be true in other areas as well.
19Of course, if this were a frequent occurrence and/or garnered publicity, it might also shake confidence in citizen science, as it may have already in conventional science in the wake of retractions and lack of reproducibility of some research.
21One study estimated the costs of an actual case at their institution, concluding that it approached $525,000 in direct costs (Michalek, Hutson, Wicher, and Trump 2010). Another estimated the costs for cases reported by the Office of Research Integrity to range from approximately $116,000 to over $2,000,000 per case (Gammon and Franzini 2013).
22See Retraction Watch’s translation of the Danish regulation at: http://retractionwatch.com/wp-content/uploads/2017/05/DCSD_EN.pdf. Even in Denmark’s approach, however, the first step is still to report an allegation of research misconduct to a researcher’s institution, which has the responsibility of forwarding the notice to the Board. Given the fact that citizen science research sometimes occurs outside of any particular institution, it is not clear how such a policy would be implemented.
23In the United States, the National Academies of Sciences, Engineering, and Medicine recently issued a call for a nonprofit “Research Integrity Advisory Board” (NAS 2017; see also Retraction Watch 2017a). However, they suggest that this would be advisory only, and primarily directed at fostering research integrity within institutions, not in the private sphere.
24Recall from note 9 above the history of the term “research misconduct” in the United States’ federal regulations: Some prefer to include “serious deviations from accepted practice” under the definition of research misconduct, but eventually that term was eliminated, and the regulations currently classify only falsification, fabrication, or plagiarism as research misconduct.
An early version of this work was presented at the Citizen Science Association conference in May 2017; I am grateful for helpful discussions there. I am also grateful to two anonymous reviewers for the Journal, whose comments helped to improve this paper.
The author has no competing interests to declare.
American Association of University Professors, Committee A. 2006. Research on Human Subjects: Academic Freedom and the Institutional Review Board. Academe, 92(5): 95. [online access at: https://www.aaup.org/report/research-human-subjects-academic-freedom-and-institutional-review-board last accessed 28 November 2018]. DOI: https://doi.org/10.2307/40253500
AnnArbor.com staff. 2010. Former cancer researcher at U-M sentenced for sabotaging student’s work. The Ann Arbor News, 30 September. [online access at: http://www.annarbor.com/news/former-cancer-researcher-at-u-m-sentenced-for-sabotaging-students-work/ last accessed 23 November 2018].
Buzzelli, DE. 1993. The definition of misconduct in science: A view from the NSF. Science, 259: 584–585; 647–648. DOI: https://doi.org/10.1126/science.8430300
Engber, D. 2017. The Grandfather of Alt-Science. FiveThirtyEight, 12 October. [online access at: https://fivethirtyeight.com/features/the-grandfather-of-alt-science/ last accessed 23 November 2018].
Galbraith, K. 2017. Life after Research Misconduct. Journal of Empirical Research on Human Research Ethics, 12(1): 26–32. DOI: https://doi.org/10.1177/1556264616682568
Gammon, E and Franzini, L. 2013. Research misconduct oversight: Defining case costs. Journal of Health Care Finance, 40(2): 75–99. DOI: https://doi.org/10.1371/journal.pmed.1000318
Guerrini, CJ, Majumder, MA, Lewellyn, MJ and McGuire, AL. 2018. Citizen Science, Public Policy. Science, 361(6398): 134–136. DOI: https://doi.org/10.1126/science.aar8379
Haklay, ME. 2016. Why is participation inequality important? In: Capineri, C, Haklay, M, Antoniou, J, Kettunen, J, Ostermann, FO and Purves, RS (eds.), European Handbook of Crowdsourced Geographic Information, 35–45. London: Ubiquity Press. DOI: https://doi.org/10.5334/bax.c
Kupferschmidt, K. 2018. Researcher at the center of an epic fraud remains an enigma to those who exposed him. Science Magazine, 17 August. [online access at: http://www.sciencemag.org/news/2018/08/researcher-center-epic-fraud-remains-enigma-those-who-exposed-him last accessed 23 November 2018].
Marcus, A. 2014. The Ethics of Experimenting on Yourself. Wall Street Journal, 24 October. [online access at: https://www.wsj.com/articles/the-ethics-of-experimenting-on-yourself-1414170041 last accessed 23 November 2018].
Martinson, BC, Crain, AL, De Vries, R and Anderson, MS. 2010. The Importance of Organizational Justice in Ensuring Research Integrity. Journal of Empirical Research on Human Research Ethics, 5(3): 67–83. DOI: https://doi.org/10.1525/jer.2010.5.3.67
Mervis, J. 2017. Data check: U.S. government basic share of research funding falls below 50%. Science Magazine, 9 March. [online access at: http://www.sciencemag.org/news/2017/03/data-check-us-government-share-basic-research-funding-falls-below-50 last accessed 23 November 2018]. DOI: https://doi.org/10.1126/science.aal0890
National Academies of Sciences, Engineering and Medicine. 2017. Fostering Integrity in Research. Washington, DC: The National Academies Press. DOI: https://doi.org/10.17226/21896
National Institutes of Health. 2018. Responsibilities of recipient institutions in communicating research misconduct to the NIH. [online access at: https://grants.nih.gov/grants/guide/notice-files/NOT-OD-19-020.html last accessed 23 November 2018].
Office of Research Integrity. 2003. The journal’s role in scientific misconduct. Available at: https://ori.hhs.gov/sites/default/files/editor_retreat.pdf.
Parrish, DM. 2004. Scientific Misconduct and Findings Against Graduate Students. Science and Engineering Ethics, 10(3): 483–491. DOI: https://doi.org/10.1007/s11948-004-0006-8
Rasmussen, LM. 2014. The Case of Vipul Bhrigu and the Federal Definition of Research Misconduct. Science and Engineering Ethics, 20(2): 411–421. DOI: https://doi.org/10.1007/s11948-013-9459-y
Resnik, DB, Rasmussen, LM and Kissling, GE. 2015. An international study of research misconduct policies. Accountability in Research, 22(5): 249–266. DOI: https://doi.org/10.1080/08989621.2014.958218
Retraction Watch. 2017a. U.S. panel sounds alarm on ‘detrimental’ research practices, calls for new body to help tackle misconduct. RetractionWatch.com, 11 April. [online access at: https://retractionwatch.com/2017/04/11/u-s-panel-sounds-alarm-detrimental-research-practices-calls-new-body-help-tackle-misconduct/ last accessed 23 November 2018].
Retraction Watch. 2017b. Denmark to institute sweeping changes in handling misconduct. RetractionWatch.com, 19 May. [online access at: https://retractionwatch.com/2017/05/19/denmark-institute-sweeping-changes-handling-misconduct/ last accessed 23 November 2018].
Richman, J and Apte, Z. 2013. Crowdfunding and IRBs: The case of uBiome. Scientific American blog, 22 July. [online access at: https://blogs.scientificamerican.com/guest-blog/crowdfunding-and-irbs-the-case-of-ubiome/ last accessed 23 November 2018].
Schachman, HK. 1993. What is misconduct in science? Science, 261: 148–149; 183. DOI: https://doi.org/10.1126/science.8305005
Stein, C. 2015. 8 things you might not know about research misconduct proceedings: Guest post. RetractionWatch.com, 13 August. [online access at: https://retractionwatch.com/2015/08/13/guest-post-8-things-you-might-not-know-about-research-misconduct-proceedings/ last accessed 23 November 2018].
Vayena, E, Brownsword, R, Edwards, SJ, Greshake, B, Kahn, JP, Ladher, N, Montgomery, J, O’Connor, D, O’Neill, O, Richards, MP, Rid, A, Sheehan, M, Wicks, P and Tasioulas, J. 2016. Research led by participants: A new social contract for a new kind of research. Journal of Medical Ethics, 42: 216–219. DOI: https://doi.org/10.1136/medethics-2015-102663
Wager, E and Kleinert, S. on behalf of COPE Council. 2012. Cooperation between research institutions and journals on research integrity cases: Guidance from the Committee on Publication Ethics (COPE). March. [online access at: https://publicationethics.org/files/Research_institutions_guidelines_final_0_0.pdf last accessed 23 November 2018].