Introduction

The phenomenon of online citizen science, defined as a participative way of running scientific research projects in which researchers and citizens work together through the Internet, primarily by collecting, processing, and/or analysing data (; ), has become a topic of research in itself. Research about citizen science has mainly focussed on two key areas of interest for project leaders: volunteer engagement and the quality of project outcomes.

We focus on quality because it is a key concern for project leaders, who strive for high-quality project outcomes () that depend on the work of volunteers. The quality of citizen science outcomes is essential because the reliability of research depends on it. Concerns about quality in citizen science (; ; ; ; ) are not surprising given the involvement of diverse, distributed, and usually unknown citizens with varying levels of expertise in complex research tasks for which academics have been trained for years ().

In many well-known citizen science projects, citizens usually perform straightforward tasks, such as classifying images based on predefined categories as in project Galaxy Zoo, or transcribing structured information like ships’ logbooks in Old Weather (; ; ). The quality of those types of tasks is mainly ensured by aggregating multiple contributions or comparing them with a gold standard (; ). However, not all problems and activities are suitable for division into simpler tasks (). In complex citizen science, tasks are less modularizable and more knowledge-intensive and time-consuming. Examples of complex citizen science include the transcription, translation, and contextualisation of handwritten manuscripts commonly used in the humanities, in particular in historical and literary research (). Given the knowledge-intensity of complex tasks, the quality of their outcomes is usually difficult to evaluate ().

Quality is essential for the outcomes of citizen science, and earlier research has recommended keeping outcomes in mind when designing projects (), but we still know little about how quality is ensured, especially in complex citizen science (). The aim of this study is therefore to understand how project leaders in complex citizen science ensure the quality of project outcomes.

Recent studies indicate that the feasibility of delegating research tasks to (usually) unknown citizens through the internet depends on the type of knowledge needed to perform such tasks and the quality requirements of resulting outcomes (). Because scientific research is the knowledge-creating process par excellence, and quality is a characteristic of knowledge as input to task performance and of the outcome of knowledge work (), we take a knowledge perspective to understand how quality is ensured in complex citizen science projects. Knowledge management refers to a set of processes that facilitate the creation, finding, and connecting of knowledge that is geographically, structurally, or functionally scattered, in order to support innovation and improve performance (; ).

In this paper, we focus on the humanities and use the word science to refer to all fields of academic research, hence our consideration of crowdsourcing projects in the humanities as citizen science (). The humanities are a suitable setting for studying complex citizen science because they involve knowledge-intensive tasks, such as the transcription of manuscripts, in which citizens contribute to the interpretation and processing of textual data (). Quality of manuscript transcriptions is essential because transcriptions are used as input for linguistic, literary, or historical research.

In the remainder of this article, we review citizen science literature and identify knowledge management activities in these types of projects. We describe the research setting and methods, and compare the knowledge management activities used in five collaborative online citizen science projects in the humanities. This study expands prior frameworks for the design and implementation of citizen science projects (; ) by examining in detail the activities used to manage knowledge work and to address quality issues in complex citizen science.

A Knowledge Perspective on Quality in Citizen Science

Citizen science projects involve knowledge work. First, tasks contribute to the scientific research process () and hence to knowledge creation. Second, these tasks depend on human skills () involving creativity and leading to unique outcomes () to support research objectives. And third, unique outcomes entail the integration and application of knowledge () of both professional researchers and citizen participants.

The characteristics of citizen science projects, and the importance of knowledge in them, lead to two knowledge challenges. The first challenge is the uncertainty resulting from the diversity and geographical distribution of citizens, who are a priori unknown to the scientists leading a project (). This results in uncertainty about citizens’ knowledge and time availability for projects (). The second challenge is the weaker control that project leaders have over knowledge flows, as citizens are not employed by the research organization (; ) and are thus not subject to formal supervision ().

Given the characteristics and knowledge-related challenges of citizen science, a knowledge perspective seems a suitable lens for studying how project leaders ensure quality in this context. Therefore, we review and integrate the citizen science literature from a knowledge management point of view.

Accessing versus acquiring knowledge

Knowledge acquisition refers to the activities used to obtain new knowledge for an organization. Traditionally, new knowledge can be acquired through research and learning () and/or by hiring new employees () with distinct knowledge and expertise, evidenced by their curriculum and qualifications. In citizen science, however, recruiting can serve either to acquire new knowledge and learn from the public (; ) or to access and use specific skills of the public.

In citizen science, project leaders do not select and hire employees (). Instead, citizens voluntarily decide whether to participate in a project, a process referred to as self-selection (; ) that leads to the abovementioned challenge of knowledge uncertainty. To avoid such uncertainty and to ensure quality, the literature suggests selecting participants a priori, either by requiring specific qualifications as a prerequisite for participation () or by testing their skills (). However, given the voluntary nature of citizen science, such practices raise questions about how to deal with participants whose knowledge and skills do not fit expectations. Moreover, since only a small group of people contribute regularly to such projects (), it is unclear how selecting participants could still result in time and resource efficiency, key benefits of citizen science ().

Sharing and integrating knowledge

Knowledge sharing refers to the communicative activities by which individuals make part of their knowledge available to others (). Traditionally, the knowledge management literature distinguishes different ways to communicate and share knowledge, depending on the distribution of people () and the characteristics of knowledge (). Activities, such as storing and distributing documented knowledge through websites, are suitable for transferring explicit knowledge, whereas personal interactions are better for sharing tacit knowledge (). Knowledge sharing through face-to-face interaction is easier among people collocated in space and time, while asynchronous communication tools are more suitable for sharing knowledge between distributed people (). Citizen science participants are geographically dispersed and mainly contribute online, which could mean that only explicit knowledge is shared through asynchronous means, raising questions about the type of knowledge and how it is shared.

Coordinating and applying knowledge

Knowledge has value for an organization only when it is applied in action (). In citizen science this refers to the research-related tasks outsourced to and performed by the public. Scholarly research includes tasks with varying degrees of knowledge tacitness, which can make the tasks complex. In online citizen science, tasks are usually simplified () or modularized () to allow their online performance. This usually results in a pooled interdependence (), as tasks are performed independently from each other and integrated into one final outcome. Yet we know little about how knowledge-intensive tasks are coordinated and knowledge is applied in complex citizen science (). Moreover, whereas in research organizations work is allocated on the basis of employees’ skills and expertise, in citizen science, people self-allocate or decide which tasks they want to perform (). This brings us back to the question of how to deal with knowledge uncertainty resulting from self-allocation.

In knowledge management, the integration of effort refers to the embedding of expert knowledge into routines, rules, and procedures, so that non-experts can perform tasks without having to learn (). Similarly, citizen science projects use protocols or standardized procedures, project plans, training, and supervision (; ; ; ). However, unlike employees who receive monetary rewards for applying their knowledge, citizen volunteers are neither paid nor subject to contractual agreements (; ) or the rules of scientific practice (). This makes them free to follow or ignore any plans, procedures, or training. Altogether, this raises questions about how to manage knowledge flows and reduce knowledge uncertainty.

Assessing knowledge

Knowledge assessment has barely been examined in the knowledge management literature. In citizen science, knowledge assessment is usually associated with the evaluation of citizens’ contributions (). Suggested assessment activities include comparing contributions with existing scientific literature or professional observations (). This implies that literature or observations already exist (), but it is unclear how quality is assessed for new research topics.

Another way to assess contributions is through multiple-keying with voting; that is, having multiple participants perform exactly the same task and assuming that what the majority inputs is correct (). This is usually used for modular and structured tasks (; ), but as tasks become more complex it is more difficult to validate the quality of their results (). Other means to assess citizen contributions are expert reviews () or citizen peer-reviews (). However, expert reviews require time investments that citizen science was supposed to reduce, and citizen peer reviews again raise questions about the selection of participants (). Therefore, it is still unclear how the results of knowledge work are assessed in complex citizen science.
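
For structured tasks, multiple-keying with voting can be implemented in a few lines of code. The sketch below is a minimal illustration (the function name and example labels are ours, not taken from any project discussed here); it aggregates independent answers to the same classification task and reports the level of agreement:

```python
from collections import Counter

def majority_vote(contributions):
    """Return the most frequent answer and the share of contributors who gave it."""
    counts = Counter(contributions)
    answer, votes = counts.most_common(1)[0]
    return answer, votes / len(contributions)

# Example: five volunteers classify the same image using predefined categories.
labels = ["spiral", "spiral", "elliptical", "spiral", "spiral"]
consensus, agreement = majority_vote(labels)
print(consensus, agreement)  # spiral 0.8
```

For free-text outcomes such as full transcriptions there is no single majority answer to count, which is one reason this form of aggregation breaks down as tasks become more complex.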

Based on this review, it is still unclear how leaders of complex citizen science projects deal with diverse and distributed knowledge and ensure quality outcomes. We therefore carried out empirical research to understand how citizens are recruited, knowledge is shared, tasks coordinated and performed, and outcomes assessed in complex citizen science.

Research Methods

This research focuses on the transcription of old handwritten manuscripts, their translation into modern language, and contextual annotation—tasks that are knowledge-intensive, time-consuming, and prone to errors. These tasks are knowledge-intensive because they involve diverse and hard-to-decipher handwriting styles (), and require the recognition and interpretation of words and abbreviations based on the context of the manuscript, the peculiarities of the author’s handwriting, and the historical period. These tasks are also time-consuming because manuscripts vary in length and condition of the paper; that is, it takes time to complete a transcription or translation and to indicate which parts are unreadable because of damaged paper or smudged ink. Moreover, manuscripts contain vast amounts of textual data at many levels (i.e., characters, words, sentences, paragraphs, and pages), increasing the likelihood of errors. Transcribing manuscripts is susceptible to human mistakes because it is easy to skip a line while reading, and people tend to finish off sentences or words before they actually read them completely.

To understand how knowledge work is managed and quality ensured in these complex projects, we conducted a qualitative multiple-case study (, ), with the citizen science projects as cases and the activities performed by project leaders as focus of analysis. We purposefully contacted three core organizations in the field of cultural heritage and humanities research in the Netherlands: The Cultural Heritage Agency, the Meertens Institute, and the Huygens Institute for Netherlands History. We then used theoretical sampling () to select projects involving citizen participants who performed knowledge-intensive tasks through the Internet. The following five projects (Table 1) were examined and compared to build an explanation () of how quality is assured in citizen science.

Table 1

Overview of the cases at the time of the study.

Case | Letters and Correspondents around 1900 | Digitizing Belle van Zuylen’s Correspondence | Sailing Letters | Gouda on Paper | Transcribe Bentham

Start of project | 2009 | 2010 | November 2011 | November 2011 | September 2010
End of project | November 2016 | Ongoing | October 2012 | Ongoing | Ongoing
No. of recruited (registered) citizens | 20 | 7 | 100 | 60 | 3,000
No. of active* citizen participants | 20 | 5 | 100 | 50 | 11
Type of documents | Letters | Letters | Letters | Books and Manuscripts | Manuscripts and letters
Scope of project** | 1,912 letters | 1,762 letters | 5,862 letters | 1,000 pages | 15,634 pages
Type of tasks | Adding metadata, Transcribing, Annotating | Modernizing, Annotating, Adding metadata | Transcribing | Transcribing, Translating | Transcribing, Encoding

* Refers to people who have been actively engaged in the project during the period of study.

** Number of letters or pages transcribed or translated in the project up to March 2016.

The first project, Letters and Correspondents around 1900, started in 2009. The project leader was a scholar in a Dutch research institute and the transcriptions were meant to be used by other humanities researchers. Participants included volunteers with a literature background and literature students at a Dutch university. Together they constituted a small community of about 10 to 20 people. They used a web-based tool to integrate transcriptions. The scans and transcriptions have been available online since November 2016. The second project, Digitizing Belle van Zuylen’s Correspondence, started in 2010 and was also led by a professional researcher and a research assistant from the same institute. Citizen contributors were, at the time of writing, members of an association interested in the work of this 18th-century writer. They use e-mail as means of communication and a web-based tool to integrate all contributions into one searchable online edition. The third project, Sailing Letters, started in 2011 and took just over a year to transcribe about 5,800 scans of handwritten documents from the 17th and 18th centuries. Participation was open to everyone who felt capable of carrying out this task. About 100 citizen volunteers contributed to the project.

The fourth project, Gouda on Paper, was initiated and led by one expert volunteer with transcribing experience and an educational background in language and literature, and a professional archivist from the regional archive. The call for participants took place in November 2011 through local media. Participation is open to anyone who feels capable of performing the proposed tasks. During our study, the project counted 50 active participants. They use various technologies to support their tasks: e-mail, Dropbox, and a web-based tool to integrate transcriptions. Finally, the Transcribe Bentham project of University College London started in 2010 with the aim of transcribing and encoding (in TEI-compliant XML) the handwritten original work of Jeremy Bentham, to support existing research projects (; ). Participation is open to everyone, and from October 2012 to June 2014 about 400 people had transcribed or partially transcribed at least one manuscript; of these, 11 had transcribed 100 folios or more. The project uses an online transcription environment (based on open source software) where all (diplomatic) contributions are posted and integrated. All these projects are collaborative () because they were designed by project leaders of different institutions and citizens volunteered to transcribe, translate, or annotate data—tasks that entail the analysis and processing of textual data.
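
To give a sense of what the encoding task in Transcribe Bentham involves, the sketch below uses Python’s standard library to build a small TEI-style fragment for one transcribed passage. The elements used (lb for a line break, unclear for an uncertain reading) are common TEI conventions, but the snippet is an illustration only under those assumptions and does not reproduce the project’s actual transcription schema or tooling:

```python
import xml.etree.ElementTree as ET

# A minimal, illustrative TEI-style fragment for one transcribed passage:
# <lb/> marks a line break, <unclear> marks a reading the transcriber is
# unsure about. This is a simplification, not the project's actual schema.
p = ET.Element("p")
p.text = "the greatest happiness of the "
ET.SubElement(p, "lb")
unclear = ET.SubElement(p, "unclear")
unclear.text = "greatest"
unclear.tail = " number"

print(ET.tostring(p, encoding="unicode"))
# <p>the greatest happiness of the <lb /><unclear>greatest</unclear> number</p>
```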

Data collection

We followed the activities in these five projects over a period of more than two years. Data were collected by the first author between December 2012 and December 2015. Our data (Table 2) consist of: semi-structured interviews (over 26 hours) with project leaders and volunteer citizens, observations of meetings and training sessions (45 hours), and documents including project manuals, screenshots of website pages, news articles, and other project-related documents (83 documents). Several interviews were conducted via Skype, follow-up information and clarifications were obtained via e-mail and telephone, and numerous documents were gathered to complement and triangulate findings.

Table 2

Data sources.

Source | Letters and Correspondents around 1900 | Digitizing Belle van Zuylen’s Correspondence | Sailing Letters | Gouda on Paper | Transcribe Bentham

Interviews (formal and informal) | 9 | 7 | 6 | 7 | 9
Observations* | 6 | 1 | - | 5 | -
Documents:
Manual (versions) | 5 | 5 | 6 | 12 | 3
Website/blog | 1 | 5 | 1 | 4 | 2
Other project documents | 1 | 1 | 4 | 12 | 2
News articles (incl. recruiting open calls) | - | 1 | 2 | 4 | 1
Minutes of meetings | 1 | 7 | - | 2 | -

* Projects with no observations had no training sessions during the research period.

Semi-structured interviews allowed for consistency in the topics covered across cases and for flexibility in adjusting questions depending on the type of interviewees and the flow of the conversation (; ). The length of the interviews was about one hour on average. We interviewed project leaders and through them gained access to citizen participants. Project leaders explained the ways of working, provided supporting documentation, and allowed observations of project meetings and training sessions. Interviews with project leaders took place in their offices, while interviews with citizens were held in their homes or via Skype. Most of the interviews (27) were taped (upon prior informed consent) and transcribed verbatim, and notes were taken of the informal conversations.

Observations of meetings and trainings facilitated our understanding of the activities and the dynamics of collaboration among participants and with project leaders. In general, the first author took an observer-as-participant role (). Project members knew about her presence, which allowed her to freely observe and ask questions to get to know participants, to clarify and understand their activities and their use of the online tools. For the Transcribe Bentham project, given the lack of training sessions and the broad geographical distribution of participants, the first author examined the transcription desk and performed a few partial transcriptions, which helped us to understand the type of task, its complexity, and the tool used to perform it. Since the focus of the research was understanding project leader activities in real-life projects, we did not quantitatively measure quality, nor was this possible because only one project kept track of the differences between contributed transcriptions and reviewed versions.

Data analysis

Data analysis was aimed at explanation building () by identifying, describing, and connecting the activities performed by project leaders to manage knowledge work and ensure quality. First, through an iterative process, we coded the activities or work practices within these projects, including the main task and supporting activities. A distinction was made between activities performed by citizens and by project leaders. We also coded communication activities among citizens and between citizens and project leaders, and identified the technology and manuals used in each project.

Second, activities across projects were compared in terms of their purpose from a knowledge management perspective. For instance, training sessions and manuals were seen as different ways to share project leaders’ expert knowledge with citizens. Third, we grouped activities with the same purpose and assessed similarities/differences across projects (). We compared the different ways (i.e., patterns) in which participants were recruited, how project leaders shared knowledge, how tasks were performed, how quality was assessed, and the roles of technology. Finally, we looked for similarities and differences (, ) among the cases to explain () how and why specific combinations of activities were chosen on a case-by-case basis.

Findings

In the studied projects, textual data are interpreted and processed by citizen volunteers. The expected outcomes are transcriptions of such quality that they can be used in further academic research. In this context, quality involves two essential characteristics: accuracy, the match between processed information and the original object of research; and uniformity, the standardized way in which data is presented.

When asked what a good quality transcription entailed, one project leader described accuracy as, “…all the letters in the old form are converted into letters in the modern form.”

Similarly, one citizen volunteer explained: “In the transcription you have these really old textual characters, so you convert them into modern day writing… so, that strange curl, is it an ‘L’, is it a ‘B’? In the end there is only one character. It’s about finding the right letter.”

Another citizen said, “For instance, the word “immediately,” that’s with double “m” but [author] writes it with one “m” and you can think that you know how it should be, but it’s not how it’s written.”

Accurate transcriptions require, at a minimum, the basic cultural competence to recognize handwritten characters and the structure of sentences. Moreover, understanding the language of the period in which manuscripts were written (e.g., Latin, 17th-century Dutch, 18th-century French) helps in the interpretation of textual characters and can contribute to greater accuracy.

Uniformity refers to the presentation of textual information in a standard manner. When explaining what makes a good transcription, a project leader said, “following the guidelines. Because you must, of course if you work with lots of different people, well… stick to the agreements. So, for example, what do you do with indentation? And underlining? And what do you do with words that you cannot read?”

Standardization or uniformity requires citizen volunteers to be aware of the rules of the field and the project, and to be thorough in applying them consistently. Standardization is important for quality because information needs to be searchable and to allow aggregation (e.g., by period, author, or location) to support research analyses.

In the subsections that follow, we discuss the knowledge-management activities used to facilitate quality contributions.

Accessing knowledge: recruiting participants

We distinguished two types of recruiting approaches in the studied projects: an open call, in the form of a public announcement of the project; and a targeted call, where the invitation to participate was directed to only a specific group of people (Table 3). We also found that some projects used both types of call. More importantly, our findings indicate that the different ways of recruiting participants influenced the number and type of people, and the knowledge they brought into the project.

Table 3

Different configurations of knowledge management activities.

Activity | Letters and Correspondents around 1900 | Digitizing Belle van Zuylen’s Correspondence | Sailing Letters | Gouda on Paper | Transcribe Bentham

Accessing knowledge: recruiting participants | Targeted call | Targeted call | Targeted and open calls | Open call | Open call
Sharing and integrating knowledge | Training, Regular comm., Manual | Training, Regular comm., Meetings, Manual | Regular comm., Forum, Manual | Training, Regular comm., Meetings, Manual | Regular comm., Forum, Manual
Coordinating knowledge: organizing tasks | Individual | Individual | Individual, Rotation (in workflow) | Group discussions | Individual, Rotation (optional)
Assessing knowledge work: evaluating contributions’ quality | Peer-reviews and Peer-expert reviews | Professional-expert review | Peer-expert reviews | Peer-expert reviews | Professional-expert reviews

The projects Gouda on Paper and Transcribe Bentham used a true open call to recruit participants. In both cases, project leaders announced the project through various media and set no restrictions for participation, thus creating a greater pool of potential participants. In contrast, the projects Letters and Correspondents around 1900 and Digitizing Belle van Zuylen’s Correspondence recruited people within the networks of their respective project leaders. That is, they targeted people who they thought would be interested or who they knew had relevant knowledge to perform the main task. The project Letters and Correspondents around 1900 targeted primarily university students with a history and literature background. Digitizing Belle van Zuylen’s Correspondence recruited participants among the members of the long-established association dedicated to the work of this female author. Finally, the Sailing Letters project targeted citizens who had participated in a previous citizen science project, namely transcribing a 17th-century bible, but additional people joined after hearing about the project in the media or through the project leader’s network.

At the time of our study, the projects Gouda on Paper, Sailing Letters, and Transcribe Bentham had reached about 50, 100, and 400 contributors respectively with their open call approach, whereas the targeted calls of Digitizing Belle van Zuylen’s Correspondence and Letters and Correspondents around 1900 had enabled them to recruit 7 and 20 participants respectively.

Project leaders and citizen participants were very much aware of self-selection. For instance, one project leader said, “Volunteers are not selected by me […] everyone who wants to contribute can do that […] though they should believe that they can do it.” Volunteers explained their decision to participate with comments such as, “I have gained a lot of experience in these 20 years […] most people who enrol [in project] are very interested, they are well-educated. So, most of them know that they can handle this [task].” Another volunteer commented, “And because we are very interested in [author] and because we thought that we had some knowledge that could be useful, we said: let’s do it!”. That is, they referred to the fit between the project and their knowledge and interests.

Sharing and integrating knowledge

Following or in parallel with the recruitment of citizens, project leaders shared their expert knowledge and facilitated communication among citizens through trainings, manuals, regular online communication, meetings, or online forums. Training was used in some projects (Table 3) to teach participants basic transcription and annotation norms, to agree on standardization rules, and to familiarize them with the online tools or work environment. One project leader explained, “It is about a workshop we had twice. It was mainly technical, how it works, and after that we had one about how to actually use it. Because you transcribe, but how should you do that? A note here is different than when you put it on paper. How do you do that in the system?”

Regardless of how knowledgeable citizen participants were, project leaders provided training to ensure that transcriptions were standardized and to avoid problems with integrating multiple contributions. Training sessions were not intended to teach participants about the content or language of the text; rather, they were aimed at sharing project leaders’ expert knowledge about transcription conventions and the use of the online transcription environment. In projects with larger numbers of people, training sessions at the research institute were not organized very often; instead, project leaders relied on manuals and other supporting materials that participants could use online before or while performing the task.

In fact, all projects used manuals or guidelines to share knowledge. Manuals included rules that were too broad, or too specific to a transcription methodology, to be embedded in the technology. That is, the extent to which knowledge was codified and standardized in technological artefacts (in metadata fields, encoding buttons, or drop-down lists) influenced the rules included in manuals. For instance, manuals included rules to standardize dates, spelling, and punctuation, and explained when and how to resolve abbreviations. One of the manuals indicated, for example, how to enter dates: “Look whether you can find a date on the letter and fill it in, in the order: day, month, year.” Similarly, in another project the guideline stated: “Date: (of the letter, this order holds for all the dates in the metadata!)17531216 (letter 0010) yyyymmdd. In case the date is not complete, then write as follow: 175312??.”
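
As an illustration of how such a standardization rule can be made checkable, the snippet below validates a metadata date against the yyyymmdd convention quoted above, accepting question marks for parts that cannot be read. The function and its validation policy are our own illustrative assumptions, not part of any project’s tooling:

```python
import re

# The guideline quoted above: dates are entered as yyyymmdd, with '??'
# standing in for parts of the date that cannot be read (e.g., 175312??).
DATE_PATTERN = re.compile(r"^(\d{4})(\d{2}|\?\?)(\d{2}|\?\?)$")

def check_metadata_date(value: str) -> bool:
    """Return True if `value` follows the yyyymmdd convention."""
    match = DATE_PATTERN.match(value)
    if not match:
        return False
    _, month, day = match.groups()
    if month != "??" and not 1 <= int(month) <= 12:
        return False
    if day != "??" and not 1 <= int(day) <= 31:
        return False
    return True

print(check_metadata_date("17531216"))    # True  (16 December 1753)
print(check_metadata_date("175312??"))    # True  (day unreadable)
print(check_metadata_date("16-12-1753"))  # False (wrong order and format)
```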

Manuals could either substitute for training or serve as an additional means of communicating project rules. They were used during training sessions and distributed to participants before they engaged in the task, to ensure awareness of rules and expectations. Manuals were also used during task performance, as a reference in case of doubt and to ensure contributions fulfilled expected criteria, and after task completion, to assess and improve submitted contributions.

Manuals were revised throughout a project, on the basis of discussions during training sessions or frequently asked questions. For instance, one of the project manuals explicitly stated, “Instructions are by definition work in progress: they are modified on the basis of questions, comments, specific user cases and new insights.” This was mainly the case during pilot phases of the projects, as project leaders tried to find the best way to codify knowledge and communicate field conventions and standardization rules. Another project explained the manual’s work-in-progress status in its newsletter: “We are not there yet, so the manual is not final. Some issues will come from practice and they will be modified. That happened during the training evening, when we found some problems that we had not seen before. These have been added in the manual right away.”

Moreover, because of the variability in citizens’ knowledge and skills, some manuals offered extra supporting information. For example, the Sailing Letters manual included a detailed list of abbreviations common in 17th- and 18th-century documents, and links to specialized websites. Similarly, participants in Transcribe Bentham had access to online examples of Bentham’s handwriting.

Knowledge was also shared through regular online communication. Project leaders or coordinators answered questions and resolved issues that citizens encountered while performing the task. As one participant explained, “If you are not sure about a thing or you put something, highlight it as questionable, or you do not quite understand it, you are not sure whether your reading of it was correct, you just put a little question and they will always get back to you.” And one of the manuals stated, “In case of problems and special issues that are not covered in the manual, please contact the project leader.”

Regular communication also included instances of feedback about the quality of contributions and advice on how to improve them in the future. Citizen participants appreciated regular communication from project leaders, especially feedback and prompt reactions to questions, as evidenced by this participant’s statement: “You can always turn to [project leader] with questions. [Project leader] answers quickly, I was really amazed, and if that is not the case then it is for a good reason and you get an answer quite soon. This is really nice, because you are busy [with task] and if you do not know something, it is really convenient that someone gives you the answer right away, then you can move on [with the task]. This is good, I like it.”

Meetings and online discussion forums were organized to support knowledge sharing and interaction among citizen participants. The projects Digitizing Belle van Zuylen’s Correspondence, Gouda on Paper, and Letters and Correspondents around 1900 organized meetings most often. In Gouda on Paper, where tasks were performed in groups, proximity allowed regular meetings between representatives of each group (i.e., a group coordinators meeting). In these meetings, coordinators provided a group progress update, discussed problems, and tried to find solutions. In Sailing Letters and Transcribe Bentham, the larger number of participants and their wide geographical distribution meant that fewer face-to-face meetings were organized; these projects instead offered an online discussion forum.

Coordinating knowledge: organizing tasks

We distinguished three different approaches to organizing the transcription and translation of manuscripts: individual, group discussion, and individual rotation (Table 3). In projects with a larger number of people and few prior knowledge requirements, such as Gouda on Paper, Sailing Letters, and Transcribe Bentham, project leaders allowed or actively encouraged the informal revision of transcriptions by rotating texts among participants or discussing them in groups. Both the group discussions and the rotation of transcripts had the same objective: having multiple people perform the main task (transcribing, translating) to improve quality. The rotation of transcripts was explained as follows:

“… as a second step we have the transcription, these have been rotated twice, still among volunteers, then is the level … it gets better all the time […] it can also happen that the second volunteer is not better than the first one, so he might add little to it, but it can also be that he does actually see something… you just get the chance. After that another volunteer goes over it and then it [the text] is removed from this process.”

The choice between rotating texts and discussing them in groups was influenced by the proximity of participants and the technology used in the project. That is, if the online transcription tool affords versioning, it facilitates the rotation of texts and their corresponding transcriptions, because changes can be tracked and the best transcription chosen. If the tool does not allow versioning, rotating the transcription of texts becomes more complex and requires more coordination among participants. The studied projects had different levels of versioning, ranging from saving successive versions of a transcription and keeping track of daily changes to tracking changes at the word level. In Gouda on Paper, for instance, given the number of participants and their proximity, the project leader urged citizens to organize groups. However, the technology did not afford word-level versioning; hence, groups first transcribed (or translated) the text individually in Word; then, at an agreed date, participants met to compare their individual work, discuss it, produce the best transcription (or translation) possible, and add it to the online tool. In the Sailing Letters project, rotation was part of the normal workflow, organized in steps, and for each step the transcription versions were saved. Transcribe Bentham was the only case in which word-level versioning was possible. Surprisingly, despite the possibility of tracking and reversing word-level changes, very few people worked on transcriptions started by others (i.e., rotation), mostly preferring to start transcriptions from scratch. In contrast, projects based on a targeted call, such as Digitizing Belle van Zuylen’s Correspondence and Letters and Correspondents around 1900, used tools that did not afford word-level versioning, and citizen participants therefore mainly worked individually.
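
To illustrate what word-level versioning makes possible, the sketch below compares two versions of a transcribed line using Python’s standard difflib module and lists the word-level changes. It is a simplified stand-in for the change tracking built into the projects’ transcription tools, not a reproduction of any of them:

```python
import difflib

def word_changes(earlier: str, later: str):
    """List the word-level edits between two versions of a transcription."""
    old_words, new_words = earlier.split(), later.split()
    matcher = difflib.SequenceMatcher(a=old_words, b=new_words)
    return [(op, old_words[i1:i2], new_words[j1:j2])
            for op, i1, i2, j1, j2 in matcher.get_opcodes()
            if op != "equal"]

# A later volunteer revises one reading in an earlier transcription.
version_1 = "the cargo was landed at the quay on the 16th of december"
version_2 = "the cargo was loaded at the quay on the 16th of december"
for op, old, new in word_changes(version_1, version_2):
    print(op, old, "->", new)  # replace ['landed'] -> ['loaded']
```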

Regardless of how tasks were organized—individually, through group discussion, or in rotation—quality was primarily accomplished through individual task performance. Some individuals proofread, assessed, and improved the quality of their own transcriptions before saving or submitting them in the online work environment. One volunteer said, “I tend to go through it two or three times to figure out what the gaps are, what have I missed out. I do that as part of a proofreading process to check it all: does it all make sense? is it something that the editors will find semi-useful at least?”

Other volunteers performed their work in one go, very carefully, and felt confident enough to submit it without proofreading, as the best they were able to do.

Assessing knowledge work: evaluating contributions’ quality

In all the studied projects, citizen contributions were assessed and improved. We identified two assessing approaches: professional-expert reviews and peer-expert reviews (Table 3). Professional-expert reviews were carried out in the projects Transcribe Bentham and Digitizing Belle van Zuylen’s Correspondence, where tasks were performed individually online. Contributions were assessed and improved individually or by a small group of two or three professional researchers. Though the number and distribution of participants were greater in Transcribe Bentham than in Digitizing Belle van Zuylen’s Correspondence, in both projects only a small group of citizens transcribed regularly, and these citizens did not know each other. Therefore, it seemed more efficient to let citizens focus on the core task and leave the assessment and correction to the professional project staff.

Gouda on Paper initially used professional-expert reviews to assess and correct participants’ contributions. Over time, however, peer-expert reviews became a better option for the project. Project leaders were not able to keep up with the high number of transcriptions, and there were also more people transcribing than translating manuscripts, which resulted in workflow disconnections. Most importantly, the need to make transcriptions and translations available online to the public meant that all contributions needed to be assessed and corrected more quickly. Therefore, committees or teams of peer-experts were organized to assess the accuracy of transcriptions, check interpretations and corresponding translations, review the language of translations, and improve readability for present-day readers. The creation of committees was explained in the project’s newsletter:

“We want to ask [research institute] to publish the transcribed and translated texts in [online tool]. Before we do that, we need to thoroughly go over everything again. This should be done by people with the educational background, training or profession. We have these people in the project, spread over the different groups. We have asked them to participate in the committees that will perform this final control.”

Such a comment in the newsletter indicates that project leaders were aware of the expertise level of participants.

Peer-expert reviews were an essential part of the Sailing Letters project. Participants who had a relevant educational or professional background (history, literature, linguistics) and extensive experience in transcribing were asked to review and improve preceding contributions. Peer-experts were identified by checking their short biographies, usually requested by the project leader when they joined the project, and their time availability. These assessment and correction tasks were also rotated among the peer-experts.

Finally, in Letters and Correspondents around 1900, reviews changed during the course of the project. Initially the project had three main steps: transcription, assessment, and final editing. However, transcriptions and reviews done by students were not always accurate and resulted in long discussions in the annotation field. Because of this, the project leader asked experienced volunteers to carry out a second assessment round. Hence, the project combined peer reviews with peer-expert reviews.

Discussion

We set out to investigate how citizen science projects involving complex tasks are managed and quality outcomes ensured. From a knowledge perspective, project leaders ensure quality by recruiting citizens (accessing knowledge), sharing and integrating their expert knowledge, coordinating knowledge work, and assessing and improving outcomes. Together these knowledge management processes contribute to the quality of citizen science outcomes (Figure 1).

Figure 1 

Possible configurations of knowledge-management activities in citizen science.

While the citizen science literature recommends that project leaders announce projects through various communication channels to recruit people with different motivations (), the knowledge management approach suggests other recruiting strategies based on knowledge access. That is, some project leaders access knowledge through targeted calls to reduce knowledge uncertainty and to increase the chances of quality outcomes. Targeted calls are based on the idea that only a subset of the public has the knowledge and interest to contribute to the production of scientific public goods (). These calls rest on the judgment made by professional scientists about citizens’ knowledge. This assessment is influenced by prior knowledge and similar social identity (; ). Scientific project leaders seem to evaluate citizens on the basis of the similarity of citizens’ educational and professional backgrounds to characteristics of their own social identity, in order to reduce knowledge uncertainty (; ).

Targeted calls, however, contradict one of the main characteristics of citizen science: namely open participation or unrestricted entry (), and they are not enough to guarantee quality. Whether targeted or not, citizens performing complex tasks, such as manuscript transcriptions, also need to fulfil scientific quality standards. To facilitate this, project leaders share and integrate their expert knowledge, and coordinate and assess knowledge work. The configuration of knowledge management activities depends on the number, distribution, and knowledge diversity of recruited participants (Figure 1).

Prior citizen science research has recommended providing opportunities to learn (), giving personalized feedback (), and facilitating social interactions () on the basis of the motivations of citizen participants. A knowledge perspective shows how these activities are related to knowledge, quality, and the choices made for recruiting participants. Knowledge-sharing activities in complex citizen science are similar to the way knowledge is shared in other organizational settings (; ). Open calls make a project widely known and are likely to result in a larger number of distributed participants with diverse knowledge; hence, knowledge sharing usually takes place online, through manuals and links to extra information sources. In contrast, targeted calls are more likely to lead to a smaller and more manageable group of participants with more relevant knowledge, who may be in closer physical proximity. In those cases, organizing face-to-face meetings and training sessions is more feasible (Figure 1). Moreover, expert knowledge is integrated in rules, standards, and routines, and is embedded in technology (; ; ). But, because not all scientific knowledge is unambiguous, project leaders also share knowledge through interpersonal communication () such as meetings, trainings, and online forums. It seems that knowledge sharing in citizen science requires the combination of first-generation (i.e., manuals and standard procedures) and second-generation (i.e., learning within the community of participants through training and meetings) knowledge-sharing practices ().

To coordinate a large number of participants, project leaders are likely to organize tasks in a collaborative manner, through group discussions or task rotation (Figure 1). Recent research shows that collaborative transcription methods result in better quality than the aggregated majority of multiple individual transcriptions (). A collaborative approach is in line with concepts such as the wisdom of crowds and Linus’ law (; ), by which quality improves as more people go over the same text. Rotating tasks or discussing in groups can be seen as different ways of organizing community revision (), each depending on the proximity of participants and the technology used. If people are geographically close, they can work in groups. If technology affords versioning, tasks are easier to rotate. This confirms prior research on distributed work, as the coordination and performance of citizen science tasks depend on the type of tasks and their dependencies (), as well as on the distribution of participants and the affordances of technology (). Coordination through rotation and discussion in groups is also intertwined with the assessment of contributions, with some form of feedback depending on the possibilities of technology and the way tasks are organized ().

Finally, prior citizen science literature indicates the importance of monitoring and evaluating citizen science projects (), but it does not discuss how the quality of outcomes is assessed. Our study shows different ways by which project leaders manage the assessment and correction of contributions. Knowledge assessment is performed differently depending on the field, profession, and task at hand (). In complex citizen science, assessment approaches are influenced by the number of participants and the extent to which project leaders are aware of citizens’ level of expertise (Figure 1). If the number of participants is small, project leaders tend to rely on professional-expert reviews to assess contributions. Because professional reviews do not scale well (; ), projects with a large number of citizen participants are likely to use peer(-expert) reviews, as long as project leaders know participants’ level of expertise. Multiple reviews seem to be the common way by which transcriptions (as outcomes of citizen science) are assessed (), which fits the interpretative nature of humanities fields such as literary studies (), in which various views must agree on an outcome.

This study expands earlier citizen science frameworks () by examining in detail the different configurations of activities that project leaders can adopt to manage knowledge flows and ensure quality. First, project leaders should be aware of the consequences of choosing between an open and a targeted call. Open calls are likely to lead to a greater number of diverse participants, which makes task coordination () and quality assessment () challenging, while targeted calls may result in more manageable projects but with a slower completion pace. Second, we show how knowledge management practices play a role in ensuring quality contributions from voluntary citizens. These practices might change, however, when new technologies are integrated into projects. For instance, in the digital humanities, the application of machine learning algorithms, such as Handwritten Text Recognition software (e.g., TRANSKRIBUS), reduces the complexity of transcriptions and therefore modifies the types of tasks that citizens perform and how these are coordinated (; ). The assessment of contributions might also change over time as natural-language algorithms are used, for example in text-similarity software, to compare multiple citizen contributions (e.g., the Mutual Muses project by the Getty).
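
As a minimal sketch of the kind of comparison such text-similarity tooling performs, the snippet below scores the agreement between two independent transcriptions of the same sentence using Python’s standard difflib; the character-based ratio is our illustrative stand-in, and production tools would likely use more sophisticated, language-aware measures:

```python
import difflib

def transcription_similarity(text_a: str, text_b: str) -> float:
    """Return a similarity score between 0 and 1 for two transcriptions."""
    return difflib.SequenceMatcher(a=text_a.lower(), b=text_b.lower()).ratio()

# Two independent transcriptions of the same (invented) sentence.
first = "We sailed from Texel on the tenth of May in fair weather."
second = "We sayled from Texel on the tenth of May in faire weather."
print(round(transcription_similarity(first, second), 2))  # close to 1.0
```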

Conclusion

Knowledge-management practices contribute to addressing quality issues in citizen science. Project leaders access and harness the knowledge of citizen volunteers by applying multiple knowledge-management activities that facilitate the performance of complex tasks and ensure quality outcomes. The way knowledge is accessed seems to lay the foundations for the different configurations of activities aimed at ensuring quality. These configurations are also influenced by citizens’ proximity, the characteristics of knowledge, the affordances of technology, and the extent to which project leaders are aware of citizens’ skills.

Though the depth and detail of this study are limited by its scope of five projects, the findings provide rich detail about the different ways in which knowledge flows and quality are managed in complex citizen science. Since the focus of this study has been on the activities adopted by project leaders, future research could examine the interactions among citizen volunteers to gain a deeper understanding of knowledge sharing in citizen science.

Finally, our study has not included any quantitative measures to assess task performance and quality. Different knowledge-management configurations might require different time investments and coordination efforts, aspects which are sometimes underestimated (), or could result in different levels of quality. Future research could quantitatively measure the duration of projects and the quality of crowd contributions over time, to assess the efficiency and effectiveness of different knowledge-management practices and compare them across projects. Moreover, tracking the quality of citizens’ contributions over time could provide information about the learning effect that occurs when people contribute to a project for an extended period of time.

Data Accessibility Statement

Given the qualitative nature of the research and to maintain anonymity of the participants, no data from the interviews, observations, and documents is made available, other than the quotes included in the article.