Introduction

Citizen science usually refers to the voluntary participation of citizens in different phases of the scientific process, often data collection or analysis, of projects run by scientists (). In recent years, citizen science has enjoyed growth and popularity owing to advancements in web-based and mobile technology (). As a result, beyond the offline, field-based projects that take place in the physical world, there are now two additional settings for citizen science. Blended settings are mainly offline but require the use of technology, mainly for data collection; for example, Garden Wildlife Health (https://www.gardenwildlifehealth.org) invites citizens, and in particular bird watchers, to report dead or diseased wildlife in their gardens through an online survey tool. Virtual settings are exclusively online; for example, Old Weather (https://www.oldweather.org) involves volunteers in transcribing weather observations recorded in US ships’ logbooks dating from the mid-nineteenth century.

The aims of this paper are to identify empirical studies that report on learning outcomes in online citizen science, and to review the methods they used to investigate learning. In this section, we provide a brief overview of learning outcomes encountered in mainly field-based citizen science programmes, conceptualise learning in online environments, and present the rationale and research questions of this paper. In the Literature Search Process section, we elaborate on the methodology we have used for collecting and critically assessing the studies under review. In the Results and Discussion sections, we present, visualise, and reflect on the review findings. The last section presents conclusions from this study and points to future directions.

Learning in field-based citizen science programmes

Empirical studies in field-based programmes have highlighted the potential of citizen science to increase scientific literacy, promote knowledge, and deepen the understanding of scientific concepts and processes (e.g., ; ; ; ; ; ; ). More specifically, a notable study by Bonney et al. () reviewed ten citizen science projects and concluded that there were impacts on participants’ scientific knowledge, ranging from increased understanding of the scientific process to project-specific knowledge about birds. Similarly, empirical studies in school-based settings have reported positive impacts on science learning (e.g., ).

In an overview study, Phillips et al. () examined project web pages and surveyed citizen science practitioners; they reported that studies tend to measure knowledge of science content and process, participant interest in science content and process, behaviour changes, attitudes towards science, and science inquiry skills. However, existing empirical research has not yet systematically analysed the learning impact of participation in online citizen science projects. An online citizen science project enables different forms of participation than a field-based project, and thus likely results in different forms of learning; the project design may hinder or facilitate specific interactions (with, for example, the science content, other citizens, or scientists) that may determine certain learning processes and outcomes.

Conceptualisation of online learning

Online learning is a form of distance, web-based learning with synchronous and asynchronous components such as real-time interaction with peers and teachers, participation in virtual classes, and the ability to study anytime and anywhere. It may refer to either formal education activities, such as free or paid online courses hosted on a learning management system or virtual learning environment, or informal learning experiences, such as participation in massive open online platforms (e.g., MOOCs), social networks (e.g., ), online museums (), and game-based communities (e.g., ).

In this paper, we are interested in informal learning because informal learning experiences are often designed without an explicit learning or curriculum objective, because this type of learning is rather random, spontaneous, and hard to measure, and because it is less likely to lead to any form of recognition (e.g., ). Learners are viewed as self-directed individuals driven by their own personal interests and as individuals who make sense of the world through an inquiry approach to learning—that is, through manipulating, testing, observing, and questioning (e.g., ; ). Facilitation is a significant aspect of online learning and one that can transform digital environments into learning spaces (). The affordances of online technologies influence how learners interact with each other and with the content, yet these affordances alone are often not adequate for promoting shared dialogue, group learning, and ongoing interactions. The presence of individuals who monitor and facilitate interactions has been shown to motivate participation in online citizen science communities (), and in formal settings, facilitators have led to enhanced learning outcomes (e.g., ; ). Facilitation is often related to scaffolding, which refers to how learners are supported in informal learning conditions. Scaffolding can lead to deeper cognitive gains, yet overformalisation, or highly scaffolded conditions (through, for example, the use of response sheets), may restrict the informal participation behaviours that take place in online citizen science settings, including experimentation and questioning (). One of the challenges of online learning is to offer high-quality learning experiences that are comparable to face-to-face or classroom-based education (e.g., ).

Rationale

A systematic review of the learning impact of participation in online citizen science is timely for a number of reasons. The use of technology has not only escalated participation (), but has also enabled some projects to take place entirely online, allowing geographically dispersed people to take part. This increasing participation raises the need to understand whether and what citizens learn from engaging with citizen science projects. Examining the impact on volunteers’ learning can help project designers improve their programmes and cater to citizens’ learning needs. A better understanding of learning impacts can support the educational merit of participation in online citizen science, thereby informing existing approaches and initiatives that aim to engage people with science. Gaining STEM skills is considered both a challenge and an asset in the US and in Europe (; ), and participation in citizen science projects may help to address this challenge. School and higher education programmes may act as mediators of STEM skill cultivation.

Moreover, there is evidence that learning stimulates intrinsic motivation to participate, which is positively associated with contribution quality () and encourages the loyal and sustained participation of engaged members (). Self-directed learning is important for lifelong learning (), but sustaining participation in such communities is also a challenge (). Understanding and supporting learning within online citizen science projects may help sustain participation in such flexible and loosely controlled participatory environments.

Furthermore, flexible participation and the lack of stability and of a physical space make it difficult to form a community of practice among online citizen science members. Although communication can be challenging, interaction with peers and experts is thought to be one of the main factors that supports learning in citizen science ().

Additionally, project scientists usually have to serve as teachers, offering learning content and instructions, although they may not be trained for this role (). Identifying what people actually learn and where the communication gaps lie may help train scientists to deliver more suitable learning content, through better design inspired by the more mature research undertaken in the context of online learning environments.

Finally, detecting areas of learning in online citizen science can contribute to setting up a common framework for evaluating the impact of different online citizen science projects. Assessing the educational potential of citizen science has been one of the latest requests from funding agencies. An example of an effort to assess learning in online citizen science is the Informal Learning in Citizen Science (ILICS) model (), which suggests a range of potential learning outcomes based on empirical research within the Citizen Cyberlab project. Furthermore, the US National Science Foundation (NSF) has funded the Learning and Environmental Science Agency Research Network for Citizen Science (LEARN CitSci) programme (http://bit.ly/2A0zxNo), stressing the significance of understanding and effectively supporting citizens’ learning across settings. It is anticipated that the evidence-based findings of this review will serve as a channel for stakeholders to advance existing learning evaluation frameworks and shape future directions in researching online citizen science.

Aim and research objectives

The aim of this paper is, through a systematic review of resources in education-, technology-, and citizen science–related databases, to identify and critically analyse the learning impact of citizens’ participation in online citizen science projects, and to examine how this has been explored in recent literature. This review can benefit diverse stakeholders, including citizen science designers and researchers, educators (of science and other disciplines), parents, and policymakers, by providing insights as to whether participating in online citizen science has an impact on learning, and how this impact can be documented and evaluated. This work aims to produce an in-depth account of learning outcomes in online citizen science alongside the methods and instruments used for capturing these outcomes.

This systematic review aims to collect evidence about whether participation in online citizen science programmes impacts learning and, if so, what that impact is. The following research questions (RQs) will be addressed:

  • RQ 1: What methods/instruments have been used to capture learning from citizens’ participation in online citizen science programmes?
  • RQ 2: What is the learning impact of citizens’ participation in online citizen science programmes?

Literature Search Process

In May 2019, we undertook an extensive automated search of the electronic databases Google Scholar, Web of Science, ERIC, Wiley Online Library, ScienceDirect, the European Citizen Science Association (ECSA) collection of citizen science publications (http://bit.ly/2zLRpsq), the Zooniverse publication database (http://bit.ly/2ijUuZf), the Citizen Science: Theory and Practice journal, and The Open University library search engine. Two different sets of keywords were used to extract relevant resources: online (or virtual) citizen science, and learning (or science learning or scientific literacy). No chronological restriction was applied, and a Boolean logic search or individual keyword combination search was used, as allowed by each database. The search returned 75 unique results; these were manually checked against a set of inclusion and exclusion criteria (see the section Inclusion and exclusion criteria).
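
To illustrate the combination logic described above, the following Python sketch generates both the Boolean query and the individual keyword pairs from the two keyword sets. The term lists are taken from the paragraph above; the exact query syntax is an assumption for illustration, since it varied with what each database allowed.

```python
from itertools import product

# The two keyword sets described in the text; quoting style is illustrative.
community_terms = ['"online citizen science"', '"virtual citizen science"']
learning_terms = ['"learning"', '"science learning"', '"scientific literacy"']

# Boolean form, for databases that support full Boolean logic
boolean_query = ("(" + " OR ".join(community_terms) + ") AND ("
                 + " OR ".join(learning_terms) + ")")
print(boolean_query)

# Individual keyword combinations, for databases that do not
for community, learning in product(community_terms, learning_terms):
    print(f"{community} AND {learning}")
```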

Inclusion and exclusion criteria

A set of inclusion criteria ensured that only literature relevant to the research objectives was included in the analysis. Studies were required to:

  • report on online (or virtual) citizen science communities or projects,
  • examine learning impact (learning in general, science learning, or scientific literacy),
  • describe empirical research, and
  • be written in English.

Examples of studies excluded from the analysis were those that reported on the motivations for participating in online citizen science projects (e.g., ), those that offered perspectives without empirical data (e.g., ), and those that measured levels of engagement solely as evidence of learning (e.g., ).

Assessing the literature resources

A Mendeley shared folder was created with all the results from the database search. The authors read the abstracts and skimmed the full texts of the resources to identify whether studies met the inclusion and exclusion criteria. Ten items met the criteria. In a shared Excel sheet, the following information was recorded: the year of publication, author name/s, number of research participants, research instruments/methods used for data collection, and findings about learning. Studies were also categorised in terms of whether they were journal or conference items and whether they adopted a top-down or a bottom-up approach. Top-down approaches propose types of learning outcomes from online citizen science and suggest ways to investigate whether these outcomes are present in particular projects. Bottom-up studies, rather than making use of existing frameworks, focus on self-reports from citizen scientists, or on close observation of the participation and engagement of volunteers, to identify evidence of learning.
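
The record structure of such a coding sheet can be sketched as a simple data structure. The Python sketch below is hypothetical: the field names mirror the categories listed above rather than the authors’ actual column headers, and the example values are drawn from a study reported later in this review.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical schema mirroring the shared Excel sheet described above
@dataclass
class ReviewedStudy:
    year: int
    authors: str
    venue: str                 # "journal" or "conference"
    approach: str              # "top-down" or "bottom-up"
    n_participants: int
    instruments: List[str] = field(default_factory=list)
    learning_findings: List[str] = field(default_factory=list)

# Example entry populated with values reported later in this review
example = ReviewedStudy(
    year=2013,
    authors="Prather et al.",
    venue="journal",
    approach="top-down",
    n_participants=160,
    instruments=["Zooniverse Astronomy Concept Survey (ZACS), pre/post"],
    learning_findings=["Increased knowledge associated with completing more tasks"],
)
print(example.authors, example.n_participants)
```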

Results

This section articulates the methods and instruments that have been used to capture learning from citizens’ participation in online citizen science programmes and examines the resulting learning impact.

Methods and instruments (RQ1)

Methods and instruments that have been used to explore the effect on citizens’ learning (RQ1) include instruments for measuring scientific attitude and epistemological beliefs (), scales for measuring the nature of scientific knowledge (), citizen surveys to capture self-reported learning (; ; ; ; ; ; ), interviews with scientists to capture their perspectives on citizen learning (; ), pre/post surveys to measure conceptual knowledge and reasoning abilities (), questionnaires with right/wrong questions regarding the project (; ), visual science quizzes for capturing topic-specific and other science knowledge (), multi-level frameworks to map the types of learning captured in interviews (), and log data and forum discussion analysis for checking the correctness of contributed data (; ) and for analysing possible inquiry interactions ().

More specifically, Price and Lee () administered a scientific attitude instrument and the Nature of Scientific Knowledge Scale (NSKS) to measure participants’ scientific attitudes and epistemological beliefs in the Citizen Sky project, analysed pre- and post-test data collected from 333 participants, and conducted nine interviews; the mean age of the Citizen Sky participants was 41 years, with 78% male and 19% female participation. Prather et al. () collected and analysed responses from 160 participants through an assessment instrument developed specifically to measure the conceptual knowledge and reasoning abilities of Galaxy Zoo participants; the mean age of participants, in a previous Galaxy Zoo study, was 43 years, with 81.3% male participation (). Kloetzer et al. () conducted 32 semi-structured exploratory interviews with participants from the Old Weather, BOINC (Berkeley Open Infrastructure for Network Computing), and Eyewire projects, and identified learning dimensions in online citizen science projects. Jennett et al. () extended this work with 39 interviews with 28 participants and eleven researchers from the BOINC, Old Weather, Eyewire, Transcribe Bentham, Bat Detective, EveryAware, and KRAG projects; the participants’ gender was not surveyed, and it is mentioned only that people from a very diverse age range participated in the projects. Scanlon, Woods, and Clow () analysed the learning progress of a sample of 407 users in iSpot by examining their first 50 observations; the participants’ age and gender were not surveyed. Mugar et al. () explored the learning experience of newcomers in Planet Hunters, drawing on 21 interviews and the log data analysis of nine participants; the participants’ age and gender were not surveyed. Land-Zandstra et al. () examined the understanding of the project, attitudes towards science, and the perceived learning impacts of 1,123 participants in the iSPEX project via Likert scales, project questions, and Boolean survey questions; the average age of the respondents was 51 years, and the majority were male (71%). Kloetzer, Schneider, and da Costa () explored learning outcomes in volunteer computing projects and observed forum interactions in the BOINC Alliance Francophone community. They interviewed ten participants, analysed forum logs, and received responses to their ILICS survey from 147 members. Masters et al. () collected 1,921 responses from Galaxy Zoo, Planet Hunters, Penguin Watch, Seafloor Explorer, and Snapshot Serengeti participants through a survey examining scientific (general and project-specific) knowledge via visual science quizzes and self-reported science learning; the age of participants was not surveyed, and 56% of the participants were male. Finally, Aristeidou, Scanlon, and Sharples () explored the types of learning in the nQuire community by examining self-reported learning responses and by analysing the log data of 125 participants; the participants’ age and gender were not surveyed.

Learning impact (RQ2)

The learning impact from participating in online citizen science projects has emerged from both studies that were examining specific aspects of learning (top-down), and exploratory studies collecting evidence about various forms of learning (bottom-up). The main learning categories were: attitudes towards science (; ; ; ; ), understanding of the nature of science (; ; ; ; ; ), topic-specific knowledge (; ; ; ; ; ; ; ; ), science knowledge (; ; ; ), and generic knowledge (; ; ; ).

Attitudes towards science

Five studies (n = 5) examined the effects of participating in online citizen science projects on attitudes towards science (i.e., ; ; ; ; ). In two of the studies (; ), Likert scale instruments were designed to explore attitudes towards science, and three of the studies (; ; ) focused on self-reported attitudes captured through interviews.

Participants responding to a Likert scale during their participation in Land-Zandstra et al. () reported limited involvement with science in their daily lives (low scores for reading science magazines, attending lectures/events, and following science news), although they agreed that science can have a positive impact on their lives. Changes in attitudes among adults require many interventions over longer periods of time (), and this might be one explanation for not finding any significant changes in the participants’ attitudes. However, Likert scale pre- and post-tests () detected a change in scientific attitudes that the authors believe derived from the reinforcement of existing positive attitudes towards science, as a citizen science project alone is not likely to effect significant change in attitudes towards science. One unexpected finding of this study was that participants reported a lowered self-perception of applying scientific thinking in daily life, owing to the realisation that they understand only a small fraction of the scientific field. In addition, participants considered their engagement with science a fun way to spend their free time (Aristeidou, Scanlon, and Sharples ). This attitude contradicts the findings of , which examined public attitudes to science reports and found that the public views science and scientists as serious. Finally, in interviews conducted by Mugar et al. (), participants contributed to research following their own individual or collaborative routes and showing agent-centred presence. Similarly, in Kloetzer, Schneider, and da Costa (), interviews showed that participants volunteered to further contribute to the scientific topic in which they had engaged as citizen scientists by promoting the project and by participating in local action groups. This sense of influence over the project structure can be key to scientific literacy, as it can lead to citizen science becoming an important learning process ().
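
As an illustration of the pre/post design mentioned above, the following sketch compares paired pre- and post-participation attitude scores. The data are simulated for demonstration only, and the choice of a Wilcoxon signed-rank test is an assumption (a common option for ordinal, Likert-derived scores), not the analysis used by any of the reviewed studies.

```python
import numpy as np
from scipy import stats

# Simulated pre/post attitude scores (1-5 Likert means) for 100
# participants -- demonstration data only, not from any reviewed study.
rng = np.random.default_rng(seed=0)
pre = rng.normal(loc=3.8, scale=0.5, size=100).clip(1, 5)
post = (pre + rng.normal(loc=0.15, scale=0.3, size=100)).clip(1, 5)

# Paired pre/post comparison; Wilcoxon signed-rank suits ordinal,
# possibly non-normal Likert-derived scores.
stat, p = stats.wilcoxon(pre, post)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.4f}")
print(f"Mean attitude change = {np.mean(post - pre):+.3f}")
```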

Nature of science

Five studies (n = 5) (i.e., ; ; Masters et al. 2016; ; ) examined participants’ beliefs about the nature of science, and in particular their experience of scientific research and learning about science. One study (i.e., ) designed an instrument for measuring epistemological beliefs about the nature of science, on the basis of the Nature of Scientific Knowledge Scale (NSKS) (); another study (i.e., Masters et al. 2016) used a self-reported quantitative survey including measures of science experiences; and three studies (i.e., ; ; ) focused on self-reported interview responses.

Epistemological beliefs about the nature of science significantly increased in Price and Lee’s () NSKS results, with participants debating in particular whether creativity exists in the scientific process and lowering their beliefs that the goal of science is to create universal laws. Similarly, in a quantitative survey by Masters et al. (2016), participants agreed that they had gained a new perspective on scientific research. Evidence also showed that participants in some online citizen science projects (a) gained experience in how to approach a scientific investigation (Masters et al. 2016; ); (b) realised that science involves the use of rigorous procedures and controlled protocols (); (c) understood that science takes time to progress (); (d) understood the importance of scientific debate (); (e) realised that failure is a normal risk and can contribute to improvements in science (); and (f) became more familiar with the process of peer review and revisions for the production and publication of scientific papers, and in some cases were co-authors of scientific publications (; ). These findings differ from published studies of offline or field-based citizen science, in which participants demonstrated little change in their understanding of scientific processes (; Cronje et al. 2010; ) and difficulties in describing what scientific research is (). This difference may be explained by the different settings (for example, online visualisations could contribute to a better understanding of the scientific process) and by the type of participants (for example, online and offline participants could have different ages or motivations for participating).

Topic-specific knowledge

All ten studies (n = 10) (i.e., ; ; ; ; ; ; ; ; ; ) examined how participation in online citizen science communities affected conceptual knowledge and skills in science, investigating whether participation facilitated topic-specific knowledge and skills related to the particular scientific field.

In particular, assessment survey findings from Prather et al. () suggested that many of the participants did not possess a comprehensive understanding of a particular project-specific concept (in this case, the relationship between a galaxy’s morphology and other properties). The authors also compared participants’ achievements on the assessment surveys to their level of participation and concluded that participants who completed more tasks appeared to gain greater knowledge than those who completed fewer tasks. This finding suggests that increased participation in a project is likely to result in greater learning gains. Furthermore, the authors reported difficulties in obtaining a significant and representative sample of completed pre/post assessment items to measure the conceptual knowledge of participants, as citizen involvement changes greatly over time. Similarly, Masters et al. (2016) compared project-specific knowledge, assessed via the quizzes, to the participants’ level of engagement and found a positive association between the two. However, the association depended on the topic of the project (e.g., performance in the astronomy-related project showed more improvement than performance in other projects) and on the project’s level of public engagement (e.g., blog and Twitter posts by the project managers).

Exploratory studies examined topic-specific knowledge by analysing self-reported learning and observing data and forum logs. Participants were asked what they learned from their participation in the projects; most of the participants’ responses in Aristeidou, Scanlon, and Sharples () mentioned gains in topic-specific knowledge as a result of their interactions with the material used in the project, with the scientists, with other participants, and with internet searches. Kloetzer et al. () and Jennett et al. () also found that participants used internet searches to augment their knowledge of the scientific focus of the projects. Other external resources reported to contribute to participants’ topic-specific knowledge were talks given by scientists, conferences and meetings, blogs, and scientific papers (). Some participants reported that they improved their topic-related knowledge by engaging in project administration tasks, such as creating glossaries and presentations and sharing information about the project. In Price and Lee (), most of the interviewees stated that their knowledge had increased or remained unaffected, and nobody reported problems while learning. Forum and log data analysis in Aristeidou, Scanlon, and Sharples () showed that in some cases participants improved their topic-specific vocabulary, using words related to the scientific field and the particular investigations. In addition, there is evidence that they overcame some of the misconceptions they held about everyday matters related to the scientific topic (e.g., that there is no extreme weather in southern countries). Finally, log data analysis of participants’ identifications in Scanlon, Woods, and Clow () associated the level of participation with the number of correct identifications: as participants progress, the percentage of correctly identified observations increases. A single study () found that self-reported learning impact was not very high, with participants responding that they learned “somewhat” about the scientific topic (i.e., the health and environmental impact of aerosols).
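
A sketch of the kind of log data analysis described above, under an assumed log format (one row per identification, in the order a participant made it, with a correctness flag), could compute identification accuracy by stage of participation. The column names and values here are hypothetical, not from any project’s actual logs.

```python
import pandas as pd

# Hypothetical log: one row per identification per user, in order made,
# with a flag for whether it was later confirmed correct.
log = pd.DataFrame({
    "user_id": [1, 1, 1, 1, 2, 2, 2, 2],
    "seq":     [1, 2, 3, 4, 1, 2, 3, 4],   # order within each user's history
    "correct": [0, 1, 1, 1, 1, 0, 1, 1],
})

# Percentage of correct identifications at each stage of participation,
# pooled across users; rising values would echo the pattern reported
# for participants' first 50 observations.
accuracy_by_stage = log.groupby("seq")["correct"].mean() * 100
print(accuracy_by_stage)
```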

Beyond the science knowledge and skills gained on the topic of the project, some studies also examined whether participants gained knowledge about the project itself, in particular the concept of the project, how it works, and how it is supported by software and scientists. For example, in distributed computing projects, participants who were involved passively in the project understood the concept of distributed computing, whereas more active participants were aware of the more technical concepts and skills needed for the project (). In other online citizen science communities, participants discovered the terms and rules of the game or the software they engaged with in order to participate, such as the interface rules, available options, buttons and commands, and the credit/reputation system (; ; ). However, a more direct assessment of participants’ understanding of the project (e.g., whether their measurements give information about the exposure of people to aerosols at their location) in Land-Zandstra et al. () resulted in low scores. Further, the assessment indicated that the more participants understood about the dynamics of the project, the lower their expectations were that it could impact policy.

Science knowledge

Four studies (n = 4) (i.e., ; ; ; ) examined effects on general science knowledge and skills. One study (i.e., ) designed and used visual science quizzes, whereas the other three (exploratory) studies performed content/log data and forum interaction analysis.

Masters et al. () explored the relationship between science knowledge and level of participation in a number of Zooniverse projects using visual science quizzes, and concluded that there was no evidence that the two are linked. However, participants who had more positive attitudes towards science also increased their science knowledge. The analysis also showed differences between the science knowledge gained in projects on different science topics; participants in astronomy-related projects scored better than those in ecology-related ones. Observation of participants’ contributions, via log data and forum analysis, has also revealed that getting involved in scientific investigations facilitated the improvement of inquiry skills such as data annotation, argumentation, critique, and reflection (); pattern recognition and identification (; ; ); and question and answer formation, initiation of and effective contribution to discussions, and data comprehension (; ).
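
Testing for an association between quiz scores and level of participation, in the spirit of the analysis above, can be sketched as a rank correlation. The figures below are invented for illustration and do not reproduce the study’s data, and the choice of Spearman’s rho is an assumption, not the statistic the authors report using.

```python
from scipy.stats import spearmanr

# Invented figures -- each pair is one participant's classification
# count and general science-quiz score (out of 10).
n_classifications = [12, 40, 85, 150, 300, 620, 900, 1500]
quiz_scores = [6, 7, 5, 8, 6, 7, 6, 8]

# Rank correlation between participation level and science knowledge;
# a rho near zero would match the reported absence of a link.
rho, p = spearmanr(n_classifications, quiz_scores)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```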

Generic knowledge

Only four studies (n = 4) (i.e., ; ; ; ) were designed with the explicit objective of examining how participation in online citizen science communities impacts knowledge and skills that are not directly related to science. All four studies performed content/log data and forum interaction analysis and focused on self-reported responses from interviews.

The main findings were linked to communication, community management, and digital literacy. Communication findings showed that language barriers prompted English-speaking participants to translate parts of the project to help non-English speakers participate, thereby improving the language skills of the latter (; ), and that participants improved their writing skills after being invited to structure their answers and responses in the forums and other interaction tools (). Community management skills have also been reported (; ; ), gained through participant operation of the online platforms, team management, forum updates, and event and competition organisation. Finally, digital literacy was one of the skills necessary for participating in citizen science projects online. Participants were found to have opportunities to learn how to perform a number of tasks, such as using the software and hardware of the project and navigating the web (; ; ). Moreover, in some projects, participants had the opportunity to gain more advanced skills such as programming and content creation (; ).

Discussion

Ten studies reported empirical examinations of learning in online citizen science projects (see Table 1). All of the studies were published in 2013 or later, which indicates both a growing interest in online citizen science, made possible by the use of technology, and the research community’s recent interest in understanding and documenting citizens’ learning in online citizen science. The total number of participants taking part in the studies under analysis was 4,189, ranging from 21 participants in one study (interviews and log data analysis) to 1,921 in another (survey study). The studies researched learning in the following 16 online citizen science projects: Citizen Sky, Galaxy Zoo, Old Weather, BOINC, Eyewire, Transcribe Bentham, Bat Detective, EveryAware, KRAG, Planet Hunters, iSPEX, Penguin Watch, Seafloor Explorer, Snapshot Serengeti, iSpot, and nQuire. The studies were published in diverse journals and conferences: Journal of Science Communication; Human Computation; Public Understanding of Science; Astronomy Education Review; Journal of Research in Science Teaching; Educational Technology & Society; the European Research Conference of the ESREA Network on Access, Learning Careers and Identities; the American Educational Research Association (AERA) conference; and the International Conference on Communities and Technologies.

Table 1

Online citizen science projects: methods and findings.

Each entry lists the research article and venue, the citizen science projects studied, the approach taken, the research instruments/methods, the learning impact categories explored, and a summary of reported findings on learning and scientific literacy.

Kloetzer et al. (2013) (ESREA conference)
Projects: Old Weather, BOINC, Eyewire. Approach: Bottom-up.
Instruments/methods: Self-reported learning (interviews).
Categories explored: Nature of science; Topic-specific knowledge; Science knowledge; Generic knowledge.
Reported findings:
  • Understanding of science procedures and risks
  • Increased topic-specific knowledge through interaction with project material
  • Understanding of how topic-specific science software and tools work
  • Increased pattern recognition, identification skills, and data comprehension
  • Improved communication, digital literacy, and personal development

Jennett et al. (2016) (Journal of Science Communication)
Projects: BOINC, Old Weather, Eyewire, Transcribe Bentham, Bat Detective, EveryAware, KRAG. Approach: Top-down.
Instruments/methods: Self-reported learning (interviews); Interviews (scientists).

Prather et al. (2013) (Astronomy Education Review)
Projects: Galaxy Zoo. Approach: Top-down.
Instruments/methods: Zooniverse Astronomy Concept Survey (ZACS) (pre/post).
Categories explored: Topic-specific knowledge.
Reported findings:
  • Lack of astronomy knowledge and skills in particular topic-specific concepts
  • Increased knowledge associated with completing more tasks

Price and Lee (2013) (Journal of Research in Science Teaching)
Projects: Citizen Sky. Approach: Top-down.
Instruments/methods: Scientific attitude instrument (pre/post); Nature of Scientific Knowledge Scale (NSKS) instrument for epistemological beliefs (pre/post); Self-reported learning (interviews).
Categories explored: Attitudes towards science; Nature of science; Topic-specific knowledge.
Reported findings:
  • Positive change in scientific attitude
  • Significantly increased epistemological beliefs about the nature of science
  • Increased or unaffected astronomy knowledge and skills

Scanlon, Woods, and Clow (2014) (Educational Technology & Society)
Projects: iSpot. Approach: Top-down.
Instruments/methods: User identification check (right/wrong).
Categories explored: Topic-specific knowledge.
Reported findings:
  • The percentage of correctly identified observations increases with participants’ progress

Mugar et al. (2015) (International Conference on Communities and Technologies)
Projects: Planet Hunters. Approach: Top-down.
Instruments/methods: Interviews (citizens); Log data analysis.
Categories explored: Attitudes towards science.
Reported findings:
  • Agent-centred presence, with citizens contributing to research independently (individually or collaboratively)

Land-Zandstra et al. (2016) (Public Understanding of Science)
Projects: iSPEX. Approach: Top-down.
Instruments/methods: Project questions (right/wrong); Self-reported learning; Likert scales.
Categories explored: Attitudes towards science; Topic-specific knowledge.
Reported findings:
  • Participants had limited involvement with science in their daily lives
  • Low self-reported learning, with participants learning “somewhat” about the specific topic

Kloetzer, Schneider, and da Costa (2016) (Human Computation)
Projects: BOINC AF. Approach: Bottom-up.
Instruments/methods: Self-reported learning (interviews); Informal Learning in Citizen Science (ILICS) survey; Forum interactions analysis.
Categories explored: Attitudes towards science; Nature of science; Topic-specific knowledge; Generic knowledge.
Reported findings:
  • Volunteers further contributing to the science topic by promoting the project and participating in local action groups
  • Citizen participation in peer review and revisions
  • Increased topic-specific knowledge through the project material and researchers
  • Improved communication, digital literacy, and personal development

Masters et al. (2016) (Journal of Science Communication)
Projects: Galaxy Zoo, Planet Hunters, Penguin Watch, Seafloor Explorer, Snapshot Serengeti. Approach: Top-down.
Instruments/methods: Self-reported learning (survey); Visual science quizzes.
Categories explored: Nature of science; Topic-specific knowledge; Science knowledge.
Reported findings:
  • Experience gained on how to approach scientific investigations
  • Positive association between level of engagement and topic-specific knowledge
  • No association between level of participation and science knowledge

Aristeidou, Scanlon, and Sharples (2017b) (AERA conference)
Projects: nQuire. Approach: Bottom-up.
Instruments/methods: Self-reported learning (interviews & questionnaire); Log data analysis; Interviews (scientists).
Categories explored: Attitudes towards science; Nature of science; Topic-specific knowledge; Science knowledge; Generic knowledge.
Reported findings:
  • Engagement with science considered a fun way for participants to spend their time
  • Experience gained on how to approach scientific investigations
  • Increased topic-specific knowledge and language through the project material and researchers
  • Increased data annotation, identification, and argumentation skills
  • Improved communication and digital literacy

This review aimed to address two research questions. RQ1 asked what methods and instruments have previously been used to explore the learning effects of citizens’ participation in online citizen science projects. There is a tendency to use self-reported methods to understand how participation in online citizen science affects learning. The majority of studies (n = 8) used self-reported instruments, including interviews and questionnaires, to explore learning (; ; ; ; ; ; ; ), scientific literacy gains, and scientific attitudes and beliefs (). Other methods were content analysis of contributed data and forum posts (; ; ); correctness checks of contributed data (); and science and project-specific quizzes and tests (; ; ).

There are, however, few studies that capture participants’ existing knowledge and determine whether it remained stable or improved after participation in citizen science. It is not surprising that only two studies (; ) used pre/post data, as pre/post assessment in such informal contexts, with flexible participation and free-choice learning, is reported to be nearly impossible (; ). This approach could provide more information on the learning background citizens had before they joined a project. Alternative self-reporting methods for comparing learning before and after participation could be retrospective pre-surveys or citizens’ daily diaries, in which participants would reflect on and assess their new knowledge. These methods may also prevent the stress that taking a pre-test can cause.

Six studies in total combined more than one method in their design. Mixed-methods studies could offer a more appropriate way of evaluating learning impact, as combining self-reports with direct analysis of actual participation (e.g., through science knowledge tests and analysis of log data) can enhance the robustness of findings.

A final reflection on the methods used to explore learning in online citizen science involves access to digitally enabled instruments and techniques, such as learning analytics of log data files (with consent). In contrast to field-based observation techniques, the online setting allows researchers to obtain a full picture of participant learning and progress, rather than a single instance of it.

RQ2 aimed to identify evidence of learning due to participation in online citizen science. The studies included in this review examined or uncovered changes in attitudes towards science, in understanding of the nature of science, in topic-specific knowledge, in science knowledge, and in generic knowledge. All of the studies (n = 10) examined whether participation in online citizen science improved participants’ knowledge and skills around a specific topic, whereas only four studies reported on generic knowledge and skills gains such as communication, community management, and digital literacy. Equal numbers of studies (n = 5) reported changes in attitudes towards science and in understanding of the nature of science, and four studies (n = 4) reported improvement of science knowledge and skills. Most of the learning areas examined in the reviewed studies were present in the ILICS model (), apart from attitudes to science.

An online citizen science project is characterised by technological affordances, such as scaffolding mechanisms, that may facilitate learning outcomes (). For example, learning outcomes encountered in this review have shown that a well-designed interface visualising research phases and methods may affect participants’ understanding of scientific processes and of how the project works, and may contribute to data-annotation and pattern-recognition skills. Additionally, online participant interactions, with each other and with the content, may have added to vocabulary progress ().

The flexible nature of online citizen science as a learning environment may make measuring pre/post learning outcomes a challenge (). At the same time, online citizen science enables volunteers to experiment and to gain random and spontaneous generic knowledge such as communication skills and digital literacy. Interaction with software platforms and digital instruments is inevitable when participating in online citizen science; this provides participants with the opportunity to improve their information and communication technology (ICT) skills (i.e., ; ; ; ), and concurrently brings them closer to the instruments and approaches that scientists actually use. However, the lack of face-to-face experiences and the self-regulating nature of online informal learning environments (; ) that characterise online citizen science projects may raise challenges for volunteers, such as undermining their self-confidence or preventing them from contributing.

None of the studies in this review examined participation in online citizen science in formal education, used experimental designs, or reported impact on participants’ identity and personal growth. Online citizen science projects implemented in classrooms would provide valuable evidence of learning in formal settings. However, the use of technology may be an obstacle to employing online citizen science in school classrooms, as technology infrastructure is sometimes inadequate and restrictive policies may be in place. Moreover, in many countries, students are forbidden from bringing their mobile devices onto school premises (e.g., ), which presents an additional challenge.

The creation of control and experimental groups, with volunteers participating in formal and informal programmes or in different online citizen science programmes, could provide more insights into determining the effect of citizen science programmes on learning (), especially when combined with qualitative accounts of data collection (e.g., observations and interviews) that can explain any observed differences between groups. However, the flexibility and the voluntary nature of online citizen science projects obstruct the control of variables such as the type and level of participation. Conducting experimental studies with control and experimental groups could alleviate the difficulty in matching pre- and post-assessment surveys in such a flexible environment. This would be easier to study in online citizen science programmes implemented in formal educational settings, or by administering parallel versions of the same online citizen science project, each exhibiting different features.

It is also possible that the lack of impact on participants’ identity, compared with the citizen science learning frameworks that propose impact on community, society, economy and the environment (e.g., ; ; ; ), can be attributed to the fact that within the reviewed studies, there were no large-scale citizen science implementations or projects focusing explicitly on long-term impact.

Conclusions

At the core of this study is the learning impact of online participation in citizen science, and the methods and instruments that have been used to investigate this impact. Overall, research that examines learning in online citizen science has used self-reported instruments including interviews and questionnaires, content analysis of contributed data and forum posts, accuracy checks of contributed data, science and project-specific quizzes, and instruments for measuring scientific attitudes and beliefs. Learning outcomes in the studies reviewed here include revised attitudes towards science, a better understanding of the nature of science, increased science knowledge, and additional topic-specific knowledge as well as generic knowledge.

This study has produced useful evidence about overall learning in online citizen science programmes. The findings and recommendations of this research contribute to general design considerations for methods and instruments, and areas of focus for evaluating learning in online citizen science projects. Understanding the learning impacts of online citizen science and the potential avenues for further exploration could facilitate the formation of consistent learning-oriented citizen science communities, with more suitable guidance by scientists and project designers, and the development of an integrated learning-evaluation framework.

Future research should focus on exploring the design of different types of online citizen science projects and should identify which specific design features can facilitate or support self-regulated learning and which can hinder it. In-depth descriptive methods, such as interviews, questionnaires, and observations, are expected to provide insights into the design affordances that promote learning and scientific literacy. Finally, further exploration of long-term effects on the identity and agency of participants may lead to designs that have an impact on community, society, the economy, and the environment.