
Review and Synthesis Papers

Online Citizen Science: A Systematic Review of Effects on Learning and Scientific Literacy

Authors:

Maria Aristeidou,

The Open University, GB

Christothea Herodotou

The Open University, GB

Abstract

Participation in online citizen science is increasingly popular, yet studies that examine the impact on participants’ learning are limited. The aims of this paper are to identify the learning impact on volunteers who participate in online citizen science projects and to explore the methods used to study the impact. The ten empirical studies examined in this systematic review report learning impacts on citizens’ attitudes towards science, on their understanding of the nature of science, on topic-specific knowledge, on science knowledge, and on generic knowledge. These impacts were measured using self-reports, content analysis of contributed data and of forum posts, accuracy checks of contributed data, science and project-specific quizzes, and instruments for measuring scientific attitudes and beliefs. The findings highlight that certain technological affordances in online citizen science projects can cultivate citizens’ knowledge and skills, and they point to unexplored areas, including the lack of experimental and long-term studies, and studies in formal education settings.

How to Cite: Aristeidou, M. and Herodotou, C., 2020. Online Citizen Science: A Systematic Review of Effects on Learning and Scientific Literacy. Citizen Science: Theory and Practice, 5(1), p.11. DOI: http://doi.org/10.5334/cstp.224
Published on 07 Apr 2020
Accepted on 11 Dec 2019
Submitted on 18 Dec 2018

Introduction

Citizen science usually refers to the voluntary participation of citizens in different phases of the scientific process, often data collection or analysis, of projects run by scientists (Bonney et al. 2009a). In recent years, citizen science has enjoyed growth and popularity owing to web-based and mobile technology advancements (Reed et al. 2013). As a result, beyond the offline, field-based projects that take place in the physical world, there are now two additional settings for citizen science. Blended settings are mainly offline but require the use of technology, mainly for data collection; for example, Garden Wildlife Health (https://www.gardenwildlifehealth.org) invites citizens, and in particular bird watchers, to report dead or diseased wildlife in their gardens through an online survey tool. Virtual settings are exclusively online; for example, Old Weather (https://www.oldweather.org) involves volunteers in transcribing weather observations recorded in US logbooks dating from the mid-nineteenth century.

The aims of this paper are to identify empirical studies that report on learning outcomes in online citizen science, and to review the methods they used to investigate learning. In this section, we briefly present an overview of learning outcomes encountered mainly in field-based citizen science programmes, conceptualise learning in online learning environments, and set out the rationale and research questions of this paper. In the Literature Search Process section, we elaborate on the methodology we used for collecting and critically assessing the studies under review. In the Results and Discussion sections, we present, visualise, and reflect on the review findings. The last section presents conclusions from this study and points to future directions.

Learning in field-based citizen science programmes

Empirical studies in field-based programmes have highlighted the potential of citizen science to increase scientific literacy, promote knowledge, and improve the understanding of scientific concepts and processes (e.g., Trumbull et al. 2000; Brossard 2005; Bonney et al. 2009b; Raddick et al. 2009; Cronje et al. 2011; Wiggins and Crowston 2011; Phillips et al. 2014). More specifically, a notable study by Bonney et al. (2009b) reviewed ten citizen science projects and concluded that there were impacts on participants’ scientific knowledge, ranging from increased understanding of the scientific process to project-specific knowledge about birds. Similarly, empirical studies in school-based settings have reported positive impacts on science learning (e.g., Perelló et al. 2017).

In an overview study, Phillips et al. (2018) examined project web pages and surveyed citizen science practitioners; they reported that studies tend to measure science content and process as well as participant interest in both science content and process, behaviour changes, attitudes towards science, and science inquiry skills. However, existing empirical research has not yet systematically analysed learning impact from participation in online citizen science projects. An online citizen science project enables different forms of participation than a field-based project and thus likely results in different forms of learning; the project design may hinder or facilitate specific interactions (with, for example, the science content, other citizens, or scientists) that may determine certain learning processes and outcomes.

Conceptualisation of online learning

Online learning is a form of distance, web-based learning with synchronous and asynchronous components such as real-time interaction with peers and teachers, participation in virtual classes, and the ability to study anytime and anywhere. It may refer either to formal education activities, such as free or paid online courses hosted on a learning management system or virtual learning environment, or to informal learning experiences, such as participation in massive open online courses (MOOCs), social networks (e.g., Greenhow and Robelia 2009), online museums (Sackey, Nguyen, and Grabill 2015), and game-based communities (e.g., Sourmelis, Ioannou, and Zaphiris 2017).

In this paper, we are interested in informal learning. Informal learning experiences are often designed without an explicit learning or curriculum objective; this type of learning is rather random, spontaneous, and hard to measure, and it is less likely to lead to any form of recognition (e.g., Malcolm, Hodkinson and Colley 2003). Learners are viewed as self-directed individuals driven by their own personal interests and as individuals who make sense of the world through an inquiry approach to learning—that is, through manipulating, testing, observing, and questioning (e.g., Bell et al. 2009; Song and Bonk 2016). Facilitation is a significant aspect of online learning and one that can transform digital environments into learning spaces (Sackey, Nguyen, and Grabill 2015). The affordances of online technologies influence how learners interact with each other and with the content, yet this is often not adequate for promoting shared dialogue, group learning, and ongoing interactions. The presence of individuals who monitor and facilitate interactions has been shown to motivate participation in online citizen science communities (Aristeidou, Scanlon, and Sharples 2015), and in formal settings, facilitators have led to enhanced learning outcomes (e.g., Dockter 2016; Herodotou et al. 2019a). Facilitation is often related to scaffolding, which refers to how learners are supported in informal learning conditions. Scaffolding can lead to deeper cognitive gains, yet overformalisation, or highly scaffolded conditions (through, for example, the use of response sheets), may restrict the informal participation behaviours that take place in online citizen science settings, including experimentation and questioning (Yoon et al. 2013). One of the challenges of online learning is to offer high-quality learning experiences that are comparable to face-to-face or classroom-based education (e.g., Davis, Gough, and Taylor 2019).

Rationale

A systematic review of the learning impact of participation in online citizen science is timely for a number of reasons. The use of technology has not only escalated participation (Reed et al. 2013), but it has also enabled some projects to take place entirely online, allowing geographically dispersed people to take part. This increasing participation raises the need to understand whether and what citizens learn from engaging with citizen science projects. Examining the impact on volunteers’ learning can help project designers improve the design of their programmes and cater to citizens’ learning needs. A better understanding of learning impacts can support the educational merit of participation in online citizen science, thereby informing existing approaches and initiatives that aim to engage people with science. Gaining STEM skills is considered both a challenge and an asset in the US and in Europe (ICF and Cedefop for the European Commission 2015; National Science Foundation 2015), and participation in citizen science projects may help to address this challenge, with school and higher education programmes acting as mediators of STEM skill cultivation.

Moreover, there is evidence that learning stimulates intrinsic motivation to participate, which is positively associated with contribution quality (Nov, Arazy, and Anderson 2014) and encourages the loyal and sustained participation of engaged members (Aristeidou, Scanlon, and Sharples 2017a). Self-directed learning is important for lifelong learning (Falk and Dierking 2010), but sustaining participation in such communities is also a challenge (Nov, Arazy, and Anderson 2011). Understanding and supporting learning within online citizen science projects may help to balance participation in such flexible and uncontrolled participatory environments.

Furthermore, the flexibility of participation and the lack of stability and of a physical space make it difficult to form a community of practice among fellow online citizen science members. Although communication can be challenging, interaction with peers and experts is thought to be one of the main factors that supports learning in citizen science (Amsha et al. 2016).

Additionally, project scientists usually have to serve as teachers, offering learning content and instructions, although they may not be trained for this role (Price and Lee 2013). Acknowledging what people actually learn and what the communication gaps are may help to train scientists to deliver more suitable lessons through better design that is inspired by more mature research undertaken in the context of online learning environments.

Finally, detecting areas of learning in online citizen science can contribute to setting up a common framework for evaluating the impact of different online citizen science projects. Assessing the educational potential of citizen science has been one of the latest requests from funding agencies. An example of an effort to assess learning in online citizen science is the Informal Learning in Citizen Science (ILICS) model (Kloetzer et al. 2013), which suggests a range of potential learning outcomes based on empirical research within the Citizen Cyberlab project. Furthermore, the US National Science Foundation (NSF) has funded the Learning and Environmental Science Agency Research Network for Citizen Science (LEARN CitSci) programme (http://bit.ly/2A0zxNo), stressing the significance of understanding and effectively supporting citizens’ learning across settings. It is anticipated that evidence-based findings of this review will work as a channel for stakeholders to advance existing learning evaluation frameworks and shape future directions in researching online citizen science.

Aim and research objectives

The aim of this paper is, through a systematic review of resources in education-, technology-, and citizen science–related databases, to identify and critically analyse the learning impact of citizens’ participation in online citizen science projects, and to examine how this has been explored in recent literature. This review can benefit diverse stakeholders, including citizen science designers and researchers, educators (of science and other disciplines), parents, and policymakers, by providing insights as to whether participating in online citizen science has an impact on learning, and how this impact can be documented and evaluated. This work aims to produce an in-depth account of learning outcomes in online citizen science alongside the methods and instruments used for capturing these outcomes.

This systematic review aims to collect evidence about whether participation in citizen science programmes impacts learning and, if so, what that impact is. The following research questions (RQs) will be addressed:

  • RQ 1: What methods/instruments have been used to capture learning from citizens’ participation in online citizen science programmes?
  • RQ 2: What is the learning impact of citizens’ participation in online citizen science programmes?

Literature Search Process

In May 2019, we undertook an extensive automated search of the electronic databases Google Scholar, Web of Science, ERIC, Wiley Online Library, Science Direct, the European Citizen Science Association (ECSA) collection of Citizen Science publications (http://bit.ly/2zLRpsq), the Zooniverse publication database (http://bit.ly/2ijUuZf), the Citizen Science: Theory and Practice journal, and The Open University library search engine. Two different sets of keywords were utilised to extract relevant resources: online (or virtual) citizen science, and learning (or science learning or scientific literacy). No chronological restriction was applied, and a Boolean logic search or individual keyword combination search was used, as allowed by each database. The search returned 75 unique results; these were manually checked against a set of inclusion and exclusion criteria (see the section Inclusion and exclusion criteria).
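The keyword combinations described above lend themselves to a simple scripted construction. As an illustrative sketch only (not part of the review protocol), and assuming a generic Boolean-capable search interface, the snippet below shows how the two reported keyword sets could be combined into a single Boolean query or into individual keyword pairs; the query format and variable names are hypothetical.

```python
# Illustrative sketch: combining the two keyword sets reported in the text
# into search queries. The Boolean syntax shown is a generic assumption,
# not the exact syntax of any of the databases listed above.
from itertools import product

citizen_science_terms = ['"online citizen science"', '"virtual citizen science"']
learning_terms = ['learning', '"science learning"', '"scientific literacy"']

# One Boolean query for databases that support OR/AND operators
boolean_query = (
    "(" + " OR ".join(citizen_science_terms) + ") AND ("
    + " OR ".join(learning_terms) + ")"
)
print(boolean_query)

# Individual keyword pairs for databases that only allow simple searches
for cs_term, learn_term in product(citizen_science_terms, learning_terms):
    print(f"{cs_term} {learn_term}")
```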

Inclusion and exclusion criteria

A set of inclusion criteria ensured that only literature relevant to the research objectives was included in the analysis. Studies were required to:

  • report on online (or virtual) citizen science communities or projects,
  • examine learning impact (learning in general, science learning, or scientific literacy),
  • describe empirical research, and
  • use English as the written language.

Examples of studies excluded from the analysis were those that reported on the motivations for participating in online citizen science projects (e.g., Curtis 2015), those that offered perspectives without empirical data (e.g., Bonney et al. 2016), and those that measured levels of engagement solely as evidence of learning (e.g., Amsha et al. 2016).

Assessing the literature resources

A Mendeley shared folder was created with all the results from the database search. The authors read the abstracts and skimmed the full texts of resources to identify whether studies met the inclusion and exclusion criteria. Ten items met the criteria. In a shared Excel sheet, the following information was recorded for each: the year of publication, author name/s, number of research participants, research instruments/methods used for data collection, and findings about learning. Studies were also categorised in terms of whether they were journal or conference items and whether they adopted a top-down or a bottom-up approach. Top-down approaches propose types of learning outcomes from online citizen science and suggest ways to investigate whether these outcomes were present or not in particular projects. Bottom-up studies, rather than making use of existing frameworks, focus on self-reports from citizen scientists, or close observation of the participation and engagement of volunteers, to identify evidence of learning.
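To make the coding scheme concrete, the minimal sketch below shows one way the recorded fields could be represented programmatically. The field names are an assumption mirroring the spreadsheet columns described above, and the example entry simply restates details of Price and Lee (2013) reported later in this review; it is not the authors’ actual coding sheet.

```python
# Illustrative sketch: a record structure mirroring the fields coded for each
# reviewed study (year, authors, venue type, approach, sample size,
# instruments, and findings). Field names are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ReviewedStudy:
    year: int
    authors: str
    venue_type: str                 # "journal" or "conference"
    approach: str                   # "top-down" or "bottom-up"
    n_participants: int
    instruments: List[str] = field(default_factory=list)
    learning_findings: str = ""

example = ReviewedStudy(
    year=2013,
    authors="Price and Lee",
    venue_type="journal",
    approach="top-down",
    n_participants=333,
    instruments=["scientific attitude instrument (pre/post)",
                 "NSKS (pre/post)", "interviews"],
    learning_findings="positive change in scientific attitudes; "
                      "increased epistemological beliefs about the nature of science",
)
print(example)
```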

Results

This section articulates the methods and instruments that have been used to capture learning from citizens’ participation in online citizen science programmes and examines the resulting learning impact.

Methods and instruments (RQ1)

Methods and instruments that have been used to explore the effect on citizens’ learning (RQ1) include instruments for measuring scientific attitudes and epistemological beliefs (Price and Lee 2013), scales for measuring beliefs about the nature of scientific knowledge (Price and Lee 2013), citizen surveys to capture self-reported learning (Kloetzer et al. 2013; Price and Lee 2013; Mugar et al. 2015; Jennett et al. 2016; Land-Zandstra et al. 2016; Masters et al. 2016; Aristeidou, Scanlon, and Sharples 2017b), interviews with scientists to capture their perspectives on citizen learning (Jennett et al. 2016; Aristeidou, Scanlon, and Sharples 2017b), pre/post surveys to measure conceptual knowledge and reasoning abilities (Prather et al. 2013), questionnaires with right/false questions about the project (Scanlon, Woods, and Clow 2014; Land-Zandstra et al. 2016), visual science quizzes for capturing topic-specific and other science knowledge (Masters et al. 2016), multi-level frameworks to map the types of learning captured in interviews (Kloetzer, Schneider, and da Costa 2016), and log data and forum discussion analysis for checking the correctness of contributed data (Kloetzer, Schneider, and da Costa 2016; Aristeidou, Scanlon, and Sharples 2017b) and for analysing possible inquiry interactions (Aristeidou, Scanlon, and Sharples 2017b).

More specifically, Price and Lee (2013) administered a scientific attitude instrument and the Nature of Scientific Knowledge Scale (NSKS) to measure participants’ scientific attitudes and epistemological beliefs in the Citizen Sky project, analysed pre- and post-test data collected from 333 participants, and conducted nine interviews; the mean age of the Citizen Sky participants was 41 years, with 78% males and 19% females. Prather et al. (2013) collected and analysed responses from 160 participants through an assessment instrument developed specifically to measure the conceptual knowledge and reasoning abilities of Galaxy Zoo participants; the mean age of participants in a previous Galaxy Zoo study was 43 years, with 81.3% male participation (Raddick et al. 2013). Kloetzer et al. (2013) conducted 32 semi-structured exploratory interviews with participants from the Old Weather, BOINC (Berkeley Open Infrastructure for Network Computing), and Eyewire projects, and identified learning dimensions in online citizen science projects. Jennett et al. (2016) extended this work with 39 interviews with 28 participants and eleven researchers from the BOINC, Old Weather, Eyewire, Transcribe Bentham, Bat Detective, EveryAware, and KRAG projects; the participants’ gender was not surveyed, and it is only mentioned that people from a very diverse age range participated in the projects. Scanlon, Woods, and Clow (2014) analysed the learning progress of a sample of 407 users in iSpot by examining their first 50 observations; the participants’ age and gender were not surveyed. Mugar et al. (2015) explored the learning experience of newcomers in Planet Hunters, drawing on 21 interviews and the log data analysis of nine participants; the participants’ age and gender were not surveyed. Land-Zandstra et al. (2016) examined the understanding of the project, attitudes towards science, and the perceived learning impacts of 1,123 participants in the iSPEX project via Likert scales, project questions, and Boolean survey questions; the average age of the respondents was 51 years and the majority were male (71%). Kloetzer, Schneider, and da Costa (2016) explored learning outcomes in volunteer computing projects and observed forum interactions in the BOINC Alliance Francophone community; they interviewed ten participants, analysed forum logs, and received responses to their ILICS survey from 147 members. Masters et al. (2016) collected 1,921 responses from Galaxy Zoo, Planet Hunters, Penguin Watch, Seafloor Explorer, and Snapshot Serengeti participants through a survey examining scientific (general and project-specific) knowledge via visual science quizzes and self-reported science learning; the age of participants was not surveyed, and 56% of the participants were males. Finally, Aristeidou, Scanlon, and Sharples (2017b) explored the types of learning in the nQuire community by examining self-reported learning responses and by analysing the log data of 125 participants; the participants’ age and gender were not surveyed.
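Several of the studies described above administered pre/post instruments (e.g., Likert-based attitude scales or concept surveys). Purely as a hedged illustration, the sketch below shows how a paired pre/post comparison of such scores is conventionally computed; the data are synthetic placeholders and the choice of a paired t-test is an assumption, not a reconstruction of any reviewed study’s analysis.

```python
# Illustrative sketch: paired pre/post comparison of instrument scores.
# The scores below are randomly generated placeholders, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre = rng.integers(1, 6, size=100).astype(float)              # 1-5 scale, before
post = np.clip(pre + rng.normal(0.2, 1.0, size=100), 1, 5)    # slight shift, after

t_stat, p_value = stats.ttest_rel(post, pre)                  # paired (repeated-measures) test
print(f"mean change = {np.mean(post - pre):.2f}, t = {t_stat:.2f}, p = {p_value:.3f}")
```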

Learning impact (RQ2)

The learning impact from participating in online citizen science projects has emerged both from studies that examined specific aspects of learning (top-down) and from exploratory studies collecting evidence about various forms of learning (bottom-up). The main learning categories were: attitudes towards science (Price and Lee 2013; Mugar et al. 2015; Land-Zandstra et al. 2016; Kloetzer, Schneider, and da Costa 2016; Aristeidou, Scanlon, and Sharples 2017b), understanding of the nature of science (Kloetzer et al. 2013; Price and Lee 2013; Jennett et al. 2016; Kloetzer, Schneider, and da Costa 2016; Masters et al. 2016; Aristeidou, Scanlon, and Sharples 2017b), topic-specific knowledge (Prather et al. 2013; Price and Lee 2013; Kloetzer et al. 2013; Scanlon, Woods, and Clow 2014; Jennett et al. 2016; Kloetzer, Schneider, and da Costa 2016; Land-Zandstra et al. 2016; Masters et al. 2016; Aristeidou, Scanlon, and Sharples 2017b), science knowledge (Kloetzer et al. 2013; Jennett et al. 2016; Masters et al. 2016; Aristeidou, Scanlon, and Sharples 2017b), and generic knowledge (Kloetzer et al. 2013; Jennett et al. 2016; Kloetzer, Schneider, and da Costa 2016; Aristeidou, Scanlon, and Sharples 2017b).

Attitudes towards science

Five studies (n = 5) examined the effects of participating in online citizen science projects on attitudes towards science (i.e., Price and Lee 2013; Mugar et al. 2015; Land-Zandstra et al. 2016; Kloetzer, Schneider, and da Costa 2016; Aristeidou, Scanlon, and Sharples 2017b). In two of the studies (Price and Lee 2013; Land-Zandstra et al. 2016), Likert scale instruments were designed to explore attitudes towards science, and three of the studies (Mugar et al. 2015; Kloetzer, Schneider, and da Costa 2016; Aristeidou, Scanlon, and Sharples 2017b) focused on self-reported attitudes captured through interviews.

Participants who answered Likert scale items during their participation in Land-Zandstra et al. (2016) reported limited involvement with science in their daily lives (low scores for reading science magazines, attending lectures/events, and following science news), although they agreed that science can have a positive impact on their lives. Changes in attitudes among adults require many interventions over longer periods of time (Merriam, Caffarella, and Baumgartner 2012), and this might be one explanation for not finding any significant changes in the participants’ attitudes. However, Likert scale pre- and post-tests (Price and Lee 2013) detected a change in scientific attitudes that the authors believe derived from the reinforcement of existing positive attitudes towards science, as a citizen science project alone is not likely to effect significant change in attitudes towards science. One unexpected finding of this study was that participants reported lowered self-perception of applying scientific thinking in daily life, owing to the realisation that they understand only a small fraction of the scientific field. In addition, participants considered their engagement with science a fun way to spend their free time (Aristeidou, Scanlon, and Sharples 2017b). This attitude contradicts Castell et al. (2014), who examined public attitudes to science and found that the public views science and scientists as serious. Finally, in interviews conducted by Mugar et al. (2015), participants contributed to research following their own individual or collaborative route and showing agent-centred presence. Similarly, in Kloetzer, Schneider, and da Costa (2016), interviews showed that participants volunteered to further contribute to the scientific topic in which they’d engaged as citizen scientists by promoting the project and by participating in local action groups. This sense of influence over the project structure can be key to scientific literacy, as it can lead to citizen science becoming an important learning process (Roth and Lee 2004).

Nature of science

Five studies (n = 5) (i.e., Kloetzer et al. 2013; Price and Lee 2013; Masters et al. 2016; Kloetzer, Schneider, and da Costa 2016; Aristeidou, Scanlon, and Sharples 2017b) examined participants’ beliefs about the nature of science, and in particular, their experience of scientific research and learning about science. One study (i.e., Price and Lee 2013) designed an instrument for measuring epistemological beliefs about the nature of science, on the basis of the Nature of Scientific Knowledge Scale (NSKS) (Rubba and Anderson 1978); another study (i.e., Masters et al. 2016) used a self-reported quantitative survey including measures of science experiences; and three studies (i.e., Kloetzer et al. 2013; Kloetzer, Schneider, and da Costa 2016; Aristeidou, Scanlon, and Sharples 2017b) focused on self-reported interview responses.

Epistemological beliefs about the nature of science increased significantly in Price and Lee’s (2013) NSKS results, with participants debating particularly whether creativity exists in the scientific process and lowering their beliefs that the goal of science is to create universal laws. Similarly, in a quantitative survey by Masters et al. (2016), participants agreed on gaining a new perspective on scientific research. Evidence also showed that participants in some online citizen science projects (a) gained experience on how to approach a scientific investigation (Masters et al. 2016; Aristeidou, Scanlon, and Sharples 2017b); (b) realised that science involves the use of rigorous procedures and controlled protocols (Kloetzer et al. 2013); (c) understood that science takes time to progress (Kloetzer, Schneider, and da Costa 2016); (d) understood the importance of the scientific debate (Aristeidou, Scanlon, and Sharples 2017b); (e) realised that failure is a normal risk and can contribute to improvements in science (Kloetzer et al. 2013); and (f) became more familiar with the process of peer review and revisions for the production and publication of scientific papers, and in some cases, were co-authors in scientific publications (Kloetzer et al. 2013; Kloetzer, Schneider, and da Costa 2016). These findings differ from published studies about offline or field-based citizen science, where participants demonstrated little change in understanding of the scientific process (Brossard 2005; Cronje et al. 2011; Jordan et al. 2012) and difficulties in describing what scientific research is (Crall et al. 2012). This difference may be explained by the different settings (for example, online visualisations could contribute to better understanding of the scientific process) and by the type of participants (for example, online and offline participants could have different ages or motivations for participating).

Topic-specific knowledge

All ten studies (n = 10) (i.e., Kloetzer et al. 2013; Prather et al. 2013; Price and Lee 2013; Scanlon, Woods, and Clow 2014; Mugar et al. 2015; Jennett et al. 2016; Kloetzer, Schneider, and da Costa 2016; Land-Zandstra et al. 2016; Masters et al. 2016; Aristeidou, Scanlon, and Sharples 2017b) examined how participation in online citizen science communities affected conceptual knowledge and skills in science, and investigated whether participation facilitated topic-specific knowledge and skills related to the particular scientific field.

In particular, assessment survey findings from Prather et al. (2013) suggested that many of the participants did not possess a comprehensive understanding of a particular project-specific concept (in this case, the relationship between a galaxy’s morphology and other properties). The authors also compared the achievements of participants on the assessment surveys to their level of participation and concluded that participants who completed more tasks appeared to gain greater knowledge than those who completed fewer tasks. This finding suggests that increased participation in projects is likely to result in enhanced learning gains. Furthermore, the authors reported difficulties in obtaining a significant and representative sample of completed pre/post assessment items to measure the conceptual knowledge of participants, as citizen involvement changes greatly over time. Similarly, Masters et al. (2016) compared the project-specific knowledge, assessed via the quizzes, to the level of participants’ engagement and found a positive association between the two. However, the association depended on the topic of the project (e.g., performance in the astronomy-related project showed more improvement than performance in other projects) and on the project’s level of public engagement (e.g., blog and Twitter posts by the project managers).
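The association between level of participation and quiz performance reported by Prather et al. (2013) and Masters et al. (2016) is, in essence, a correlation between a count of completed tasks and a knowledge score. The sketch below illustrates one conventional way to test such an association; the variable names and values are hypothetical and do not reproduce either study’s data or exact method.

```python
# Illustrative sketch: rank correlation between number of completed
# classifications and score on a project-specific quiz. Values are hypothetical.
import numpy as np
from scipy.stats import spearmanr

n_classifications = np.array([12, 250, 40, 900, 75, 3000, 5, 480, 150, 60])
quiz_score = np.array([3, 6, 4, 8, 5, 9, 2, 7, 6, 4])           # out of 10

rho, p_value = spearmanr(n_classifications, quiz_score)          # rank-based, suits skewed counts
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```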

Exploratory studies examined topic-specific knowledge by analysing self-reported learning and observing data and forum logs. Participants were asked what they learned from their participation in the projects; most of the participants’ responses in Aristeidou, Scanlon, and Sharples (2017b) mentioned gains in topic-specific knowledge as a result of their interactions with the material used in the project, with the scientists, with other participants, and with internet searches. Kloetzer et al. (2013) and Jennett et al. (2016) also found that participants used internet searches to augment their knowledge of the scientific focus of the projects. Other external resources that were reported to contribute to participants’ topic-specific knowledge were talks given by scientists, conferences and meetings, blogs, and scientific papers (Kloetzer, Schneider, and da Costa 2016). Some participants reported that they improved their topic-related knowledge by engaging in project administration tasks, such as creating glossaries and presentations and sharing information about the project. In Price and Lee (2013), most of the interviewees stated that their knowledge had increased or remained unaffected, and nobody suggested that they had problems while learning. Forum and log data analysis in Aristeidou, Scanlon, and Sharples (2017b) showed that in some cases participants improved their topic-specific vocabulary, using words related to the scientific field and the particular investigations. In addition, there is evidence that they overcame some of the misconceptions they held about everyday phenomena related to the scientific topic (e.g., that there is no extreme weather in southern countries). Log data analysis of participants’ identifications in Scanlon, Woods, and Clow (2014) associated the level of participation with the number of correct identifications: as participants progress, the percentage of correctly identified observations increases. Finally, a single study (Land-Zandstra et al. 2016) found that self-reported learning impact was not very high, with participants responding that they learned “somewhat” about the scientific topic (i.e., the health and environmental impact of aerosols).

Beyond the science knowledge and skills gained on the topic of the project, some studies have also examined whether participants gained knowledge about the project itself, and in particular the concept of the project, how it works, and how it is supported by software and scientists. For example, in distributed computing projects, participants who were involved passively in the project understood the concept of distributed computing, whereas more active participants were aware of more technical concepts and skills needed for the project (Kloetzer, Schneider, and da Costa 2016). In other online citizen science communities, participants discovered the terms and rules of the game or the software they engaged with in order to participate, such as the interface rules, available options, buttons and commands, and the credit/reputation system (Kloetzer et al. 2013; Jennett et al. 2016; Aristeidou, Scanlon, and Sharples 2017b). However, a more direct assessment of participants’ understanding of the project (e.g., whether their measurements give information about the exposure of people to aerosols at their location) in Land-Zandstra et al. (2016) resulted in low scores. Further, the assessment indicated that the more participants understood about the dynamics of the project, the lower their expectation that the project could impact policy.

Science knowledge

Four studies (n = 4) (i.e., Kloetzer et al. 2013; Jennett et al. 2016; Masters et al. 2016; Aristeidou, Scanlon, and Sharples 2017b) examined effects on general science knowledge and skills. One study (i.e., Masters et al. 2016) designed and used visual science quizzes, whereas the other three (exploratory) studies did some content/log data and forum interaction analysis.

Masters et al. (2016) explored the relationship between science knowledge and level of participation in a number of Zooniverse projects using visual science quizzes, and concluded that there was no evidence that the two are linked. However, participants who had more positive attitudes towards science also increased their science knowledge. The analysis also showed differences between the science knowledge gained in projects on different science topics; participants in astronomy-related projects scored better than those in ecology-related ones. Observation of participants’ contributions, via log data and forum analysis, also revealed that getting involved in scientific investigations facilitated the improvement of inquiry skills such as data annotation, argumentation, critique, and reflection (Aristeidou, Scanlon, and Sharples 2017b); pattern recognition and identification (Kloetzer et al. 2013; Jennett et al. 2016; Aristeidou, Scanlon, and Sharples 2017b); and question and answer formation, initiation of and effective contribution to discussions, and data comprehension (Kloetzer et al. 2013; Jennett et al. 2016).

Generic knowledge

Only four studies (n = 4) (i.e., Kloetzer et al. 2013; Jennett et al. 2016; Kloetzer, Schneider, and da Costa 2016; Aristeidou, Scanlon, and Sharples 2017b) were designed with an explicit objective to look at how participation in online citizen science communities impacts knowledge and skills that are not directly related to science. All four studies did content/log data and forum interaction analysis, and focused on self-reported responses from interviews.

Main findings were linked to communication, community management, and digital literacy. Communication results showed that language barriers prompted English-speaking participants to translate pieces of the project to help non-English speakers to participate, thereby improving the language skills of the latter (Kloetzer et al. 2013; Jennett et al. 2016), and that participants improved their writing skills after they were invited to structure their answers and responses using the forums and other interaction tools (Aristeidou, Scanlon, and Sharples 2017b). Community management skills have also been reported (Kloetzer et al. 2013; Jennett et al. 2016; Kloetzer, Schneider, and da Costa 2016), gained through participants’ operation of the online platforms, team management, forum updates, and event and competition organisation. Finally, digital literacy was one of the skills necessary for participating in citizen science projects online. Participants were found to gain opportunities to learn how to perform a number of tasks, such as using the software and hardware of the project and navigating the web (Kloetzer et al. 2013; Jennett et al. 2016; Aristeidou, Scanlon, and Sharples 2017b). Moreover, in some projects, participants had the opportunity to gain more advanced skills such as programming and content creation (Kloetzer et al. 2013; Jennett et al. 2016).

Discussion

Ten studies reported empirical examinations of learning in online citizen science projects (see Table 1). All of the studies were published in 2013 or later, which indicates both a growing interest in online citizen science, made possible by the use of technology, and the research community’s recent interest in understanding and documenting citizens’ learning in online citizen science. The total number of participants taking part in the studies under analysis was 4,189, with a minimum of 21 in one study (interviews and log data analysis) and a maximum of 1,921 (survey study). The studies researched learning in the following 16 online citizen science projects: Citizen Sky, Galaxy Zoo, Old Weather, BOINC, Eyewire, Transcribe Bentham, Bat Detective, EveryAware, KRAG, Planet Hunters, iSPEX, Penguin Watch, Seafloor Explorer, Snapshot Serengeti, iSpot, and nQuire. The studies were published in diverse journals and conferences: Journal of Science Communication; Human Computation; Public Understanding of Science; Astronomy Education Review; Journal of Research in Science Teaching; Educational Technology & Society; the European Research Conference of the Network of Access, Learning Careers and Identities; and the International Conference on Communities and Technologies.

Table 1

Online citizen science projects: methods and findings.

Research article | Citizen science projects | Approach | Research instruments/methods | Learning impact categories explored | Summary of reported findings on learning and scientific literacy

Kloetzer et al. (2013)
(ESREA conference)
Old Weather, BOINC, Eyewire Bottom-up
  • Self-reported learning (interviews)
  • Nature of science
  • Topic-specific knowledge
  • Science knowledge
  • Generic knowledge
  • Understanding of science procedures and risks
  • Increased topic-specific knowledge through interaction with project material
  • Understanding of how topic-specific science software and tools work
  • Increased pattern recognition, identification skills, and data comprehension
  • Improved communication, digital literacy, and personal development
Jennett et al. (2016)
(Journal of Science Communication)
BOINC, Old Weather, Eyewire, Transcribe Bentham, Bat Detective, EveryAware, KRAG Top-down
  • Self-reported learning (interviews)
  • Interviews (scientists)
Prather et al. (2013)
(Astronomy Education Review)
Galaxy Zoo Top-down
  • Zooniverse Astronomy Concept Survey (ZACS) (pre/post)
  • Topic-specific knowledge
  • Lack of astronomy knowledge and skills in particular topic-specific concepts
  • Increased knowledge associated to completing more tasks
Price and Lee (2013)
(Journal of Research in Science Teaching)
Citizen Sky Top-down
  • Scientific attitude instrument (pre/post)
  • Nature of Scientific Knowledge Scale (NSKS) Instrument for epistemological beliefs (pre/post)
  • Self-reported learning (interviews)
  • Attitudes towards science
  • Nature of science
  • Topic-specific knowledge
  • Positive change in scientific attitude
  • Significantly increased epistemological beliefs about the nature of science
  • Increased or unaffected astronomy knowledge and skills
Scanlon, Woods and Clow (2014)
(Educational Technology & Society)
iSpot Top-down
  • User Identification check (false/right)
  • Topic-specific knowledge
  • The percentage of identifying observations correctly increases with participants’ progress
Mugar et al. (2015)
(International Conference on Communities and Technologies)
Planet Hunters Top-down
  • Interviews (citizens)
  • Log data analysis
  • Attitudes towards science
  • Agent-centred presence, with citizens contributing to research independently (individually or collaboratively)
Land-Zandstra et al. (2016)
(Public Understanding of Science)
iSPEX Top-down
  • Project questions (false/right)
  • Self-reported learning
  • Likert scales
  • Attitudes towards science
  • Topic-specific knowledge
  • Participants had limited involvement with science in their daily lives
  • Low self-reported learning with participants learning “somewhat” about the specific topic
Kloetzer, Schneider, and da Costa (2016)
(Human Computation)
BOINC AF Bottom-up
  • Self-reported learning (interviews)
  • Informal Learning in Citizen Science (ILICS) Survey
  • Forum interactions analysis
  • Attitudes towards science
  • Nature of science
  • Topic-specific knowledge
  • Generic knowledge
  • Volunteers further contributing to the science topic by promoting the project and participating in local action groups
  • Citizen participation in peer review and revisions
  • Increased topic-specific knowledge through the project material and researchers
  • Improved communication, digital literacy, and personal development
Masters et al. (2016)
(Journal of Science Communication)
Galaxy Zoo, Planet Hunters, Penguin Watch, Seafloor Explorer, Snapshot Serengeti Top-down
  • Self-reported learning (survey)
  • Visual science quizzes
  • Nature of science
  • Topic-specific knowledge
  • Science knowledge
  • Experience gained on how to approach scientific investigations
  • Positive association between level of engagement and topic-specific knowledge
  • No association between level of participation and science knowledge
Aristeidou, Scanlon, and Sharples (2017b)
(AERA conference)
nQuire Bottom-up
  • Self-reported learning (interviews & questionnaire)
  • Log data analysis
  • Interviews (scientists)
  • Attitudes towards science
  • Nature of science
  • Topic-specific knowledge
  • Science knowledge
  • Generic knowledge
  • Engagement with science considered a fun way for participants to spend their time
  • Experience gained on how to approach scientific investigations
  • Increased topic-specific knowledge and language through the project material and researchers
  • Increased data annotation, identification, and argumentation skills
  • Improved communication and digital literacy

This review aimed to address two research questions. RQ1 asked which methods and instruments have previously been used to explore the learning effects of citizens’ participation in online citizen science projects. There is a tendency to use self-reported methods to understand how participation in online citizen science affects learning. The majority of studies (n = 8) used self-reported instruments, including interviews and questionnaires, to explore learning (Kloetzer et al. 2013; Price and Lee 2013; Mugar et al. 2015; Jennett et al. 2016; Kloetzer, Schneider, and da Costa 2016; Masters et al. 2016; Land-Zandstra et al. 2016; Aristeidou, Scanlon, and Sharples 2017b), scientific literacy gains, and scientific attitudes and beliefs (Price and Lee 2013). Other methods were content analysis of contributed data and forum posts (Mugar et al. 2015; Kloetzer, Schneider, and da Costa 2016; Aristeidou, Scanlon, and Sharples 2017b); correctness checks of contributed data (Scanlon, Woods, and Clow 2014); and science and project-specific quizzes and tests (Prather et al. 2013; Land-Zandstra et al. 2016; Masters et al. 2016).

There are, however, few studies that capture participants’ existing knowledge and determine whether this has remained stable or has improved after participation in citizen science. It is not surprising that only two studies (Prather et al. 2013; Price and Lee 2013) used pre/post data, as it is reported that pre/post assessment in such informal contexts with flexible participation and free-choice learning seems to be impossible (Prather et al. 2013; Aristeidou, Scanlon, and Sharples 2017b). This approach could provide more information on the learning background citizens had before they joined a project. Alternative self-reporting methods for comparing learning before and after participation could be the use of retrospective pre-surveys or citizens’ daily diaries, in which participants would reflect on and assess their new knowledge. Such methods may also prevent possible stress caused by taking a pre-test.

Six studies in total combined more than one method in their design. Mixed-methods designs may be more appropriate for evaluating learning impact, as combining self-reports with direct analysis of actual participation (e.g., through science knowledge tests and analysis of log data) can enhance the robustness of findings.

A final reflection on the methods used to explore learning in online citizen science involves access to digitally enabled instruments and techniques, such as learning analytics of log data files (with consent). In contrast to field-based observation techniques, the online setting allows researchers to have a full picture of participant learning and progress, rather than an instance of it.
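As an illustration of the kind of log-data analytics referred to above, the sketch below aggregates hypothetical platform logs into a per-participant monthly activity profile. The column names and event types are assumptions for the example, not the log structure of any reviewed project.

```python
# Illustrative sketch: turning raw contribution logs (with consent) into a
# per-participant monthly activity profile. Columns and events are hypothetical.
import pandas as pd

log = pd.DataFrame({
    "user_id": ["u1", "u1", "u2", "u1", "u2", "u3"],
    "timestamp": pd.to_datetime([
        "2019-01-05", "2019-01-20", "2019-01-21",
        "2019-02-02", "2019-02-15", "2019-02-16",
    ]),
    "action": ["classify", "forum_post", "classify", "classify", "classify", "classify"],
})

# Count each participant's actions per calendar month
monthly = (
    log.groupby(["user_id", log["timestamp"].dt.to_period("M")])
       .size()
       .unstack(fill_value=0)
)
print(monthly)
```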

RQ2 aimed to identify evidence of learning due to participation in online citizen science. The studies included in this review examined or uncovered changes in attitudes towards science, in understanding of the nature of science, in topic-specific knowledge, in science knowledge, and in generic knowledge. The majority of studies examined whether participation in online citizen science has improved the knowledge and skills of participants around a specific topic (n = 10), whereas only four studies reported on generic knowledge and skills gains such as communication, community management, and digital literacy. Equal numbers of studies (n = 5) reported changes in attitudes towards science, in understanding the nature of science, and in improvement of science knowledge and skills. Most of the learning areas examined in the reviewed studies were present in the ILICS model (Kloetzer et al. 2013), apart from attitudes to science.

An online citizen science project is characterised by technological affordances, such as scaffolding mechanisms, that may facilitate learning outcomes (Sackey, Nguyen, and Grabill 2015). For example, learning outcomes encountered in this review have shown that a well-designed interface visualising research phases and methods may affect participants’ understanding of scientific processes and of how the project works, and may contribute to data-annotation and pattern-recognition skills. Additionally, online participant interactions, with each other and with the content, may have added to vocabulary progress (Aristeidou, Scanlon, and Sharples 2017b).

The flexible nature of online citizen science as a learning environment may make measuring pre/post learning outcomes a challenge (Malcolm, Hodkinson, and Colley 2003). At the same time, online citizen science enables volunteers to experiment and to gain random and spontaneous generic knowledge such as communication skills and digital literacy. Interaction with software platforms and digital instruments is inevitable when participating in online citizen science; this provides participants with the opportunity to improve their information and communication technology (ICT) skills (i.e., Kloetzer et al. 2013; Jennett et al. 2016; Kloetzer, Schneider, and da Costa 2016; Aristeidou, Scanlon, and Sharples 2017b), and concurrently brings them closer to the instruments and approaches that real scientists use. However, the lack of face-to-face experiences and the self-regulating nature of online informal learning environments (Bell et al. 2009; Song and Bonk 2016) that characterise online citizen science projects may have raised challenges for volunteers, such as undermining their self-confidence or preventing them from contributing.

None of the studies in this review has examined participation in online citizen science in formal education, used experimental designs, or reported impact on participants’ identity and personal growth. Online citizen science projects implemented in classrooms would provide us with valuable evidence of learning in formal settings. However, the use of technology may be an obstacle to employing online citizen science in school classrooms, as technology infrastructure is sometimes inadequate and restrictive policies may be in place. Moreover, in many countries, students are forbidden from bringing their mobile devices onto school premises (e.g., Doward 2015), which presents an additional challenge.

The creation of control and experimental groups, with volunteers participating in formal and informal programmes or in different online citizen science programmes, could provide more insights into determining the effect of citizen science programmes on learning (Connolly et al. 2017), especially when combined with qualitative data collection (e.g., observations and interviews) that can explain any observed differences between groups. However, the flexibility and the voluntary nature of online citizen science projects hinder the control of variables such as the type and level of participation. Conducting experimental studies with control and experimental groups could alleviate the difficulty of matching pre- and post-assessment surveys in such a flexible environment. This would be easier to study in online citizen science programmes implemented in formal educational settings, or by administering parallel versions of the same online citizen science project, each exhibiting different features.

It is also possible that the lack of impact on participants’ identity, compared with the citizen science learning frameworks that propose impact on community, society, economy and the environment (e.g., NOAA 2009; Jordan, Ballard, and Phillips 2012; Phillips et al. 2014; Bonney et al. 2016), can be attributed to the fact that within the reviewed studies, there were no large-scale citizen science implementations or projects focusing explicitly on long-term impact.

Conclusions

At the core of this study is the learning impact of online participation in citizen science, and the methods and instruments that have been used to investigate this impact. Overall, research that examines learning in online citizen science has used self-reported instruments including interviews and questionnaires, content analysis of contributed data and forum posts, accuracy checks of contributed data, science and project-specific quizzes, and instruments for measuring scientific attitudes and beliefs. Learning outcomes in the studies reviewed here include revised attitudes towards science, a better understanding of the nature of science, increased science knowledge, and additional topic-specific knowledge as well as generic knowledge.

This study has produced useful evidence about overall learning in online citizen science programmes. The findings and recommendations of this research contribute to general design considerations for methods and instruments, and areas of focus for evaluating learning in online citizen science projects. Understanding the learning impacts of online citizen science and the potential avenues for further exploration could facilitate the formation of consistent learning-oriented citizen science communities, with more suitable guidance by scientists and project designers, and the development of an integrated learning-evaluation framework.

Future research should focus on exploring the design of different types of online citizen science projects and should identify which specific design features can facilitate or support self-regulated learning and which features can hinder it. In-depth descriptive methods, such as interviews, questionnaires, and observations, are expected to provide insights into the design affordances that promote learning and scientific literacy. Finally, further exploration of long-term effects on the identity and agency of participants may inform designs that have an impact on community, society, the economy, and the environment.

Competing Interests

The authors have no competing interests to declare.

Author Contributions

Both authors made substantial contributions to the conception and design of the review. Dr. Maria Aristeidou contributed to the acquisition, first analysis, and initial interpretation of the resources, and Dr. Christothea Herodotou critically revised the content and ensured the integrity of the work.

References

  1. Amsha, AO, Schneider, D, Fernandez-Marquez, JL, Da Costa, J, Fuchs, B and Kloetzer, L. 2016. Data analytics in citizen cyberscience: Evaluating participant learning and engagement with analytics. Human Computation, 3(1): 69–97. DOI: https://doi.org/10.15346/hc.v3i1.5 

  2. Aristeidou, M, Scanlon, E and Sharples, M. 2015. Weather-it: evolution of an online community for citizen inquiry. In: Proceedings of the 15th International Conference on Knowledge Technologies and Data-driven Business, 13. ACM. DOI: https://doi.org/10.1145/2809563.2809567 

  3. Aristeidou, M, Scanlon, E and Sharples, M. 2017a. Profiles of engagement in online communities of citizen science participation. Computers in Human Behavior, 74. DOI: https://doi.org/10.1016/j.chb.2017.04.044 

  4. Aristeidou, M, Scanlon, E and Sharples, M. 2017b. Science learning in online communities of scientific investigations: evidence and suggestions. In: American Educational Research Association Annual Conference 2017 (AERA 2017), AERA Online Paper Repository. 

  5. Bell, P, Bruce, L, Shouse Andrew, W and Feder Michael, A. (Eds.) 2009. Learning science in informal environments: People, places, and pursuits. Washington, DC: The National Academies Press. 

  6. Bonney, R, Ballard, H, Jordan, R, McCallie, E, Phillips, T, Shirk, J and Wilderman, C. 2009b. Public Participation in Scientific Research: Defining the field and assessing its potential for informal science education. A CAISE Inquiry Group Report. Washington, DC. 

  7. Bonney, R, Cooper, CB, Dickinson, J, Kelling, S, Phillips, T, Rosenberg, KV and Shirk, J. 2009a. Citizen Science: A Developing Tool for Expanding Science Knowledge and Scientific Literacy. BioScience, 59(11): 977–984. DOI: https://doi.org/10.1525/bio.2009.59.11.9 

  8. Bonney, R, Phillips, TB, Ballard, HL and Enck, JW. 2016. Can citizen science enhance public understanding of science? Public Understanding of Science, 25(1): 2–16. DOI: https://doi.org/10.1177/0963662515607406 

  9. Brossard, D, Lewenstein, B and Bonney, R. 2005. Scientific knowledge and attitude change: The impact of a citizen science project. International Journal of Science Education, 27(9): 1099–1121. DOI: https://doi.org/10.1080/09500690500069483 

  10. Castell, S, Charlton, A, Clemence, M, Pettigrew, N, Pope, S, Quigley, A, Shah, JN and Silman, T. 2014. Public attitudes to science 2014. London: Ipsos MORI Social Research Institute. 

  11. Connolly, P, Biggart, A, Miller, S, O’Hare, L and Thurston, A. 2017. Using randomised controlled trials in education. SAGE. DOI: https://doi.org/10.4135/9781473920385 

  12. Crall, AW, Jordan, R, Holfelder, K, Newman, GJ, Graham, J and Waller, DM. 2012. The impacts of an invasive species citizen science training program on participant attitudes, behavior, and science literacy. Public Understanding of Science, 22(6): 745–764. DOI: https://doi.org/10.1177/0963662511434894 

  13. Cronje, R, Rohlinger, S, Crall, A and Newman, G. 2011. Does participation in citizen science improve scientific literacy? A study to compare assessment methods. Applied Environmental Education and Communication, 10(3): 135–145. DOI: https://doi.org/10.1080/1533015X.2011.603611 

  14. Curtis, V. 2015. Online citizen science projects: an exploration of motivation, contribution and participation. PhD thesis. The Open University. Retrieved from http://oro.open.ac.uk/42239/. 

  15. Davis, NL, Gough, M and Taylor, LL. 2019. Online teaching: advantages, obstacles and tools for getting it right. Journal of Teaching in Travel & Tourism, 256–263. DOI: https://doi.org/10.1080/15313220.2019.1612313 

  16. Dockter, J. 2016. The problem of teaching presence in transactional theories of distance education. Computers and Composition, 40: 73–86. DOI: https://doi.org/10.1016/j.compcom.2016.03.009 

  17. Doward, J. 2015, May 17. Schools that ban mobile phones see better academic results. The Guardian. 

  18. Falk, JH and Dierking, LD. 2010. The 95 percent solution. American Scientist, 98(6): 486–493. DOI: https://doi.org/10.1511/2010.87.486 

  19. Greenhow, C and Robelia, B. 2009. Informal learning and identity formation in online social networks. Learning, Media and Technology, 34(2): 119–140. DOI: https://doi.org/10.1080/17439880902923580 

  20. Herodotou, C, Hlosta, M, Boroowa, A, Rienties, B, Zdrahal, Z and Mangafa, C. 2019. Empowering online teachers through predictive learning analytics. British Journal of Educational Technology, 50(6): 1–16. DOI: https://doi.org/10.1111/bjet.12853 

  21. ICF and Cedefop for the European Commission. 2015. EU Skills Panorama (2014) STEM skills Analytical Highlight, 1–5. 

  22. Jennett, C, Kloetzer, L, Schneider, D, Iacovides, I, Cox, A, Gold, M, Talsi, Y, et al. 2016. Motivations, learning and creativity in online citizen science. Journal of Science Communication, 15(3). DOI: https://doi.org/10.22323/2.15030205 

  23. Jordan, RC, Ballard, HL and Phillips, TB. 2012. Key issues and new approaches for evaluating citizen-science learning outcomes. Frontiers in Ecology and the Environment, 10(6): 307–309. DOI: https://doi.org/10.1890/110280 

  24. Kloetzer, L, Schneider, DK and Da Costa, J. 2016. Not so passive: engagement and learning in Volunteer Computing projects. Human Computation, 3(1): 25–68. DOI: https://doi.org/10.15346/hc.v3i1.4 

  25. Kloetzer, L, Schneider, D, Jennett, C, Iacovides, I, Eveleigh, A, Cox, A and Gold, M. 2013. Learning by volunteer computing, thinking and gaming: What and how are volunteers learning by participating in Virtual Citizen Science? In: Proceedings of the 2013 European Research Conference of the Network of Access, Learning Careers and Identities, European Society for Research on the Education of Adults (ESREA), 73–92. Linköping, Sweden. 

  26. Land-Zandstra, AM, Devilee, JLA, Snik, F, Buurmeijer, F and van den Broek, JM. 2016. Citizen science on a smartphone: Participants’ motivations and learning. Public Understanding of Science, 25(1): 45–60. DOI: https://doi.org/10.1177/0963662515602406 

  27. Malcolm, J, Hodkinson, P and Colley, H. 2003. The interrelationships between informal and formal learning. Journal of Workplace Learning, 15(7/8): 313–318. DOI: https://doi.org/10.1108/13665620310504783 

  28. Masters, K, Young Oh, E, Cox, J, Simmons, B, Lintott, C, Graham, G, Holmes, K, et al. 2016. Science learning via participation in online citizen science. Journal of Science Communication, 15(3). DOI: https://doi.org/10.22323/2.15030207 

  29. Merriam, SB, Caffarella, RS and Baumgartner, LM. 2012. Learning in adulthood: A comprehensive guide. John Wiley & Sons. 

  30. Mugar, G, Østerlund, C, Jackson, CB and Crowston, K. 2015. Being present in online communities: learning in citizen science. In: Proceedings of the 7th International Conference on Communities and Technologies, 129–138. ACM. DOI: https://doi.org/10.1145/2768545.2768555 

  31. National Oceanic and Atmospheric Administration. 2009. Designing Education Projects: a comprehensive approach to needs assessment, project planning and implementation, and evaluation. Bridgewater, VA. 

  32. National Science Foundation. 2015. Revisiting the STEM workforce. A companion to Science and Engineering Indicators 2014. 

  33. Nov, O, Arazy, O and Anderson, D. 2011. Dusting for science: motivation and participation of digital citizen science volunteers. In: Proceedings of the 2011 iConference, 68–74. ACM. DOI: https://doi.org/10.1145/1940761.1940771 

  34. Nov, O, Arazy, O and Anderson, D. 2014. Scientists@Home: what drives the quantity and quality of online citizen science participation? PLoS ONE, 9(4): e90375. DOI: https://doi.org/10.1371/journal.pone.0090375 

  35. Perelló, J, Ferran-Ferrer, N, Ferré, S, Pou, T and Bonhoure, I. 2017. High motivation and relevant scientific competencies through the introduction of citizen science at secondary schools: An assessment using a rubric model. In: Herodotou, C, Sharples, M and Scanlon, E (Eds.), Citizen inquiry: synthesising science and inquiry learning. Abingdon: Routledge. DOI: https://doi.org/10.4324/9781315458618-9 

  36. Phillips, T, Ferguson, M, Minarchek, M and Porticella, N. 2014. User’s guide for evaluating learning outcomes in citizen science. Ithaca, NY: Cornell Lab of Ornithology. 

  37. Phillips, T, Porticella, N, Constas, M and Bonney, R. 2018. A Framework for Articulating and Measuring Individual Learning Outcomes from Participation in Citizen Science. Citizen Science: Theory and Practice, 3(2): 3. DOI: https://doi.org/10.5334/cstp.126 

  38. Prather, EE, Cormier, S, Wallace, CS, Lintott, C, Jordan Raddick, M and Smith, A. 2013. Measuring the Conceptual Understandings of Citizen Scientists Participating in Zooniverse Projects: A First Approach. Astronomy Education Review, 12(1). DOI: https://doi.org/10.3847/AER2013002 

  39. Price, CA and Lee, H-S. 2013. Changes in participants’ scientific attitudes and epistemological beliefs during an astronomical citizen science project. Journal of Research in Science Teaching, 50(7): 773–801. DOI: https://doi.org/10.1002/tea.21090 

  40. Raddick, MJ, Bracey, G, Carney, K, et al. 2009. Citizen Science: Status and Research Directions for the Coming Decade. ASTRO2010 Decadal Survey Position Paper. 

  41. Raddick, MJ, Bracey, G, Gay, PL, Lintott, CJ, Cardamone, C, Murray, P, Vandenberg, J, et al. 2013. Galaxy Zoo: Motivations of citizen scientists. arXiv preprint arXiv:1303.6886. 

  42. Reed, J, Raddick, MJ, Lardner, A and Carney, K. 2013. An Exploratory Factor Analysis of Motivations for Participating in Zooniverse, a Collection of Virtual Citizen Science Projects. In: 2013 46th Hawaii International Conference on System Sciences, 610–619. IEEE. DOI: https://doi.org/10.1109/HICSS.2013.85 

  43. Roth, W and Lee, S. 2004. Science education as/for participation in the community. Science Education, 88: 263–291. DOI: https://doi.org/10.1002/sce.10113 

  44. Rubba, PA and Andersen, HO. 1978. Development of an instrument to assess secondary school students’ understanding of the nature of scientific knowledge. Science Education, 62: 449–458. 

  45. Sackey, DJ, Nguyen, MT and Grabill, JT. 2015. Constructing learning spaces: What we can learn from studies of informal learning online. Computers and Composition, 35: 112–124. DOI: https://doi.org/10.1016/j.compcom.2015.01.004 

  46. Scanlon, E, Woods, W and Clow, D. 2014. Informal participation in science in the UK: Identification, location and mobility with iSpot. Educational Technology & Society, 17(2): 58–71. 

  47. Song, D and Bonk, CJ. 2016. Motivational factors in self-directed informal learning from online learning resources. Cogent Education, 3(1): 11. DOI: https://doi.org/10.1080/2331186X.2016.1205838 

  48. Sourmelis, T, Ioannou, A and Zaphiris, P. 2017. Massively Multiplayer Online Role Playing Games (MMORPGs) and the 21st century skills: A comprehensive research review from 2010 to 2016. Computers in Human Behavior, 67: 41–48. DOI: https://doi.org/10.1016/j.chb.2016.10.020 

  49. Trumbull, DJ, Bonney, R, Bascom, D and Cabral, A. 2000. Thinking scientifically during participation in a citizen-science project. Science Education, 84(2): 265–275. DOI: https://doi.org/10.1002/(SICI)1098-237X(200003)84:2<265::AID-SCE7>3.0.CO;2-5 

  50. Wiggins, A and Crowston, K. 2011. From Conservation to Crowdsourcing: A Typology of Citizen Science. In: 2011 44th Hawaii International Conference on System Sciences, 1–10. IEEE. DOI: https://doi.org/10.1109/HICSS.2011.207 

  51. Yoon, SA, et al. 2013. Scaffolding Informal Learning in Science Museums: How Much Is Too Much? Science Education, 97(6): 848–877. DOI: https://doi.org/10.1002/sce.21079 
