Research Papers

Citizen Social Scientists’ Observations on Complex Tasks Match Trained Research Assistants’, Suggesting Lived Experiences are Valuable in Data Collection

Authors: Cindi SturtzSreetharan, Alissa Ruth, Amber Wutich, Meskerem Glegziabher, Charlayne Mitchell, H. Russell Bernard, and Alexandra Brewis (Arizona State University, US)

Abstract

Can citizen scientists reliably and meaningfully observe and record complex social phenomena? To explore this question, we recruited 162 diverse citizen scientists to identify publicly located structural markers that exclude the elderly, women, large-bodied people, and non-white people in Phoenix, Arizona. The quality of observations by citizen scientists was assessed against data collected on the same observational task by (1) three professional social scientists with expertise on discrimination and (2) 33 trained research assistants. We found that the performance of citizen social scientists is similar to that of trained research assistants, even though both performed very differently from professional social scientists. The main finding is that citizen social scientists who self-identified with social categories relevant to the social exclusions were roughly equal observers to those who did not identify as belonging to those categories, an unexpected finding. Likewise, citizen social scientists who reported experiences of discrimination were not more likely to observe social exclusions or discrimination in public places than those who did not report such experiences. One key implication is that detailed input from volunteers on how they approach social science tasks could illuminate how social categories (like age or gender) matter for recruitment and performance in citizen social science research.

How to Cite: SturtzSreetharan, C., Ruth, A., Wutich, A., Glegziabher, M., Mitchell, C., Bernard, H.R. and Brewis, A., 2021. Citizen Social Scientists’ Observations on Complex Tasks Match Trained Research Assistants’, Suggesting Lived Experiences are Valuable in Data Collection. Citizen Science: Theory and Practice, 6(1), p.37. DOI: http://doi.org/10.5334/cstp.449
Submitted on 22 Jun 2021. Accepted on 01 Nov 2021. Published on 16 Dec 2021.

Introduction

Citizen science—collaboration between professional researchers and lay volunteers in research activities—is an increasingly popular research method (Bonney et al. 2009; Bonney et al. 2014; Hecker et al. 2018). Most citizen science projects focus on natural and environmental phenomena (Kullenberg and Kasperowski 2016) and involve the labor-intensive collection of data that cannot be done using technology (Kasperowski, Kullenberg, and Mäkitalo 2016). The Audubon Society’s Christmas Bird Count, begun in 1900, now involves tens of thousands of volunteers worldwide. Other recent examples include taking measures of air and water quality (McKinley et al. 2016), identifying stages in cyclone development (Phillips et al. 2018), and identifying animals (van Strien, van Swaay, and Termaat 2014). Key discoveries made through this approach (Lukyanenko, Wiggins, and Rosser 2019) include finding NASA’s lost IMAGE satellite (Voosen 2018) and identifying new insect species (Renault 2018; Losey, Perlman, and Hoebeke 2007).

Despite both useful and imagination-capturing outcomes, citizen science projects have been criticized for weak research design, insufficient sampling, low data quality, and unethical practices such as erasing volunteers from reports in scientific publications (Cooper et al. 2014; Haklay 2013). With regard to data quality, the results are mixed (Conrad and Hilchey 2011; Ruiz-Gutierrez, Hooten, and Campbell Grant 2016; Freitag, Meyer, and Whiteman 2016; Kosmala et al. 2016; Steger, Butt, and Hooten 2017; van der Velde et al. 2017; MacKenzie et al. 2017; Strobl et al. 2019; Falk et al. 2019; see Aceves-Bueno et al. 2017 for a review). Steger and colleagues (2017), for example, found that the quality of data collected by volunteers and professionals on wildlife species depended on the species, and that, unsurprisingly, individuals with particular interests in a species identified and documented locations more reliably than those without such interests. In a comparison of marine debris data collection, van der Velde et al. (2017) found that the data collected by citizen scientists were of equivalent quality to those collected by researchers. In contrast, a review of five years of citizen science–collected data found errors in citizen scientists’ documentation and location specificity of alpine flowers, which ultimately limited participation in the research to trained staff and well-trained volunteers (MacKenzie et al. 2017). In short, data quality still poses serious issues for citizen science projects.

Research on data quality in citizen social science is relatively recent (for a review, see Tauginienė et al. 2020). Purdam (2014), for example, documented a variety of panhandling behaviors (asking strangers for money/food/goods) in central London using volunteers who collected data during their normal daily activities. Housley (2018) has made clear the benefits of investigating language through citizen social science methods. Other language-focused projects engaging citizen scientists include documenting linguistic diversity in Norway (Svendsen 2018) and public instances of “fat talk” (e.g., request for evaluation such as “Do I look fat?”) in the US (Agostini et al. 2019; SturtzSreetharan et al. 2019; SturtzSreetharan 2020).

Some of the emergent literature casts citizen social science volunteers as co-learners and change agents rather than just as data collectors (e.g., Dadich 2014; Hoover 2016; Mantyka-Pringle et al. 2017; Kasperowski and Hillman 2018). The supporting argument is that citizen social scientists working alongside professional or trainee scientists are more likely to seek to change or influence attitudes and practices in their own communities, that is, to advance the translation of findings (Kythreotis et al. 2019; Bonhoure et al. 2019). For example, Kythreotis et al. argue that citizen social scientists working on climate change may “initiate action and policy responses based on their specific forms of social knowing and values,” potentially leading to positive change (2019, p. 4). While this claim requires testing, it highlights the potential benefits of bringing more citizens into social science research—for collecting data that may not be collectable otherwise, for enhancing science-based social and policy action, and for creating stronger positive relationships between academic research and its social applications (Housley et al. 2014).

In our study, we assessed the extent to which data collected by citizen social scientists might be (or not be) subject to the same problems documented for citizen biophysical science, including low reliability and differences in observer capacities (Lukyanenko, Wiggins, and Rosser 2019; Kosmala et al. 2016; Steger, Butt, and Hooten 2017; Brown and Williams 2019; Heiss and Matthes 2017; Janssens and Kraft 2012). We also consider here a concern that is central to contemporary social science discussions around data interpretation in community-based research: the effects of personal positionality on what is observed and how it is experienced and reported by researchers, research assistants, and other stakeholders in the research process (e.g., Muhammad et al. 2014; Sultana 2007; Pasquini and Olaniyan 2004; Turner 2010; Mwambari 2019).

To do this, we recruited 162 citizen volunteers and engaged them in an environmental observational task that was designed to assess how they noticed (or didn’t) potential indicators of exclusion of frequently discriminated-against social groups in public places (those classified as overweight, elderly, women, and non-white minorities). To help interpret the results, we then compared these citizen scientist observations against two datasets collected in regular (i.e., non–citizen science) researcher modalities: (a) the observations of experienced senior social scientists with theoretical understandings of the issues and with field experience studying discrimination, and (b) trained research assistants (in this case, all undergraduate social science students).

We test the ways that citizen scientists observe environmental symbols that can be read as potentially exclusionary (i.e., discriminatory) with respect to old age, female gender, large body size, and minority race/ethnicity. All are well-documented signals potentially observable in public spaces (at least, to those who can read the signal meanings) (Butler 1990; hooks 1984, 2000; Harro 2000; Haraway 2001; Brewis 2011; Brewis et al. 2017; Brewis and Wutich 2019; Cole 1992; Macnicol 2005; Puhl and Brownell 2001; Byrne 2012; Wacquant 1993). Feminist and Black feminist writing (Haraway 2001; Collins 2000; Harding 2001; Smith 1987) is also of theoretical use here, because it asserts that one’s personal experiences of marginalization facilitate a keener eye toward markers and practices of social exclusion.

From this literature, we formulated two hypotheses:

H1: Untrained observers. Citizen scientists’ observations of potentially exclusionary social phenomena in public spaces will be significantly different from those of trained social scientists (research assistants or senior researchers).

H2. Positionality. (a) The observations of citizen scientists who are members of discriminated groups (those who are elderly, women, of larger body size, or are non-white minorities) will differ significantly from the observations of citizen scientists who are not. (b) The observations of citizen scientists who report personal experiences with discrimination will differ significantly from the observations of citizen scientists that do not report experiences with discrimination.

By public social exclusionary spaces, we do not mean that someone is physically barred or legally banned from entry. Rather, we mean perceptible, observable indicators that people in some groups are less welcome than others. Examples are clearly gendered bathrooms, flags or statues associated with slavery, and public health posters that exhibit faceless (dehumanized) bodies to address obesity (weight stigma). Some indicators are more obvious than others and thus more widely read and recognized. But there are many more, often very subtle, exclusion markers that require understandings of context to discern, but that reflect—and so can create and recreate—some degree of social stigma toward members of some groups and the discrimination it produces (Brewis and Wutich 2019, p. 209). Notably, these markers need not be read the same way by everybody, but their meanings should be especially discernible to “cultural experts” (or, said another way, people with relevant positionality should perhaps be better able to detect them).

Material and Methods

All research was conducted in the greater Phoenix, Arizona area. Informed consent was obtained from all participants, under the auspices of the Arizona State University Institutional Review Board.

Recruitment and training

Citizen social scientists

Using email and social media advertising, chain recruitment (starting with research assistants’ social and professional networks), and word of mouth, we recruited 162 citizen scientists in the Phoenix metropolitan area, although we do not know how many people our recruitment efforts ultimately reached. An effort was made to recruit for diversity in educational and occupational backgrounds. Volunteers were invited, in a recruitment brochure (Figure 1), to “identify and record the physical features and social markers that shape how people relate to city spaces.” The brochure described the ways in which people find aspects of a city welcoming or not and the ways that this can affect individual health. The recruitment noted that the combined efforts of professionals and volunteers would enable the project to be undertaken on a larger scale than if completed only by professional researchers. Of the 162 volunteers recruited, all completed the task. Volunteers received an institutionally logoed t-shirt that included a unique design created by students. Only people at least 18 years of age were recruited; we purposefully excluded currently enrolled university students.

Figure 1: Recruitment brochure for citizen social science project “Eyes on OUR City.”

The volunteers completed a basic demographic survey as well as a baseline structural awareness assessment (explained below). They then went through an in-person briefing on how to carry out the observational tasks and record findings. This was conducted by a research assistant assigned to each volunteer. This orientation took about 20 minutes and focused on ensuring that the volunteers understood where they were to undertake the task and how to mark the booklet with their observations.

Research assistants

In addition to the 162 citizen science volunteers, we recruited 33 experienced research assistants to complete the same observational task. All had completed several social science courses crossing disciplines such as anthropology and global health; they had also received at least 40 hours of research training across one semester (Ruth et al. 2020). The training included activities for developing explicit awareness of the four structural exclusions that are the focus of this study: (1) age/elderly; (2) female gender; (3) large body; and (4) non-white minority. Engaging in the training was incentivized through credit for research practicum coursework; however, participation as research assistants was voluntary.

For example, as part of their orientation, the research assistants observed several community locations in groups and practiced evaluating public spaces for the presence of these four structural exclusions, followed by group debriefing and individual reflection with feedback from senior social scientists. Through these guided exercises, the assistants learned to identify examples of discriminatory/exclusionary practices in public spaces and to see how these practices (1) become embedded and normalized in mundane public spaces, (2) cause feelings of shame, and (3) decrease the likelihood that someone will enter these spaces. Many examples of discriminatory/exclusionary practices were identified during the practice observations; we provide three examples here: (1) the window display of a women’s clothing store depicts only thin-bodied mannequins and clothing styles for thin-bodied women, signaling exclusion of large-bodied women from the store (Gruys 2012); (2) an underground parking garage is dark and has many walls and corners, signaling the exclusion of women who are concerned about safety risks (Blöbaum and Hunecke 2005); (3) signage to reduce bad public behaviors (like littering and loud talking on cell phones) appears in both English and Spanish, but signs that explain historical monuments are only in English (Rosa 2018). The assistants then completed the observational task described below.

Professional social scientists (PSSs)

Three social scientists with published expertise in social exclusion and discrimination independently completed the same observation tasks. These three social scientists were part of the larger study team of authors (Mitchell, Ruth, and SturtzSreetharan) who helped design the broader research study (see also Ruth et al. 2020).

Observational task

The observational tasks applied in our analysis were the same for all participants. Each observer was provided with a booklet of instructions on where and how to complete the observational task in nine public spaces: (L1–L3) a public city park, (L4) a public transit stop, (L5) a national chain coffee shop, (L6) a small local clothing retailer, (L7) a national chain drugstore, (L8) an underground parking garage, and (L9) a hotel entry. Observations were to be made in the booklet while walking through a pre-designed circuit depicted on maps of the 9 sites (see Supplemental File 1). As noted below, it was important to pre-design the circuit so that consistent movement through the public space could be achieved by each participant. The booklet also provided instructions on how to record the observations and how to return the booklet to their research assistant–trainer.

Instructions for where and what to observe, including location information, were on the left-hand page of the booklet; observations were entered on the right side, which was divided into four sections, with ample blank space to document, in writing, observed instances of exclusion in the four domains. The observation booklet also had the following reminder at the top of each page for recording observations: “Remember: DO NOT include any notes about what people are doing in the spaces. Look just at fixed items in the environment, such as parts of buildings, equipment, and signs. Describe any items that you see in this location that could make any of the following groups feel unwelcome or excluded. Identify what you see and tell us why you think it could be unwelcoming to any of the groups. Describe as many items as you can for each category. If there are no items relevant to that category, write ‘none’.” These instructions allowed the citizen scientists to complete the observational task at their own pace according to their availability. Participants were asked to focus on fixed items in order to highlight the ways that the physical environment contributes to feelings of exclusion and discrimination, rather than people and their potentially discriminatory/exclusionary behaviors (e.g., staring, rude comments).

Each site in the booklet noted a “starting point,” “recording instructions,” and an “end point.” The “starting point” instructions indicated where to stand when beginning the observation, along with a Google satellite image of the physical space to ensure each volunteer started and ended at precisely the same points. Likewise, the “recording instructions” included a bird’s-eye-view photo from Google Maps indicating the walking route for recording observations. This section also included explicit instructions for each location. For example, Location 1: “Walk up the west stairs. Then walk down the east stairs. Walk to the paved area in the center of the park.” Verbal feedback from research assistants indicated that completing all observational tasks took approximately 90 minutes.

Qualitative evaluation of the observational task

Each booklet of observations was coded to assess overlap: that is, whether observers did (1) or did not (0) identify the same exclusions as the PSSs in each location and domain. We used Cohen’s kappa, a widely accepted measure of interrater reliability (Bernard et al. 2016), to assess agreement between the first author and a primary coder on the presence of overlap. Both independently coded 15% of observations in the combined dataset from citizen scientists and trained assistants. Cohen’s kappa was .831—very good agreement on the presence of overlap by the Landis and Koch (1977) standard. This supported coding the entire dataset.
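
As a minimal illustration of this interrater reliability check, the sketch below computes Cohen’s kappa for two coders’ binary overlap codes. The ten example codes are hypothetical, not the study data; this is our own sketch, not the authors’ analysis code.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' codes of the same items."""
    n = len(rater_a)
    # Observed agreement: proportion of items the two raters coded identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement, from each rater's marginal code frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(rater_a) | set(rater_b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical overlap codes for ten observations
# (1 = same exclusion as the PSSs, 0 = not).
author_codes = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
coder_codes  = [1, 0, 1, 0, 0, 1, 0, 0, 1, 1]
print(round(cohens_kappa(author_codes, coder_codes), 3))  # 0.8 for these codes
```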

The PSS data were treated differently. Observations (for each of the four observational domains in each of the nine observation sites) were coded as full agreement on presence (all three identified the same exclusion in the same observation site for the same domain—a result we would use later to compare observations from PSSs with those of citizen scientists and research assistants); partial agreement on presence (two of the three social scientists noted the same exclusion); and partial agreement on absence (one social scientist noted an exclusion but the other two did not). This last was coded as an absence. A total of 130 observable exclusions were agreed upon by the PSS observers to be present across all nine locations (Figure 2). Examples included small font size on signage (making it difficult for the elderly to read); very small parking spaces (making it difficult for large-bodied people to exit a car); dark, secluded areas (perceived danger for women); and signage that appeared only in English, with the exception of signs targeting negative public behavior, which also appeared in Spanish. Of the 130 exclusions agreed on by the PSS observers across the nine locations, 49 excluded the elderly, 26 excluded nonwhite persons, 38 excluded large-bodied (overweight) people, and 17 excluded women. In Location 1, for example, 6 of the 49 elderly exclusions were present, along with 2 gender exclusions, 6 large-body exclusions, and 2 exclusions of people seen as a nonwhite minority. These were then compared with both the citizen scientist and research assistant observations.
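
To make the consensus rule concrete, here is a minimal sketch (our own illustration, not the study’s code) of how one candidate exclusion could be coded from the three PSS observers’ independent judgments:

```python
def pss_consensus(noted_by):
    """Code one candidate exclusion (one site, one domain) from three PSS judgments.

    noted_by: three booleans, True if that PSS observer recorded the exclusion.
    At least two of three noting it counts as present (full or partial agreement
    on presence); a single lone observation is coded as absent.
    """
    return "present" if sum(noted_by) >= 2 else "absent"

print(pss_consensus((True, True, True)))    # full agreement on presence
print(pss_consensus((True, True, False)))   # partial agreement on presence
print(pss_consensus((True, False, False)))  # partial agreement on absence -> absent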

Figure 2: Steps of qualitative analysis of observation booklets.

Each citizen scientist and trained research assistant then received a percentage score based on their observations for each of the four exclusion domains (observed exclusions/potentially observable exclusions), indicating how well their observations approximated those of the PSSs. (See Supplemental File 2 for specific exclusions per observational sites.)
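
The per-domain score is simply the ratio of matched to matchable exclusions, expressed as a percentage. A minimal sketch, with hypothetical counts:

```python
def domain_score(observed: int, observable: int) -> float:
    """Percent of the PSS-agreed exclusions in one domain that an observer also noted."""
    return 100 * observed / observable

# Hypothetical observer who matched 6 of the 49 PSS-agreed elderly-domain
# exclusions across the nine locations:
print(round(domain_score(6, 49), 1))  # 12.2
```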

Key Variables

Assessment of pre-existing structural awareness

All trained research assistants and all but one citizen social scientist (N = 161) also completed a structural awareness (competency) pre-test to assess their general awareness of and sensitivity to the structural exclusions included in this study (Supplemental File 3; see also Ruth et al. 2020; Metzl and Petty 2017). The test presented three vignettes that had both a visual depiction and a written description of: (1) higher average pay for men in the US compared with women; (2) higher rates of obesity in the southern states than elsewhere in the US; (3) higher numbers of non-white immigrants in lower income neighborhoods in Phoenix. Respondents were asked to provide three possible explanations for each vignette, for a total of 9 explanations (full details in Ruth et al. 2020).

We coded each vignette response as a “social structural” or “other” explanation. Social structural explanations identified policies, economic systems, and other institutions as contributing to or explaining the research finding or attributed differences to disadvantages created by social categories such as race, class, gender, and sexuality (e.g., Neff et al. 2019). Cohen’s Kappa for the social structural code was .827, indicating very good agreement (Landis and Koch 1977). The other explanations category included individual- or group-blaming rationales such as personal failings, social influences, and cultural reasons (See Ruth et al. 2020 for a full description of the findings). Each citizen scientist and trained assistant was then assigned a structural awareness score ranging from 0 to 9, where 0 means they provided no structural explanations and 9 means they provided only structural explanations to each of the three vignettes. A higher score suggested more awareness of and sensitivity to structural exclusions in US society.

Citizen scientist–experienced discrimination

Discrimination experienced by participants was assessed using a 5-item short version of the Everyday Discrimination Scale (Williams et al. 1997), a Likert-type scale that captures the number of contexts and frequencies in which people report “being treated worse than others” over the past 12 months. This yielded a possible score between 0 (no discrimination reported) and 30 (discrimination in many contexts, almost every day). Reported scores ranged from 0 to 22.
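
As a sketch of the scoring, assuming each of the five items is coded 0 (“never”) to 6 (“almost every day”)—an item coding we infer from the 0–30 range reported above, not one stated in the text:

```python
def discrimination_score(item_responses):
    """Sum a respondent's five Everyday Discrimination Scale (short form) items."""
    # Assumed coding: 0 = "never" ... 6 = "almost every day" per item.
    assert len(item_responses) == 5 and all(0 <= r <= 6 for r in item_responses)
    return sum(item_responses)

# Hypothetical respondent:
print(discrimination_score([2, 0, 4, 1, 3]))  # 10
```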

Citizen scientist demographics

Table 1 summarizes the demographic variables for the citizen scientists as well as provides corresponding information for the research assistants.

Table 1

Citizen scientist and research assistant demographic information.

                                             CITIZEN SCIENTISTS   TRAINED SOCIAL SCIENCE
                                             (N = 162)            RESEARCH ASSISTANTS (N = 33)

Age
  Range                                      19–72 years old      22–45 years old
  Mean                                       34.2                 24.3

Gender
  Female                                     92 (58%)             20 (61%)
  Male                                       67 (41%)             7 (21%)
  Other/non-binary                           3 (2%)               1 (3%)
  Decline to answer                          0 (0%)               5 (15%)

Race/ethnicity
  White                                      49%                  39%
  Black, African American                    4%                   0%
  American Indian or Alaska Native           2%                   0%
  Asian/Asian American                       9%                   9%
  Native Hawaiian or other Pacific Islander  0%                   0%
  Hispanic                                   27%                  15%
  ≥ 2 categories                             8%                   24%
  Decline to answer                          1%                   12%

Body size
  Considers self overweight (“yes”)          20%                  6%
  Clinically overweight based on self-reported
  height and weight [BMI 25–29.9]            30%                  18%
  Clinically obese based on self-reported
  height and weight [BMI ≥ 30]               20%                  0%

Notes: Based on self-reported height and weight, 50% of the citizen scientists were either clinically overweight or obese by BMI, although 80% of the volunteers indicated that they did not consider themselves overweight.

Open-ended responses for ethnicity were coded using the five US census categories plus “Hispanic” and a further category that recognized people who reported 2 or more race/ethnicity categories. Citizen scientists who identified as anything other than white were coded as having minority status.

Observer body size was determined from citizen scientists’ self-reported height and weight. From these, observers were assigned to categories above or below BMI 25 for the analysis. These categories were used because BMI 25 is the standard clinical cut-point for defining people as overweight. (This is not to say that people above BMI 25 either perceive themselves as overweight or are metabolically unhealthy; it simply provides a very general heuristic for separating the sample analytically by body size.) Gender was determined based on self-identification as male, female, or other. Finally, for analysis, citizen scientists were grouped into categories of over or under 40 years of age.
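
For reference, BMI is weight in kilograms divided by height in meters squared. A minimal sketch of the grouping, with hypothetical values:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: kg / m^2."""
    return weight_kg / height_m ** 2

def body_size_group(weight_kg: float, height_m: float) -> str:
    """Analytic grouping used here: above vs. below the BMI 25 clinical cut-point."""
    return "BMI >= 25" if bmi(weight_kg, height_m) >= 25 else "BMI < 25"

print(round(bmi(82, 1.70), 1))    # 28.4
print(body_size_group(82, 1.70))  # BMI >= 25
```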

Analysis and Results

Our first hypothesis proposed that citizen scientists perform differently on observational tasks compared with trained observers (both research assistants and experts). This was only partially confirmed: citizen scientists differed from PSSs but not from trained research assistants. Recall that citizen scientists’ observational booklets were coded and scored based on how well their observations matched those of the PSSs. Overall, citizen scientists identified a mean of 12.18% (±5.8) of the possible observable exclusions, and trained research assistants identified 15.26% (±4.0). Both scores were very low compared with the number identified by the PSSs (130 total), but as predicted, the citizen scientists had statistically fewer overlapping observations with the PSSs than did the trained field assistants (t = –2.84, p = 0.043, df = 193). But, as shown in Figure 3, (1) citizen social scientists and trained research assistants did not differ significantly in the average number of overlapping observations in the elderly domain; (2) trained field assistants identified significantly more overlapping exclusions for body size (t = –1.084; p = 0.05, df = 193) and for non-white minorities (t = –6.056; p = 0.000, df = 193) than the citizen social scientists; and (3) citizen social scientists identified significantly more overlapping gender exclusions than the trained field assistants (t = 1.972; p = 0.05, df = 193). The PSSs, then, identified far more potential symbolic markers of exclusion in the public spaces than either the field assistants or the citizen scientists. Research assistants and citizen scientists saw similar levels of potentially exclusionary symbols in the public spaces, but not exactly the same ones.
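
The group comparisons above are independent-samples t-tests on per-observer scores. A minimal sketch with made-up scores standing in for the study data:

```python
from scipy import stats

# Hypothetical per-observer scores (% of PSS-agreed exclusions matched),
# standing in for the real data.
citizen_scores = [11.2, 9.8, 14.5, 12.0, 13.7, 10.4, 15.1, 8.9]
assistant_scores = [15.3, 14.1, 16.2, 15.8, 13.9, 16.7]

# Independent-samples t-test comparing the two groups' means.
result = stats.ttest_ind(citizen_scores, assistant_scores)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```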

Figure 3: Mean percentage of possible correct observations by exclusion domain for citizen scientists (n = 162) versus trained research assistants (n = 33). The whiskers represent standard deviation.

Our second hypotheses (a and b) tested whether citizen scientists’ social positions mattered to what they observed: specifically, whether those who are members of historically discriminated-against groups, or those who report experiencing more discrimination, are more acute observers of potentially exclusionary social phenomena relevant to those groups.

As Table 2 makes clear, this was not confirmed. There was no significant difference in the average number of identified exclusions based on the citizen scientists’ membership in any of the tested categories: elderly, women, larger body size, non-white minority. Likewise, citizen social scientists’ self-reported levels of experienced discrimination within the last 12 months did not predict any differences in percent observation scores in any of the social exclusion domains (all p > 0.05, Student’s t-test, df = 161).

Table 2

Results of linear regression, predicting percent of possible exclusionary observations by category, based on citizen scientists’ initial vignette tests of their structural awareness and reported personal level of experiences of discrimination (note: one participant did not complete the structural awareness test).

DEPENDENT                          PREDICTOR                                  N     UNSTD. BETA   STD. ERROR   STD. BETA   T        P       F       R2      ADJ. R2
Gender exclusions [% correct]      Structural awareness pretest score [0–9]   161   0.995         0.347        0.221       2.87     0.005   8.239   0.049   0.043
Minority exclusions [% correct]    Structural awareness pretest score [0–9]   161   1.256         0.504        0.193       2.491    0.014   6.206   0.037   0.031
Large body exclusions [% correct]  Structural awareness pretest score [0–9]   161   0.099         0.349        0.022       0.284    0.777   0.080   0.000   –0.006
Elderly exclusions [% correct]     Structural awareness pretest score [0–9]   161   0.098         0.248        0.031       0.397    0.692   0.158   0.001   –0.005
Gender exclusions [% correct]      Discrimination experience score [0–22]     162   –0.049        0.140        –0.028      –0.350   0.727   0.123   0.001   –0.006
Minority exclusions [% correct]    Discrimination experience score [0–22]     162   0.113         0.202        0.044       0.560    0.576   0.314   0.002   –0.004
Large body exclusions [% correct]  Discrimination experience score [0–22]     162   –0.015        0.136        –0.009      –0.107   0.915   0.012   0.000   –0.006
Elderly exclusions [% correct]     Discrimination experience score [0–22]     162   0.084         0.098        0.068       0.858    0.392   0.736   0.005   –0.002

Note: p values below 0.05 (first two rows) indicate significant associations.

However, in the linear regressions (Table 2), higher baseline structural awareness scores among citizen social scientists (based on the vignette test) were associated with observation scores more closely aligned with the PSSs’ on gender exclusions (p = 0.005) and non-white minority exclusions (p = 0.014), but not on elderly or body size exclusions (p = 0.692 and p = 0.777, respectively) (see Table 2).
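
Each row of Table 2 is a simple one-predictor regression. A minimal sketch of the computation, with hypothetical values standing in for the real data:

```python
from scipy import stats

# Hypothetical pairs: structural awareness pretest score (0-9) and percent of
# PSS-matched gender-exclusion observations for ten citizen scientists.
awareness = [2, 5, 7, 1, 4, 8, 3, 6, 0, 9]
gender_pct = [8.0, 11.5, 14.0, 7.0, 10.5, 16.0, 9.5, 12.5, 6.0, 17.5]

# Simple linear regression with one predictor, as in each row of Table 2.
fit = stats.linregress(awareness, gender_pct)
print(f"unstandardized beta = {fit.slope:.3f}, p = {fit.pvalue:.4f}, "
      f"R^2 = {fit.rvalue ** 2:.3f}")
```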

Discussion: What can we learn?

Overall, citizen scientists given a social observation task in public spaces performed well compared with trained field research assistants, despite the research assistants having more theoretical and practical knowledge relevant to the task of detecting subtle environmental cues of exclusion in public spaces. Citizen scientists performed similarly to trained research assistants in observing social exclusions related to older age, and were better at observing gender exclusions. They made fewer observations than the trained research assistants of potentially exclusionary symbols related to non-white minority status or large body size. Also, contrary to our hypothesis, citizen scientists who aligned (via self-report) with specific social categories (e.g., women assessing gendered exclusions) performed similarly to citizen scientists who reported they did not belong to those categories. And, contrary to predictions, citizen scientists who reported more frequent experiences of discrimination in their everyday lives were not more likely to observe the same social exclusions as the PSSs on the assigned task.

We suggest there are three key takeaways from our findings. The first is good news for engaging citizens in social science research: the data collected by citizen scientists were seemingly similar in quality to those collected by ostensibly better-prepared and trained field research assistants. In short, citizen social scientists do no worse than field assistants when assigned routine tasks related to observing complex social phenomena.

The social observational task that we set for these citizen scientists to complete proved to be one that is highly nuanced and complex; it was meant to reflect the real analytic work PSSs do. It’s noteworthy, then, that both non-professional groups performed differently from PSSs. This suggests that observations performed by social scientists and non-professional observers may have different analytic values and applications. That is, non-professional observers may be better than professionals at capturing popularly perceived exclusions.

We also proposed that people who are categorized in marginalized groups or with more direct experience of discrimination would better observe social phenomena relevant to the exclusion of their groups. Yet this is not what we found. This surprising conclusion somewhat contradicts some of the basic thinking behind why social scientists posit positionality as an important consideration in social science research design and data interpretations. Broadly, this approach theorizes that minoritized groups should be more sensitive to (i.e., observant of) markers and practices of exclusion (Haraway 2001; Collins 2000; Harding 2001; Smith 1987).

Why might we not have observed this here? It may be that the forms of discrimination the citizen scientists experienced interpersonally in the past (the focus of standard scales) are not recognized as an issue of structure, and thus do not translate into the observation of social exclusions in public spaces. Another possibility is internalized ageism, fat shame, sexism, and racism. Internalized oppression is also theorized to disorient, and hence may have desensitized the citizen scientists who self-reported being members of the relevant social categories under examination here. It may also be that the idea of positionality is usually considered in terms of improved access or interpretive insights, rather than strictly in terms of assessing how symbols are perceived. Perhaps the more relevant theory here comes from sociolinguistics, where the ability to “read” symbols in the environment (whether they are spoken or seen) suggests that meaning is never fixed and always contextual; for example, a small chair is not the same symbol to a kindergarten teacher as to others, for reasons unrelated to the types of categories we tested (Silverstein 1976; Eckert 2008). This was not something we could capture with our research design, which was focused on the assumption from positionality theory that people in particular categories will—through lived experience—be different observers. Working closely with citizen science volunteers clearly introduces the benefit of a community’s interpretation of exclusion (as noted above) versus simply relying on scholarly literature. The lived experience of lay volunteer citizen scientists promises to lend important understandings of how our physical world is navigated and the ways that people feel included or not.

It is worth noting that extraneous comments made by the citizen science observers in their documentation booklets revealed important information about how these marginalized groups are imagined by those without relevant lived experience. For example, the category of “elderly” people was overwhelmingly interpreted as people who have poor vision, use assistive devices (wheelchairs, walkers, canes, etc.), and tire easily. That is, elderly people were largely understood as having mobility issues. Similarly, the category of “women” as a marginalized category was understood as people who are fearful of their surroundings and either are pregnant or have children in tow. This was revealed in documentation notes in which some observation locations were judged exclusionary to women because children could easily fall into a lake or because there were no obvious child-feeding or changing stations in the area. In contrast, these kinds of assumptions (disabled for elderly; child-in-tow for women) were explicitly rejected by the three PSSs as indicative of exclusion, as they draw on trite clichés of these marginalized groups. But the tension between the citizen science lay observers and the social scientists is intriguing. Indeed, it points out that citizen scientists bring unique insight into popular and culturally shared perceptions of exclusion; these are analytically valuable contributions. Future projects could more fully bring citizen scientists into a critique of academic conceptualizations of social exclusion, and exclusions citizen scientists uniquely perceive could be incorporated into the research design. This point gets to the important issue of expertise (see Irwin 1995 for an excellent discussion of expertise across various stakeholders).

Our findings raise intriguing questions about how diverse citizen scientists can enrich future social science research. We selected volunteers based on categorical diversity in their backgrounds, assuming this mattered to how well they would be able to perform tasks. However, such categorizations may hide important diversity within them, and without knowing more about the citizen scientists, it is hard to say to what extent that diversity matters to the social science produced. Clearly, firm answers are beyond the scope of our study as designed. Further research could cognitively trace how individual citizen scientists—in the context of their own social identities—decide which symbols to notice and why, and how their personal lived experiences relate. This may better capture how factors such as discrimination shape the types of social observations people are able (or willing) to make. There is considerable scope and need for further research on all these points before we can draw clear conclusions about how positionality shapes the perceptions of citizen scientists as social observers.

Conclusion

Our findings affirm the comparable quality of some social-meaning data collected by citizen volunteers. Citizen social scientists performed similarly to trained field assistants in observing social exclusions related to older age, better at observing gender exclusions, and worse at identifying exclusions related to non-white minority status or large body size. The literature suggests that minoritized groups are more observant of relevant social markers and practices (Haraway 2001; Collins 2000; Harding 2001; Smith 1987). Contrary to expectations, however, (1) citizen social scientists who aligned (via self-report) with specific social categories (e.g., women, large-bodied, etc.) performed similarly to citizen scientists who reported they did not belong to those categories; and (2) citizen social scientists who reported more frequent experiences of discrimination in their everyday lives were not more likely to observe social exclusions in public places. Our findings suggest we need to better understand how social differences translate into data collection and interpretation among citizen scientists. Attending to the nuances of novel observations by citizen social scientists promises to highlight aspects of lived experience not yet revealed in the extant literature but highly relevant to the tantalizing possibilities of scaling future social science research.

Supplementary Files

The supplementary files for this article can be found as follows:

Supplemental File 1

Observational Protocol Booklet (PDF). DOI: https://doi.org/10.5334/cstp.449.s1

Supplemental File 2

List of PSS Observations per Each Domain at Each Location (PDF). DOI: https://doi.org/10.5334/cstp.449.s2

Supplemental File 3

Baseline Structural Awareness Test and Demographic Information (PDF). DOI: https://doi.org/10.5334/cstp.449.s3

Ethics and Consent

This study was approved by the Arizona State University Institutional Review Board in Fall 2018 (STUDY00008248).

Acknowledgements

We would like to thank all of the citizen scientists and trained research assistants who were part of this project.

Competing Interests

The authors have no competing interests to declare.

Author Contributions

SturtzSreetharan contributed to data curation, investigation, project administration, resources, supervision, and writing the original draft; Ruth contributed to data curation, funding, investigation, project administration, resources, supervision, writing review, and editing; Wutich contributed to the conceptualization, funding, methodology, project administration, resources, writing review, and editing; Glegziabher contributed to conceptualization, investigation, writing review, and editing; Mitchell contributed to data curation, investigation, and resources; Bernard contributed to methodology, visualization, writing review, and editing; and Brewis contributed to conceptualization, formal analysis, funding, methodology, project administration, resources, supervision, writing review, and editing.

References

  1. Aceves-Bueno, E, Adeleye, A, Feraud, M, Huang, Y, Tao, M, Yang, Y and Anderson, S. 2017. The accuracy of citizen science data: A quantitative review. Bulletin of the Ecological Society of America, 98(4): 278–290. DOI: https://doi.org/10.1002/bes2.1336 

  2. Agostini, G, SturtzSreetharan, CL, Wutich, A, Williams, D and Brewis, A. 2019. Citizen Sociolinguistics: A new method for understanding fat talk and other sociolinguistic phenomena, PLoS ONE, 14(5): e0217618. DOI: https://doi.org/10.1371/journal.pone.0217618 

  3. Blöbaum, A and Hunecke, M. 2005. Perceived danger in urban public space: The impacts of physical features and personal factors. Environment and Behavior, 37(4): 465–486. DOI: https://doi.org/10.1177/0013916504269643 

  4. Bonhoure, I, Cigarini, A, Vicens, J and Perelló, J. 2019. Citizen social science in practice: A critical analysis of a mental health community-based project. [internet] OpenSystems Research Group. Universitat de Barcelona. Available at: https://osf.io/preprints/socarxiv/63aj7/ (Last accessed 24 November 2019). DOI: https://doi.org/10.31235/osf.io/63aj7 

  5. Bonney, R, Cooper, CR, Dickinson, J, Kelling, S, Phillips, T, Rosenberg, KV and Shirk, J. 2009. Citizen science: a developing tool for expanding science knowledge and scientific literacy. BioScience, 59: 977–984. DOI: https://doi.org/10.1525/bio.2009.59.11.9 

  6. Bonney, R, Shirk, JL, Phillips, TB, Wiggins, A, Ballard, HL, Miller-Rushing, AJ and Parrish, JK. 2014. Next steps for citizen science. Science, 343(6178): 1436–1437. DOI: https://doi.org/10.1126/science.1251554 

  7. Brewis, A. 2011. Obesity: Cultural and Biocultural Perspectives. New Brunswick, NJ: Rutgers University Press. 

  8. Brewis, A, Trainer, S, Han, S and Wutich, A. 2017. Publically misfitting: extreme weight and the everyday production and reinforcement of felt stigma. Medical anthropology quarterly, 31(2): 257–76. DOI: https://doi.org/10.1111/maq.12309 

  9. Brewis, A and Wutich, A. 2019. Lazy, crazy and disgusting: Stigma and the undoing of global health. Baltimore: JH Press. 

  10. Brown, E and Williams, B. 2019. The potential for citizen science to produce reliable and useful information in ecology. Conservation Biology, 33(3): 561–569. DOI: https://doi.org/10.1111/cobi.13223 

  11. Butler, J. 1990. Gender trouble: Feminism and the subversion of identity. New York: Routledge. 

  12. Byrne, J. 2012. When green is White: The cultural politics of race, nature and social exclusion in a Los Angeles urban national park. Geoforum, 43(3): 595–611. DOI: https://doi.org/10.1016/j.geoforum.2011.10.002 

  13. Cole, T. 1992. The journey of life: A cultural history of aging in America. New York: CUP. 

  14. Collins, PH. 2000. Black feminist thought: Knowledge, consciousness, and the politics of empowerment. New York: Routledge. 

  15. Conrad, CC and Hilchey, KG. 2011. A review of citizen science and community-based environmental monitoring: issues and opportunities. Environmental Monitoring and Assessment, 176: 273–291. DOI: https://doi.org/10.1007/s10661-010-1582-5 

  16. Cooper, CB, Shirk, J and Zuckerberg, B. 2014. The invisible prevalence of citizen science in global research: Migratory birds and climate. PLoS One, 9(9): e106508. DOI: https://doi.org/10.1371/journal.pone.0106508 

  17. Dadich, A. 2014. Citizen social science: A methodology to facilitate and evaluate workplace learning in continuing interprofessional education. Journal of Interprofessional Care, 28(3): 194–199. DOI: https://doi.org/10.3109/13561820.2013.874982 

  18. Eckert, P. 2008. Variation and the indexical field. Journal of Sociolinguistics, 12(4): 453–476. DOI: https://doi.org/10.1111/j.1467-9841.2008.00374.x 

  19. Falk, S, Foster, G, Comont, R, Conroy, J, Bostock, Salisbury, A, Kilbey, D, Bennett, J and Smith, B. 2019. Evaluating the ability of citizen scientists to identify bumblebee (Bombus) species. PLoS ONE, 14(6): e0218614. DOI: https://doi.org/10.1371/journal.pone.0218614 

  20. Freitag, A, Meyer, R and Whiteman, L. 2016. Strategies employed by citizen science programs to increase the credibility of their data. Citizen Science: Theory and Practice, 1(1): 1–11. DOI: https://doi.org/10.5334/cstp.6 

  21. Gruys, K. 2012. Does this make me look fat? Aesthetic labor and fat talk as emotional labor in a women’s plus-size clothing store. Social Problems, 59(4): 481–500. DOI: https://doi.org/10.1525/sp.2012.59.4.481 

  22. Haraway, D. 2001. Situated knowledges: The science question in feminism and the privilege of partial perspective. In: Lederman, M and Bartsch, I (eds.), The gender and science reader, 169–188. London: Routledge. 

  23. Harro, B. 2000. The cycle of socialization. In: Adams, M, Blumenfeld, W, Castañeda, CR and Hackman, H (eds.), Readings for diversity in social justice (pp.16–21). New York: Routledge. 

  24. Harding, SG. 2001. Feminist standpoint epistemology. In: Lederman, M and Bartsch, I (eds.), The gender and science reader, 145–168. London: Routledge. 

  25. Haklay, M. 2013. Citizen science and volunteered geographic information: Overview and typology of participation. In: Sui, D, Elwood, S and Goodchild, M (eds.), Crowdsourcing geographic knowledge, 105–122. Dordrecht: Springer. DOI: https://doi.org/10.1007/978-94-007-4587-2_7 

  26. Hecker, S, Haklay, M, Bowser, A, Makuch, Z, Vogel, J and Bonn, A. 2018. Citizen Science: Innovation in open science, society, and policy. London: UCL Press. DOI: https://doi.org/10.2307/j.ctv550cf2 

  27. Heiss, R and Matthes, J. 2017. Citizen science in the Social Sciences: A call for more evidence. GAIA, 26(1): 22–26. DOI: https://doi.org/10.14512/gaia.26.1.7 

  28. hooks, b. 1984. Feminist theory: From margin to center. New York: Routledge. 

  29. hooks, b. 2000. Feminism is for everybody: Passionate politics. Cambridge: South End Press. 

  30. Hoover, E. 2016. ‘We’re not going to be guinea pigs;’ Citizen science and environmental health in a Native American community. Journal of Science Communication, 15(1): 1–21. DOI: https://doi.org/10.22323/2.15010205 

  31. Housley, W. 2018. Conversation analysis, publics, practitioners and citizen social science. Discourse Studies, 20(3): 431–437. DOI: https://doi.org/10.1177/1461445618754581 

  32. Housley, W, Procter, R, Edwards, A, Burnap, P, Williams, M, Sloan, L, Rana, O, Morgan, J, Voss, A and Greenhill, A. 2014. Big and broad social data and the sociological imagination: A collaborative response. Big Data & Society, 1(2): 1–15. DOI: https://doi.org/10.1177/2053951714545135 

  33. Irwin, A. 1995. Citizen Science: A Study of People, Expertise, and Sustainable Development. London: Routledge Press. 

  34. Janssens, ACJW and Kraft, P. 2012. Research conducted using data obtained through online communities: Ethical implications of methodological limitations. PLOS Medicine, 9(1): e1001328. DOI: https://doi.org/10.1371/journal.pmed.1001328 

  35. Kasperowski, D and Hillman, T. 2018. The epistemic culture in an online citizen science project: Programs, antiprograms and epistemic subjects. Social Studies of Science, 48(4): 564–588. DOI: https://doi.org/10.1177/0306312718778806 

  36. Kasperowski, D, Kullenberg, C and Mäkitalo, Å. 2016. Embedding citizen science in research: Forms of engagement, scientific output and values for science, policy and society. 27 February 2017. Available at https://osf.io/e3x4y/ (Last accessed 20 October 2019). DOI: https://doi.org/10.31235/osf.io/tfsgh 

  37. Kosmala, M, Wiggins, A, Swanson, A and Simmons, B. 2016. Assessing data quality in citizen science. Frontiers in Ecology and the Environment, 14: 551–560. DOI: https://doi.org/10.1002/fee.1436 

  38. Kullenberg, C and Kasperowski, D. 2016. What is citizen science? – A scientometric meta-analysis. PLoS ONE, 11(1): e0147152. DOI: https://doi.org/10.1371/journal.pone.0147152 

  39. Kythreotis, A, Mantyka-Pringle, C, Mercer, T, Whitmarsh, L, Corner, A, Paavola, J, Chambers, C, Miller, BA and Castree, N. 2019. Citizen social science for more integrative and effective climate action: A science-policy perspective. Frontiers in Environmental Science, 7: 1–10. DOI: https://doi.org/10.3389/fenvs.2019.00010 

  40. Landis, R and Koch, G. 1977. An application of hierarchical kappa-type statistics in the assessment of majority agreement among multiple observers. Biometrics, 33(2): 363–374. DOI: https://doi.org/10.2307/2529786 

  41. Losey, J, Perlman, J and Hoebeke, E. 2007. Citizen scientist rediscovers rare nine-spotted lady beetle, Coccinella novemnotata, in eastern North America. Journal of Insect Conservation, 11(4): 415–417. DOI: https://doi.org/10.1007/s10841-007-9077-6 

  42. Lukyanenko, R, Wiggins, A and Rosser, H. 2019. Citizen science: An information quality research frontier. Information Systems Frontiers, 22: 961–983. DOI: https://doi.org/10.1007/s10796-019-09915-z 

  43. MacKenzie, CM, Murray, G, Primack, R and Weihrauch, D. 2017. Lessons from citizen science: Assessing volunteer-collected plant phenology data with Mountain Watch. Biological Conservation, 208: 121–176. DOI: https://doi.org/10.1016/j.biocon.2016.07.027 

  44. Macnicol, J. 2005. Age discrimination: An historical and contemporary analysis. New York: CUP. DOI: https://doi.org/10.1017/CBO9780511550560 

  45. Mantyka-Pringle, C, Jardine, T, Bradford, L, Bharadwaj, L, Kythreotis, A, Fresque-Baxter, J, Kelly, E, Somers, G, Lorne, DE, Jones, PD, Lindenschmidt, E, The Slave River and Delta Partnership. 2017. Bridging science and traditional knowledge to assess cumulative impacts of stressors on ecosystem health. Environmental International, 102: 125–137. DOI: https://doi.org/10.1016/j.envint.2017.02.008 

  46. McKinley, DC, Miller-Rushing, AJ, Ballard, HL, Bonney, R, Brown, H, Cook-Patton, SC, Evans, DM, French, RA, Parrish, JK, Phillips, TB, Ryan, SF, Shanley, LA, Shirk, JL, Stepenuck, KF, Weltzin, JF, Wiggins, A, Boyle, OD, Briggs, RD, Chapin, SF, Hewitt, DA, Preuss, PW and Soukup, MA. 2016. Citizen science can improve conservation science, natural resource management, and environmental protection. Biological Conservation, 208: 15–28. DOI: https://doi.org/10.1016/j.biocon.2016.05.015 

  47. Metzl, JM and Petty, J. 2017. Integrating and assessing structural competency in an innovative prehealth curriculum at Vanderbilt University. Academic Medicine, 92(3): 354. DOI: https://doi.org/10.1097/ACM.0000000000001477 

  48. Muhammad, M, Wallerstein, N, Sussman, A, Avila, M, Balone, L and Duran, B. 2014. Reflections on Researcher Identity and Power: The Impact of Positionality on Community Based Participatory Research (CBPR) Processes and Outcomes. Critical sociology, 41(7–8): 1045–1063. DOI: https://doi.org/10.1177/0896920513516025 

  49. Mwambari, D. 2019. Local Positionality in the Production of Knowledge in Northern Uganda. International Journal of Qualitative Methods. January. DOI: https://doi.org/10.1177/1609406919864845 

  50. Neff, J, Holmes, SM, Strong, S, Chin, G, De Avila, J, Dubal, S, Duncan, LG, Halpern, J, Harvey, M, Knight, KR, Lemay, E, Lewis, B, Matthews, J, Nelson, N, Satterwhite, S, Thompson-Lastad, A and Walkover, L. 2019. The structural competency working group: Lessons from iterative, interdisciplinary development of a structural competency training module. In Hansen, H and Metzl, J (eds.), Structural competency in mental health and medicine, 53–74. New York: Springer Publishing. DOI: https://doi.org/10.1007/978-3-030-10525-9_5 

  51. Pasquini, MW and Olaniyan, O. 2004. The researcher and the field assistant: a cross-disciplinary, cross-cultural viewing of positionality. Interdisciplinary Science Reviews, 29(1): 24–36. DOI: https://doi.org/10.1179/030801804225012446 

  52. Phillips, C, Walshe, D, O’Regan, K, Strong, K, Hennon, C, Knapp, K, Murphy, C and Thorne, P. 2018. Assessing citizen science participation skill for altruism or university course credit: A case study analysis using cyclone center. Citizen Science: Theory and Practice, 3(1): 6. DOI: https://doi.org/10.5334/cstp.111 

  53. Puhl, R and Brownell, K. 2001. Bias, discrimination, and obesity. Obesity Research, 9(12): 788–805. DOI: https://doi.org/10.1038/oby.2001.108 

  54. Purdam, K. 2014. Citizen social science and citizen data? Methodological and ethical challenges for social research. Current Sociology, 62(3): 374–392. DOI: https://doi.org/10.1177/0011392114527997 

  55. Renault, H. 2018. New spider species discovered by citizen scientists using Australian conservation app, 4 January 2018. Available at https://www.abc.net.au/news/2018-01-05/seven-new-spider-species-discovered-by-gamers/9303710 (Last accessed 20 October 2019). 

  56. Rosa, J. 2018. Community as a campus: From “problems” to possibilities in Latinx communities. In Castañeda, M and Krupczynski, J (eds.), Civic engagement in diverse Latinx communities: Learning from social justice partnerships in action, 111–123. New York: Peter Lang Publishing. 

  57. Ruiz-Gutierrez, V, Hooten, MB and Campbell Grant, EH. 2016. Uncertainty in biological monitoring: a framework for data collection and analysis to account for multiple sources of sampling bias. Methods in Ecology and Evolution, 7: 900–909. DOI: https://doi.org/10.1111/2041-210X.12542 

  58. Ruth, A, SturtzSreetharan, CL, Brewis, A and Wutich, A. 2020. Structural competency of pre-health students: Can a single course lead to meaningful change? Med Sci Educ. e-ISSN 2156–8650. DOI: https://doi.org/10.1007/s40670-019-00909-9 

  59. Silverstein, M. 1976. Shifters, Linguistic Categories and Cultural Description. In: Basso, K and Selby, H (eds.) Meaning in Anthropology, 11–55. Albuquerque: University of New Mexico Press. 

  60. Smith, D. 1987. The everyday world as problematic: A feminist sociology. Boston: NUP. 

  61. Steger, C, Butt, B and Hooten, MB. 2017. Safari science: assessing the reliability of citizen science data for wildlife surveys. Journal of Applied Ecology, 54(6): 2053–2062. DOI: https://doi.org/10.1111/1365-2664.12921 

  62. Strobl, B, Etter, S, van Meerveld, I and Seibert, J. 2019. The CrowdWater game: A playful way to improve the accuracy of crowdsourced water level class data. PLoS ONE, 14(9): e0222579. DOI: https://doi.org/10.1371/journal.pone.0222579 

  63. SturtzSreetharan, CL. 2020. Citizen Sociolinguistics: A data collection approach for hard-to-capture naturally-occurring language data. Field Methods, 32(3): 327–334. DOI: https://doi.org/10.1177/1525822X20912211 

  64. SturtzSreetharan, CL, Agostini, G, Brewis, AA and Wutich, A. 2019. Fat talk: A citizen sociolinguistic approach. Journal of Sociolinguistics, 23(3): 263–283. DOI: https://doi.org/10.1111/josl.12342 

  65. Sultana, F. 2007. Reflexivity, positionality and participatory ethics: Negotiating fieldwork dilemmas in international research. ACME: An international journal for critical geographies, 6(3): 374–385. Available at https://acme-journal.org/index.php/acme/article/view/786 (Last accessed 5 August 2021). 

  66. Svendsen, B. 2018. The dynamics of citizen sociolinguistics. Journal of Sociolinguistics, 22(2): 137–60. DOI: https://doi.org/10.1111/josl.12276 

  67. Tauginienė, L, Butkevičienė, E., Vohland, K, Heinisch, B, Daskolia, M, Suškevičs, M, Portela, M, Balázs, B and Prūse, B. 2020. Citizen science in the social sciences and humanities: The power of interdisciplinarity. Palgrave Communications, 6: 89. DOI: https://doi.org/10.1057/s41599-020-0471-y 

  68. Turner, S. 2010. Research Note: The silenced assistant. Reflections of invisible interpreters and research assistants. Asia Pacific Viewpoint, 51(2): 206–219. DOI: https://doi.org/10.1111/j.1467-8373.2010.01425.x 

  69. van der Velde, T, Milton, DA, Lawson, TJ, Wilcox, C, Lansdell, M, Davis, G, Perkins, G and Hardesty, BD. 2017. Comparison of marine debris data collected by researchers and citizen scientists: Is citizen science data worth the effort? Biological Conservation, 208: 127–138. DOI: https://doi.org/10.1016/j.biocon.2016.05.025 

  70. van Strien, AJ, van Swaay, CA and Termaat, T. 2014. Opportunistic citizen science data of animal species produce reliable estimates of distribution trends if analysed with occupancy models. Journal of Applied Ecology, 50(6): 1450–1458. DOI: https://doi.org/10.1111/1365-2664.12158 

  71. Voosen, P. 2018. Update: NASA confirms amateur astronomer has discovered a lost satellite, 31 January 2018. Available at https://www.sciencemag.org/news/2018/01/amateur-astronomer-discovers-revived-nasa-satellite (Last accessed 20 October 2019). DOI: https://doi.org/10.1126/science.aat1319 

  72. Wacquant, LJ. 1993. Urban outcasts: stigma and division in the black American ghetto and the French urban periphery. International Journal of Urban and Regional Research, 17(3): 366–83. DOI: https://doi.org/10.1111/j.1468-2427.1993.tb00227.x 

  73. Williams, DR, Yu, Y, Jackson, JS and Anderson, NB. 1997. Racial differences in physical and mental health: Socioeconomic status, stress, and discrimination. Journal of Health Psychology, 2(3): 335–351. DOI: https://doi.org/10.1177/135910539700200305