Introduction

Citizen science—collaboration between professional researchers and lay volunteers in research activities—is an increasingly popular research method (; ; ). Most citizen science projects focus on natural and environmental phenomena () and involve labor-intensive data collection that cannot be accomplished by technology alone (). The Audubon Society’s Christmas Bird Count, begun in 1900, now involves tens of thousands of volunteers worldwide. More recent examples include measuring air and water quality (), identifying stages in cyclone development (), and identifying animals (). Key citizen science discoveries () using this approach include finding NASA’s lost IMAGE satellite () and identifying new insect species (; ).

Despite both useful and imagination-capturing outcomes, citizen science projects have been criticized for weak research design, insufficient sampling, low data quality, and unethical practices such as erasing volunteers from reports in scientific publications (; ). With regard to data quality, the results are mixed (; ; ; ; ; ; ; ; ; and, see ). Steger and colleagues (), for example, found that the quality of data collected by volunteers and professionals on wildlife species depended on the species, and that, unsurprisingly, individuals with particular interests in a species were able to identify and document locations more reliably than those without such interests. In a comparison of marine debris data collection, van de Velde et al. (2017) found that the data collected by citizen scientists were of equivalent quality to those collected by researchers. In contrast, a review of five years of citizen science–collected data on alpine flowers found errors in the citizen scientists’ documentation and location specificity, which ultimately limited participation in the research to trained staff and well-trained volunteers (). In short, data quality still poses serious issues for citizen science projects.

Research on data quality in citizen social science is relatively recent (for a review, see ). Purdam (), for example, documented a variety of panhandling behaviors (asking strangers for money/food/goods) in central London using volunteers who collected data during their normal daily activities. Housley () has made clear the benefits of investigating language through citizen social science methods. Other language-focused projects engaging citizen scientists include documenting linguistic diversity in Norway () and public instances of “fat talk” (e.g., requests for evaluation such as “Do I look fat?”) in the US (; ; ).

Some of the emergent literature positions citizen social science volunteers as co-learners and change agents rather than just as data collectors (e.g., ; ; ; ). The supporting argument is that citizen social scientists working alongside professional or trainee scientists are more likely to seek to change or influence attitudes and practices in their own communities, that is, to advance the translation of findings (; ). For example, Kythreotis et al. argue that citizen social scientists working on climate change may “initiate action and policy responses based on their specific forms of social knowing and values,” potentially leading to positive change (). While this claim requires testing, it highlights the potential benefits of bringing more citizens into social science research—for collecting data that could not be collected otherwise, for enhancing science-based social and policy action, and for creating stronger positive relationships between academic research and its social applications ().

In our study, we examined the extent to which data collected by citizen social scientists might (or might not) be subject to the same problems documented for citizen biophysical science, including low reliability and differences in observer capacities (; ; ; ; ; ). We also consider here a concern that is central to contemporary social science discussions around data interpretation in community-based research: the effects of personal positionality on what is observed and how it is experienced and reported by researchers, research assistants, and other stakeholders in the research process (e.g., ; ; ; ; ).

To do this, we recruited 162 citizen volunteers and engaged them in an environmental observational task designed to assess how they noticed (or did not notice) potential indicators of exclusion of frequently discriminated-against social groups in public places (those classified as overweight, elderly, women, and non-white minorities). To help interpret the results, we then compared these citizen scientist observations against two datasets collected in regular (i.e., non–citizen science) researcher modalities: (a) the observations of experienced senior social scientists with theoretical understandings of the issues and with field experience studying discrimination, and (b) the observations of trained research assistants (in this case, all undergraduate social science students).

We test the ways that citizen scientists observe environmental symbols that can be read as potentially exclusionary (i.e., discriminatory), related to old age, female gender, large body size, and minority race/ethnicity. All are well-documented signals potentially observable in public spaces (at least, to those who can read the signal meanings) (; , ; ; ; ; ; ; ; ; ; ; ). Feminist and Black feminist writing (; ; ; ) is also of theoretical use here, because it asserts that one’s personal experiences of marginalization facilitate a keener eye toward markers and practices of social exclusion.

From this literature, we formulated two hypotheses:

H1: Untrained observers. Citizen scientists’ observations of potentially exclusionary social phenomena in public spaces will be significantly different from those of trained social scientists (research assistants or senior researchers).

H2. Positionality. (a) The observations of citizen scientists who are members of discriminated groups (those who are elderly, women, of larger body size, or non-white minorities) will differ significantly from the observations of citizen scientists who are not. (b) The observations of citizen scientists who report personal experiences with discrimination will differ significantly from the observations of citizen scientists who do not report such experiences.

By public social exclusionary spaces, we do not mean that someone is physically barred or legally banned from entry. Rather, we mean perceptible, observable indicators that people in some groups are less welcome than others. Examples are clearly gendered bathrooms, flags or statues associated with slavery, and public health posters that exhibit faceless (dehumanized) bodies to address obesity (weight stigma). Some indicators are more obvious than others and thus can be widely read and recognized. But there are many more, often very subtle, exclusion markers that require an understanding of context to discern, yet reflect—and so can create and recreate—some degree of social stigma toward members of some groups and the discrimination it produces (). Notably, these markers need not be read the same way by everybody, but their meanings should be especially discernable to “cultural experts” (or, said another way, people with relevant positionality should perhaps be better able to detect them).

Material and Methods

All research was conducted in the greater Phoenix, Arizona area. Informed consent was obtained from all participants, under the auspices of the Arizona State University Institutional Review Board.

Recruitment and training

Citizen social scientists

Using email and social media advertising, chain recruitment (starting with research assistants’ social and professional networks), and word of mouth, we recruited 162 citizen scientists in the Phoenix metropolitan area, although we do not know how many people our recruitment efforts ultimately reached. An effort was made to recruit for diversity in educational and occupational backgrounds. Volunteers were invited, in a recruitment brochure (Figure 1), to “identify and record the physical features and social markers that shape how people relate to city spaces.” The brochure described the ways in which people find aspects of a city welcoming or not and the ways that this can affect individual health. The recruitment noted that the combined efforts of professionals and volunteers would enable the project to be undertaken on a larger scale than if completed only by professional researchers. Of the 162 volunteers recruited, all completed the task. Volunteers received an institutionally logoed t-shirt that included a unique design created by students. Only people at least 18 years of age were recruited; we purposefully excluded currently enrolled university students.

Figure 1 

Recruitment brochure for citizen social science project “Eyes on OUR City.”

The volunteers completed a basic demographic survey as well as a baseline structural awareness assessment (explained below). They then went through an in-person briefing on how to carry out the observational tasks and record findings. This was conducted by a research assistant assigned to each volunteer. This orientation took about 20 minutes and focused on ensuring that the volunteers understood where they were to undertake the task and how to mark the booklet with their observations.

Research assistants

In addition to the 162 citizen science volunteers, we recruited 33 experienced research assistants to complete the same observational task. All had completed several social science courses crossing disciplines such as anthropology and global health; they had also received at least 40 hours of research training across one semester (). The training included activities for developing explicit awareness of the four structural exclusions that are the focus of this study: (1) age/elderly; (2) female gender; (3) large body; and (4) non-white minority. Engaging in the training was incentivized through credit for research practicum coursework; however, participation as research assistants was voluntary.

For example, as part of their orientation, the research assistants observed several community locations in groups and practiced evaluating public spaces for the presence of these four structural exclusions, followed by group debriefing and individual reflection with feedback from senior social scientists. Through these guided exercises, the assistants learned to identify examples of discriminatory/exclusionary practices in public spaces and to see how these practices (1) become embedded and normalized in mundane public spaces, (2) cause feelings of shame, and (3) decrease the likelihood that someone will enter these spaces. Many examples of discriminatory/exclusionary practices were identified during the practice observations; we provide three examples here: (1) the window display of a women’s clothing store depicts only thin-bodied mannequins and clothing styles for thin-bodied women, signaling exclusion of large-bodied women from the store (); (2) an underground parking garage is dark and has many walls and corners, signaling the exclusion of women who are concerned about safety risks (); (3) signage to discourage bad public behaviors (like littering, loud talking on cell phones, etc.) appears in both English and Spanish, but signs that explain historical monuments appear only in English (). They then completed the observational task described below.

Professional social scientists (PSSs)

Three social scientists with published expertise in social exclusion and discrimination independently completed the same observation tasks. These three social scientists were part of the larger study team of authors (Mitchell, Ruth, and SturtzSreetharan) who helped design the broader research study (see also ).

Observational task

The observational tasks applied in our analysis were the same for all participants. Each observer was provided with a booklet of instructions on where and how to complete the observational task in nine public spaces: (Locations 1–3) a public city park, (Location 4) a public transit stop, (Location 5) a national chain coffee shop, (Location 6) a small local clothing retailer, (Location 7) a national chain drugstore, (Location 8) an underground parking garage, and (Location 9) a hotel entry. Observations were to be made in the booklet while walking through a pre-designed circuit depicted on maps of the nine sites (see Supplemental File 1). As noted below, it was important to pre-design the circuit so that each participant moved through the public space in a consistent way. The booklet also provided instructions on how to record the observations and how to return the booklet to their research assistant–trainer.

Instructions for where and what to observe, including location information, were on the lefthand page of the booklet; observations were entered on the right side, which was divided into four sections, with ample blank space to document, in writing, observed instances of exclusion in the four domains. The observation booklet also had the following reminder at the top of each page for recording observations: “Remember: DO NOT include any notes about what people are doing in the spaces. Look just at fixed items in the environment, such as parts of buildings, equipment, and signs. Describe any items that you see in this location that could make any of the following groups feel unwelcome or excluded. Identify what you see and tell us why you think it could be unwelcoming to any of the groups. Describe as many items as you can for each category. If there are no items relevant to that category, write ‘none’.” These instructions allowed the citizen scientists to complete the observational task at their own pace according to their availability. Participants were asked to focus on fixed items in order to highlight the ways that the physical environment contributes to feelings of exclusion and discrimination, rather than people and their potentially discriminatory/exclusionary behaviors (e.g., staring, rude comments).

Each site in the booklet noted a “starting point,” “recording instructions,” and an “end point.” The “starting point” instructions indicated where to stand when beginning the observation, along with a Google satellite image of the physical space to ensure each volunteer was starting and ending at precisely the same points. Likewise, the “recording instructions” included a bird’s-eye-view photo from Google Maps indicating the walking route for recording observations. This section also included explicit instructions for each location. For example, Location 1: “Walk up the west stairs. Then walk down the east stairs. Walk to the paved area in the center of the park.” Verbal feedback from research assistants indicated that completing all observational tasks took approximately 90 minutes.

Qualitative evaluation of the observational task

Each booklet of observations was coded to assess overlap: that is, whether observers did (1) or did not (0) identify the same exclusions as the PSSs in each location and domain. We used Cohen’s kappa, a widely accepted measure of interrater reliability (Bernard et al. 2016), to assess agreement between the first author and a primary coder on the presence of overlap. Both independently coded 15% of observations in the combined dataset from citizen scientists and trained assistants. Cohen’s kappa was .831—very good agreement on the presence of overlap by the Landis and Koch () standard. This supported coding the entire dataset.
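To make the reliability check concrete, the following minimal Python sketch computes Cohen’s kappa for two coders’ binary overlap codes using scikit-learn; the example codes are hypothetical illustrations, not the study data.

```python
# Hypothetical sketch of the interrater reliability check: each entry is a
# binary overlap code (1 = observer noted the same exclusion as the PSSs,
# 0 = did not) for one location-domain observation.
from sklearn.metrics import cohen_kappa_score

coder_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]  # first author's codes (illustrative)
coder_b = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]  # primary coder's codes (illustrative)

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa = {kappa:.3f}")  # values above .80 indicate very good agreement
```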

The PSS data were treated differently. Observations (for each of the four observational domains in each of the nine observation sites) were coded as full agreement on presence (all three identified the same exclusion in the same observation site for the same domain—a result we would later use to compare observations from PSSs with those of citizen scientists and research assistants); partial agreement on presence (two of the three social scientists noted the same exclusion); and partial agreement on absence (one social scientist noted an exclusion but the other two did not). This last was coded as an absence. A total of 130 observable exclusions were agreed upon by the PSS observers across all nine locations (Figure 2). Examples included small font size on signage (making it difficult for the elderly to read); very small parking spaces (making it difficult for large-bodied people to exit a car); dark, secluded areas (perceived danger for women); and signage that appeared only in English, with the exception of signs targeting negative public behavior, which also appeared in Spanish. Of the 130 exclusions agreed on by the PSS observers across the nine locations, 49 excluded the elderly, 26 excluded non-white persons, 38 excluded large bodies (overweight), and 17 excluded women. In Location 1, for example, 6 of the 49 elderly exclusions were present, along with 2 for women, 6 for large bodies, and 2 for people seen as a non-white minority. These were then compared with both the citizen scientist and research assistant observations.
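As a sketch of the consensus rule just described (an exclusion counts as present only when at least two of the three PSSs note it), the decision logic can be written as:

```python
# Hypothetical sketch of the PSS consensus coding rule described above.
def pss_consensus(votes: tuple[int, int, int]) -> int:
    """votes: 0/1 codes from the three PSS observers for one site-domain exclusion."""
    return 1 if sum(votes) >= 2 else 0  # present only with full or partial agreement

print(pss_consensus((1, 1, 1)))  # full agreement on presence -> 1
print(pss_consensus((1, 1, 0)))  # partial agreement on presence -> 1
print(pss_consensus((1, 0, 0)))  # partial agreement on absence -> 0
```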

Figure 2 

Steps of qualitative analysis of observation booklets.

Each citizen scientist and trained research assistant then received a percentage score based on their observations for each of the four exclusion domains (observed exclusions/potentially observable exclusions), indicating how well their observations approximated those of the PSSs. (See Supplemental File 2 for specific exclusions per observational sites.)
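For illustration, here is a minimal sketch of this per-observer scoring, assuming hypothetical counts and using the per-domain totals reported above (49 elderly, 17 women, 38 large body, and 26 non-white minority exclusions):

```python
# Per-domain denominators: the exclusions the PSS observers agreed were present.
PSS_TOTALS = {"elderly": 49, "women": 17, "large_body": 38, "minority": 26}

def domain_score(observed: int, domain: str) -> float:
    """Percent of PSS-identified exclusions in a domain that an observer also noted."""
    return 100 * observed / PSS_TOTALS[domain]

# E.g., an observer who noted 6 of the 49 elderly exclusions scores ~12.2%.
print(f"{domain_score(6, 'elderly'):.1f}%")
```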

Key Variables

Assessment of pre-existing structural awareness

All trained research assistants and all but one citizen social scientist (N = 161) also completed a structural awareness (competency) pre-test to assess their general awareness of and sensitivity to the structural exclusions included in this study (Supplemental File 3; see also ; ). The test presented three vignettes that had both a visual depiction and a written description of: (1) higher average pay for men in the US compared with women; (2) higher rates of obesity in the southern states than elsewhere in the US; and (3) higher numbers of non-white immigrants in lower-income neighborhoods in Phoenix. Respondents were asked to provide three possible explanations for each vignette, for a total of nine explanations (full details in ).

We coded each vignette response as a “social structural” or “other” explanation. Social structural explanations identified policies, economic systems, and other institutions as contributing to or explaining the research finding, or attributed differences to disadvantages created by social categories such as race, class, gender, and sexuality (e.g., ). Cohen’s kappa for the social structural code was .827, indicating very good agreement (). The “other” explanations category included individual- or group-blaming rationales such as personal failings, social influences, and cultural reasons (see ). Each citizen scientist and trained assistant was then assigned a structural awareness score ranging from 0 to 9, where 0 means they provided no structural explanations and 9 means they provided only structural explanations for each of the three vignettes. A higher score suggested more awareness of and sensitivity to structural exclusions in US society.
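The scoring reduces to counting structural codes across the nine hand-coded explanations; a minimal sketch with hypothetical codes:

```python
# Nine hand-coded explanations (three per vignette); the codes are hypothetical.
codes = ["structural", "other", "structural",   # vignette 1: gender pay gap
         "other", "other", "structural",        # vignette 2: regional obesity rates
         "structural", "other", "other"]        # vignette 3: immigrant neighborhoods

awareness_score = sum(code == "structural" for code in codes)  # ranges 0-9
print(awareness_score)  # 4 -> a moderate level of structural awareness
```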

Citizen scientist–experienced discrimination

Discrimination experienced by participants was assessed using a 5-item short version of the Everyday Discrimination Scale (), a Likert-type scale that captures the number of contexts and frequencies in which people report “being treated worse than others” over the past 12 months. This yielded a possible score between 0 (no discrimination reported) and 30 (discrimination in many contexts, almost every day). Reported scores ranged from 0 to 22.
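Given the reported range, each of the five items appears to be rated on a 0–6 frequency scale (never to almost every day); a hypothetical scoring sketch under that assumption:

```python
# One respondent's five item ratings on the short Everyday Discrimination
# Scale; values are hypothetical. 0 = never ... 6 = almost every day (assumed).
items = [3, 1, 0, 2, 4]

assert len(items) == 5 and all(0 <= rating <= 6 for rating in items)
total = sum(items)  # possible range: 0 (none reported) to 30
print(total)        # 10 -> discrimination reported across several contexts
```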

Citizen scientist demographics

Table 1 summarizes the demographic variables for the citizen scientists and provides corresponding information for the research assistants.

Table 1

Citizen scientist and research assistant demographic information (N = 162).


                                                   CITIZEN SCIENTISTS    TRAINED SOCIAL SCIENCE
                                                   (N = 162)             RESEARCH ASSISTANTS (N = 33)

Age
  Range                                            19–72 years old       22–45 years old
  Mean                                             34.2                  24.3

Gender
  Female                                           92 (58%)              20 (61%)
  Male                                             67 (41%)              7 (21%)
  Other/non-binary                                 3 (2%)                1 (3%)
  Decline to answer                                0 (0%)                5 (15%)

Race/ethnicity
  White                                            49%                   39%
  Black, African American                          4%                    0%
  American Indian or Alaska Native                 2%                    0%
  Asian/Asian American                             9%                    9%
  Native Hawaiian or other Pacific Islander        0%                    0%
  Hispanic                                         27%                   15%
  ≥ 2 categories                                   8%                    24%
  Decline to answer                                1%                    12%

Body size
  Do you consider yourself overweight? = Yes       20%                   6%
  Clinically overweight based on self-report
    of height and weight [BMI 25–29.9]             30%                   18%
  Clinically obese based on self-report*
    of height and weight [BMI ≥ 30]                20%                   0%

Notes: Based on self-reported height and weight, 50% of the citizen scientists were clinically overweight or obese by BMI, although 80% of the volunteers indicated that they did not consider themselves overweight.

Open-ended responses for ethnicity were coded using the five US census categories plus “Hispanic” and a further category for people who reported two or more race/ethnicity categories. Citizen scientists who identified as anything other than white were coded as having minority status.

Observer body size was determined from citizen scientists’ self-reported height and weight, from which each observer was assigned to a BMI ≥ 25 or BMI < 25 category for the analysis. These categories were used because BMI 25 is the standard clinical cut-point for defining people as overweight or not. (This is not to say that people above BMI 25 either perceive themselves as overweight or are metabolically unhealthy; it simply provides a very general heuristic for separating the sample analytically by body size.) Gender was determined based on self-identification as male, female, or other. Finally, for analysis, citizen scientists were grouped into categories of over or under 40 years of age.
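A minimal sketch of this grouping, assuming metric units for the standard BMI formula (weight in kilograms divided by height in meters squared); the input values are hypothetical:

```python
# Assign analysis categories from self-reported height and weight.
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: kg / m^2."""
    return weight_kg / height_m ** 2

def body_size_group(weight_kg: float, height_m: float) -> str:
    """Split at the standard clinical overweight cut-point of BMI 25."""
    return "BMI >= 25" if bmi(weight_kg, height_m) >= 25 else "BMI < 25"

print(body_size_group(82, 1.70))  # BMI ~28.4 -> "BMI >= 25"
```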

Analysis and Results

Our first hypothesis proposed that citizen scientists perform differently on observational tasks compared with trained observers (both research assistants and experts). This was only partially confirmed: citizen scientists clearly differed from the PSSs, but differed from the trained research assistants only modestly and not in all domains. Recall that citizen scientists’ observational booklets were coded and scored based on how well their observations matched those of the PSSs. Overall, citizen scientists identified a mean of 12.18% (± 5.8) of the possible observable exclusions, and trained research assistants identified 15.26% (± 4.0). Both scores were very low compared with the 130 exclusions identified by the PSSs, but as predicted, the citizen scientists had statistically fewer overlapping observations with the PSSs than did the trained field assistants (t = –2.84, p = 0.043, df = 193). But, as shown in Figure 3, (1) citizen social scientists and trained research assistants did not differ significantly in the average number of overlapping observations in the elderly domain; (2) trained field assistants identified significantly more overlapping exclusions for body size (t = –1.084; p = 0.05, df = 193) and for non-white minorities (t = –6.056; p = 0.000, df = 193) than the citizen social scientists; and (3) citizen social scientists identified significantly more overlapping gender exclusions than the trained field assistants (t = 1.972; p = 0.05, df = 193). The PSSs, then, identified far more potential symbolic markers of exclusion in the public spaces than either the field assistants or the citizen scientists. Research assistants and citizen scientists saw similar levels of potentially exclusionary symbols in the public spaces, but not exactly the same ones.

Figure 3 

Mean percentage of possible correct observations by exclusion domain for citizen scientists (n = 162) versus trained research assistants (n = 33). The whiskers represent standard deviation.
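The overall comparison is an independent-samples t-test with pooled variance (df = 162 + 33 - 2 = 193). A sketch with simulated scores matching the reported means and spreads; the data are synthetic, for illustration only:

```python
# Simulate per-observer percent-overlap scores and compare the two groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
citizen_scores = rng.normal(12.18, 5.8, size=162)    # synthetic citizen scientists
assistant_scores = rng.normal(15.26, 4.0, size=33)   # synthetic research assistants

# Student's t-test (variances pooled), giving df = 193 as in the paper.
t, p = stats.ttest_ind(citizen_scores, assistant_scores)
print(f"t = {t:.2f}, p = {p:.3f}")
```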

Our second hypothesis (parts a and b) tested whether citizen scientists’ social positions mattered to what they observed, specifically, whether those who are members of historically discriminated groups, or those who report experiencing more discrimination, are more acute observers of potentially exclusionary social phenomena relevant to their groups.

As Table 2 makes clear, this was not confirmed. There was no significant difference in the average number of identified exclusions based on the citizen scientists’ membership in any of the tested categories: elderly, women, larger body size, or non-white minority. Likewise, citizen social scientists’ self-reported levels of experienced discrimination within the last 12 months did not predict differences in observation scores in any of the social exclusion domains (all p > 0.05 based on Student’s t-test with df = 161).

Table 2

Results of linear regression, predicting percent of possible exclusionary observations by category, based on citizen scientists’ initial vignette tests of their structural awareness and reported personal level of experiences of discrimination (note: one participant did not complete the structural awareness test).


DEPENDENT [% CORRECT]     PREDICTOR                             N    UNSTD.  STANDARD  STD.    T       P       F      R²     ADJ.
                                                                     BETA    ERROR     BETA                                  R²

Gender exclusions         Structural awareness pretest [0–9]    161  0.995   0.347     0.221   2.87    0.005*  8.239  0.049  0.043
Minority exclusions       Structural awareness pretest [0–9]    161  1.256   0.504     0.193   2.491   0.014*  6.206  0.037  0.031
Large body exclusions     Structural awareness pretest [0–9]    161  0.099   0.349     0.022   0.284   0.777   0.080  0.000  –0.006
Elderly exclusions        Structural awareness pretest [0–9]    161  0.098   0.248     0.031   0.397   0.692   0.158  0.001  –0.005
Gender exclusions         Discrimination experience [0–22]      162  –0.049  0.140     –0.028  –0.349  0.727   0.123  0.001  –0.006
Minority exclusions       Discrimination experience [0–22]      162  0.113   0.202     0.044   0.560   0.576   0.314  0.002  –0.004
Large body exclusions     Discrimination experience [0–22]      162  –0.015  0.136     –0.009  –0.107  0.915   0.012  0.000  –0.006
Elderly exclusions        Discrimination experience [0–22]      162  0.084   0.098     0.068   0.858   0.392   0.736  0.005  –0.002

Note: Asterisks mark significant p values (p < 0.05).

However, as Table 2 also shows, higher baseline structural awareness scores among citizen social scientists (based on the vignette test) predicted observation scores more closely aligned with the PSSs on gender exclusions (p = 0.005) and non-white minority exclusions (p = 0.014), but not on elderly or body size exclusions (p = 0.392 and p = 0.915, respectively).
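Each row of Table 2 is a simple one-predictor linear regression. Here is a sketch of the first model fit with statsmodels on simulated data; the values are hypothetical, not the study dataset:

```python
# Regress percent of gender exclusions observed on structural awareness (0-9).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
awareness = rng.integers(0, 10, size=161).astype(float)           # pretest scores
gender_pct = 10 + 1.0 * awareness + rng.normal(0, 4.5, size=161)  # simulated outcome

X = sm.add_constant(awareness)   # intercept + predictor
fit = sm.OLS(gender_pct, X).fit()
print(fit.summary())             # reports beta, SE, t, p, F, and R-squared
```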

Discussion: What can we learn?

Overall, citizen scientists given a social observation task in public spaces performed well compared with trained field research assistants, despite the research assistants having more theoretical and practical knowledge relevant to the task of detecting subtle environmental cues of exclusion in public spaces. Citizen scientists performed similarly to trained research assistants in observing social exclusions related to older age, and were better at observing gender exclusions. They made fewer overlapping observations than trained research assistants for potential exclusionary symbols related to non-white minority status or large body size. Also, contrary to our hypothesis, citizen scientists who aligned (via self-report) with specific social categories (e.g., women assessing gendered exclusions) performed similarly to citizen scientists who reported they did not belong to those categories. And, contrary to predictions, citizen scientists who reported more frequent experiences of discrimination in their everyday lives were no more likely to observe the same social exclusions in public places as the PSSs on the assigned task.

We suggest there are three key takeaways from our findings. The first is good news for engaging citizens in social science research. The data collected by citizen scientists were seemingly similar in quality to those collected by ostensibly better-prepared and trained field research assistants. In short, citizen social scientists do no worse than field assistants when assigned routine tasks related to observing complex social phenomena.

The social observational task that we set for these citizen scientists proved to be highly nuanced and complex; it was meant to reflect the real analytic work PSSs do. It is noteworthy, then, that both non-professional groups performed differently from the PSSs. This suggests that observations made by social scientists and by non-professional observers may have different analytic value and applications. That is, non-professional observers may be better than professionals at capturing popularly perceived exclusions.

We also proposed that people who are categorized in marginalized groups, or who have more direct experience of discrimination, would better observe social phenomena relevant to the exclusion of their groups. Yet this is not what we found. This surprising result contradicts some of the basic thinking behind why social scientists posit positionality as an important consideration in social science research design and data interpretation. Broadly, this approach theorizes that minoritized groups should be more sensitive to (i.e., observant of) markers and practices of exclusion (; ; ; ).

Why might we not have observed this here? It may be that the forms of interpersonal discrimination experienced in the past by the citizen scientists (the focus of standard scales) are not recognized as an issue of structure, and thus do not translate into the observation of social exclusions in public spaces. Another possibility is internalized ageism, fat shame, sexism, and racism: internalized oppression is also theorized to disorient, and hence may desensitize, the citizen scientists who self-reported being members of the relevant social categories under examination here. It may also be that positionality is usually considered in terms of improved access or interpretive insights, rather than strictly in terms of how symbols are perceived. Perhaps the more relevant theory here comes from sociolinguistics, where the ability to “read” symbols in the environment (whether spoken or seen) suggests that meaning is never fixed and always contextual; for example, a small chair is not the same symbol to a kindergarten teacher as to others, for reasons unrelated to the types of categories we tested (; ). This was not something we could capture with our research design, which was built on the assumption from positionality theory that people in particular categories will—through lived experience—be different observers. Working closely with citizen science volunteers clearly introduces the benefit of a community’s interpretation of exclusion (as noted above) versus simply relying on scholarly literature. The lived experience of lay volunteer citizen scientists promises to lend important understandings of how our physical world is navigated and the ways that people feel included or not.

It is worth noting that extraneous comments made by the citizen science observers in their documentation booklets revealed important information about the way these marginalized groups are imagined by those without relevant lived experience. For example, the category of “elderly” people was overwhelmingly interpreted as people who have poor vision, use assistive devices (wheelchairs, walkers, canes, etc.), and tire easily. That is, elderly people were (in many ways) understood as having mobility issues. Similarly, the category of “women” as a marginalized category was understood as people who are fearful of their surroundings and either are pregnant or have children in tow. This was revealed in documentation notes wherein some of the observation locations were seen as exclusionary to women because children could easily fall into a lake or because there were no obvious child-feeding or changing stations in the area. In contrast, these kinds of assumptions (disabled for elderly; child-in-tow for women) were explicitly rejected by the three PSSs as indicative of exclusion, because they draw on trite clichés of these marginalized groups. But the tension between the citizen science lay observers and the social scientists is intriguing. Indeed, it points out that citizen scientists bring unique insight into popular and culturally shared perceptions of exclusion; these are analytically valuable contributions. Future projects could more fully bring citizen scientists into a critique of academic conceptualizations of social exclusion, and exclusions that citizen scientists uniquely perceive could be incorporated into the research design. This point gets at the important issue of expertise (see ).

Our findings raise intriguing questions about how diverse citizen scientists can enrich future social science research. We selected volunteers based on categoric diversity in their backgrounds, assuming this mattered to how well they would be able to perform the tasks. However, such categorizations may hide important diversity within them, and without knowing more about the citizen scientists, it is hard to say to what extent that diversity matters to the social science produced. Clearly, firm answers are beyond the scope of our study as designed. Further research could cognitively trace how individual citizen scientists—in the context of their own social identities—decide exactly which symbols to notice and why, and how their personal lived experiences relate. This may better capture how factors such as discrimination could shape the types of social observations people are able (or willing) to make. There is considerable scope and need for further research on all these points before we can draw clear conclusions about how positionality shapes the perceptions of citizen scientists as social observers.

Conclusion

Our findings affirm the comparable quality of some social-meaning data collected by citizen volunteers. Citizen social scientists performed similarly to trained field assistants in observing social exclusions related to older age, better in observing gender exclusions, and lower in identifying exclusions related to non-white minority status or large body size. The literature suggests that minoritized groups are more observant of relevant social markers and practices (; ; ; ). Contrary to expectations, however, (1) citizen social scientists who aligned (via self-report) with specific social categories (e.g., women, large-bodied, etc.) performed similarly to citizen scientists who reported they did not belong to those categories; and (2) citizen social scientists who reported more frequent experiences of discrimination in their everyday lives were not more likely to observe social exclusions in public places. Our findings suggest we need to better understand how social differences translate into data collection and interpretation among citizen scientists. Attending to the nuances of novel observations by citizen social scientists promises to highlight aspects of lived experience not yet revealed in the extant literature but highly relevant to the tantalizing possibilities of scaling future social science research.

Supplementary Files

The supplementary files for this article can be found as follows:

Supplemental File 1

Observational Protocol Booklet (PDF). DOI: https://doi.org/10.5334/cstp.449.s1

Supplemental File 2

List of PSS Observations per Each Domain at Each Location (PDF). DOI: https://doi.org/10.5334/cstp.449.s2

Supplemental File 3

Baseline Structural Awareness Test and Demographic Information (PDF). DOI: https://doi.org/10.5334/cstp.449.s3