
In a guest blog post on CitizenSci, Gwen Ottinger writes about a fresh study on air quality monitoring. The study reveals that “[a]ir concentrations of potentially dangerous compounds and chemical mixtures are frequently present near oil and gas production sites”, which in turn negatively affects the health of local residents. The data used in the study were collected by volunteer citizen scientists, using inexpensive buckets:

[s]amples were ultimately collected near production pads, compressor stations, condensate tank farms, gas processing stations, and wastewater and produced water impoundments in five states (Arkansas, Colorado, Ohio, Pennsylvania, and Wyoming). (Macey et al., 2014, p. 6)

The method of using buckets was advanced by the Louisiana Bucket Brigade as early as 1995, inspired by Erin Brockovich’s famous litigation against the Pacific Gas and Electric Company.

This case is particularly interesting because the citizen scientists were active in shaping the research problem, and even in choosing the locations for collecting samples. Ottinger writes:

The recently released study pioneers a new approach to choosing sites for air quality monitoring: it mobilizes citizens to identify the areas where sampling was most likely to show the continuous impact of fracking emissions. Citizens chose places in their communities where they noticed a high degree of industrial activity, visible emissions, or health symptoms that could be caused by breathing toxic chemicals. They took samples themselves, following rigorous protocols developed by non-profit groups working in conjunction with regulatory agencies and academic researchers.

Moreover, in another article, in Science, Technology, & Human Values, Ottinger analyses the “Buckets of Resistance” of the Louisiana Bucket Brigade. She argues that the effectiveness of citizen scientists depended to a large extent on standards and standardized practices. To measure air quality successfully, the citizen scientists had to follow certain standardized procedures and tests that were already used by established scientists. In this way, the measurements could “count” as proper scientific observations. However, other actors also used the same standards as an entry point for criticizing the citizen scientists’ measurements.

In the case of the ‘bucket brigades’ and similar projects, the citizen scientists seem to have a great deal of influence in configuring the research process as a whole. The problematization occurs at a local level, where citizens identify and react to a problem in their communities. Decisions about what to measure (and what not to measure) also seem to be in the hands of volunteers. However, as Ottinger shows, the standards and established procedures are harder to reshape. The citizens, in order to make ‘science proper’, need to relate and connect to an already existing paradigm of scientific knowledge and practice.

Do you know of any other interesting projects that share similar features as above? Please leave a comment!

References

Ottinger, G. (2010). Buckets of Resistance: Standards and the Effectiveness of Citizen Science. Science, Technology, & Human Values, 35(2), 244-270.

The crucial role of protocols

October 7th, 2014 | Posted by citizenscience in article | control | crowdscience | protocol - (0 comments)

Several of the most cited articles we examined stress the importance of a functional protocol. Protocols are considered critical for establishing control over the tasks performed by citizen scientists. Scientists setting up citizen science projects are typically concerned with the accuracy, reliability, and usability of data collected by citizens. How can amateurs collect data that are as good as those generated by professional researchers? According to the scientists interviewed by Cohn (2008), amateurs can collect reliable data and help advance scientific knowledge if they are properly trained to use instruments and to collect and read data. Furthermore, it is important to design specific protocols that limit the tasks assigned to amateurs, to test those protocols, and to see whether reliable data are collected.

What are protocols, by the way? Bonney et al. (2009) described clearly what a protocol is and what it is for. They tell us that protocols specify when, where, and how data should be gathered. Used in large projects spanning multiple locations, such as the Seed Preference Test (SPT), which in 1994 attracted more than 17,000 participants of all ages and birding abilities (Trumbull et al., 2000), protocols “define a formal design or action plan for data collection” (p. 980), which allows observations made by many independent amateurs to be combined and used for analysis. These protocols should be clear, easy to use, and engaging for volunteer participants. Bonney et al. (2009) described how project designers working at the Cornell Lab of Ornithology (CLO) have tested draft protocols both with local groups, by accompanying them in the field and observing them as they collect and submit data, and with distant groups, by collecting their feedback online.
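To make the idea concrete, here is a minimal, purely illustrative Python sketch of what a protocol in this sense might look like when written down as a machine-readable specification of when, where, and how to observe, together with a simple check that a submitted observation conforms. All field and site names are hypothetical; this is not drawn from Bonney et al. or from any CLO project.

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class Observation:
    """A single record submitted by a volunteer (hypothetical fields)."""
    site_id: str
    start: time
    duration_minutes: int
    species_count: int

@dataclass
class Protocol:
    """A data-collection protocol: when, where, and how to observe."""
    valid_sites: set            # where observations may be made
    earliest: time              # when the observation window opens
    latest: time                # when the observation window closes
    required_duration: int      # how long each count must last (minutes)

    def accepts(self, obs: Observation) -> bool:
        """Return True if the observation follows the protocol."""
        return (
            obs.site_id in self.valid_sites
            and self.earliest <= obs.start <= self.latest
            and obs.duration_minutes >= self.required_duration
        )

# Hypothetical example: a feeder-count protocol with a fixed morning window.
protocol = Protocol(
    valid_sites={"feeder-A", "feeder-B"},
    earliest=time(7, 0),
    latest=time(10, 0),
    required_duration=15,
)

obs = Observation(site_id="feeder-A", start=time(8, 30),
                  duration_minutes=20, species_count=4)
print(protocol.accepts(obs))  # True: gathered when, where, and how prescribed
```

The point of the sketch is only that a protocol, once made explicit, lets observations from many independent volunteers be checked and combined on the same terms.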

Unsurprisingly, protocols are one of the pillars supporting the engagement of citizen scientists, as emerges from our reading of the articles. Arguably, they act as ‘representatives’ of professional scientists, working as “boundary objects” that align heterogeneous participants and professional scientists, as in the SPT project. They reflect a normative view of how science should be performed and normative expectations of what the “scientific citizen” should do once involved in a research project. Similarly to a speed bump, a technical artifact with an inbuilt script that prescribes drivers to slow down (Latour, 1992), protocols have an inbuilt script that prescribes to citizen scientists what to observe and report. Control over observation tasks is delegated to this tool. Of course, drivers can choose to ignore speed bumps and fly over them without slowing down. Similarly, citizen scientists can choose to ignore the protocols, perhaps over-reporting certain species of birds and under-reporting others. They will not be fined for this behavior, as they would be by the police if they ignored speed bumps, but their data are unlikely to pass scientists’ scrutiny.

References

Bonney, R., Cooper, C. B., Dickinson, J., Kelling, S., Phillips, T., Rosenberg, K. V., & Shirk, J. (2009). Citizen Science: A Developing Tool for Expanding Science Knowledge and Scientific Literacy. BioScience, 59(11), 977-984.

Cohn, J. P. (2008). Citizen Science: Can Volunteers Do Real Research? BioScience, 58(3), 192-197.

Latour, B. (1992). Where are the Missing Masses? The Sociology of a Few Mundane Artifacts. In W. E. Bijker & J. Law (Eds.), Shaping Technology/Building Society: Studies in Sociotechnical Change (pp. 225-258). Cambridge, MA: MIT Press.

Trumbull, D., Bonney, R., Bascom, D., & Cabral, A. (2000). Thinking Scientifically during Participation in a Citizen-Science Project. Science Education, 84(2), 265-275.

The invisible citizen scientist?

October 2nd, 2014 | Posted by citizenscience in article | review - (0 comments)

In early September this year, Caren B. Cooper, Jennifer Shirk and Benjamin Zuckerberg published an article called The Invisible Prevalence of Citizen Science in Global Research: Migratory Birds and Climate Change. The article analyzes the role of citizen science in the most cited papers describing the “impacts of climate change on avian migration”. The results are quite interesting:

We found that 85 of the 171 papers that we could classify were based on citizen science, constituting 5 to 20 papers per claim (Appendix S1). Citizen science heavily informed claims related to ecological patterns and consequences and was less frequently cited for claims about mechanisms (Table 1).

In other words, when it comes to avian migration and climate change, citizen scientists contribute to almost half of the body of scientific facts that we rely on for knowing about this phenomenon. Moreover, the quality of the data was examined, and the authors found no deviation between the observations made by citizen scientists and those made by conventional means.

However, Cooper, Shirk and Zuckerberg point to a problem with the visibility of the citizen scientists. It seems that the scientific community has not yet properly recognized the contribution of citizen scientists, and the authors argue that there is a “stigma” attached to involving the public:

The use of citizen science data in an active field of ecological research, such as migration phenology, is strong evidence that any stigma associated with the use of data collected by volunteers is unwarranted. Yet, the contributions of citizen science were not readily detectable in most cases. Thus, the stigma may persist unless researchers begin to draw attention to the citizen-science elements in their research papers.

As a consequence, scientific articles do not always make visible, in keywords, titles, or abstracts, that citizens have participated. Thus, the authors suggest that “citizen science” should be used as a standardized keyword in all further studies that involve such contributions.

Cooper, C. B., Shirk, J., & Zuckerberg, B. (2014). The Invisible Prevalence of Citizen Science in Global Research: Migratory Birds and Climate Change. PLoS ONE, 9(9), e106508. doi:10.1371/journal.pone.0106508

Nature of Tasks in Citizen Science

September 30th, 2014 | Posted by citizenscience in article | crowdscience | modularization | protocol | task - (0 comments)

Franzoni and Sauermann (2014), in their article titled Crowd science: The organization of scientific research in open collaborative projects, suggest a classification of crowd science projects according to task complexity and structure, which also helps explain how and why projects perform as they do, whether successful or not.

They define task complexity in terms of the relationships between different individual sub-tasks. Lower task complexity (usually preferred) is attained by keeping individual sub-tasks small and independent. A large and complex problem can therefore be modularized: divided into many smaller modules that address smaller problems, with a strategy or architecture specifying how the modules fit together. Modularization is taken by the authors to allow for a greater division of labor. Franzoni and Sauermann then use task structure to denote how well defined the structure of sub-tasks is. Task complexity and task structure are useful for examining what amateurs are asked to do. Several “citizen science” projects, such as Galaxy Zoo, ask for contributions that only require skills common to the general human population. In Galaxy Zoo, for example, citizen scientists classifying galaxies can work independently on their sub-tasks, without needing to consider what other project participants contribute. This modularization, or granularity of tasks, as Benkler and Nissenbaum (2006) called it, allows people with different levels of motivation to work together by contributing small- or large-grained modules, consistent with their level of interest in the project and their motivation. Furthermore, modularization is compatible with loosely coupled work (Olson & Olson, 2000), which has fewer dependencies, is more routine, and relies on clear tasks and procedures. As a result, less frequent communication is needed to complete the task.
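As a purely illustrative aside, the following Python sketch shows what modularization into independent sub-tasks could look like for a Galaxy Zoo-style classification job. The function and identifiers are hypothetical and not taken from any real project; the point is only that each batch can be handled by a volunteer without any coordination with others.

```python
from typing import List

def modularize(image_ids: List[str], batch_size: int) -> List[List[str]]:
    """Split a large classification job into small, independent batches.

    Each batch is a self-contained module: a volunteer can classify it
    without knowing anything about the other batches or contributors.
    """
    return [image_ids[i:i + batch_size]
            for i in range(0, len(image_ids), batch_size)]

# Hypothetical example: 10,000 galaxy images split into fine-grained modules of 20.
image_ids = [f"galaxy-{n:05d}" for n in range(10_000)]
batches = modularize(image_ids, batch_size=20)

# Volunteers with different levels of motivation take on different numbers
# of batches; no communication between them is needed.
print(len(batches))        # 500 independent sub-tasks
print(batches[0][:3])      # ['galaxy-00000', 'galaxy-00001', 'galaxy-00002']
```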

According to Franzoni and Sauermann, crowd science projects can benefit from modularization by differentiating task complexity and structure to target citizens with different skills and expertise at different stages of a project. Different crowd science projects display more or less clearly formulated task complexities and structures and can be classified accordingly.

It should be noted that it is not only the organization of crowd science projects, which often involve a number of independent participants in multiple locations, that demands independent and well-structured tasks, but also the emphasis on controlled and prescribed protocols and on the validation and accuracy of data. As Bonney et al. (2009) put it:

Citizen science data are gathered through protocols that specify when, where, and how data should be collected. Protocols must define a formal design or action plan for data collection that will allow observations made by multiple participants in many locations to be combined for analysis.

The need for accurate and validated data requires convergent tasks (Nickerson, 2013) to be assigned to citizen scientists, meaning that scientists look for a single output from contributors; classifying stars or annotating data according to standard labels from experts are examples of convergent tasks (a minimal aggregation sketch follows the quotation below). Since, in most citizen science projects reported in the literature we have examined, citizen scientists are only expected to perform tasks according to prescribed protocols, but not to design those tasks, which remains the scientists’ responsibility, it is worth reflecting on Nickerson’s thought-provoking words (which refer to the division between the design and the performance of tasks advocated by Taylor):

Distressingly, current crowd work seems to be at the early stages of recapitulating factory employment practices (p. 40).
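For concreteness, here is a minimal, hypothetical sketch of a convergent task in the sense used above: many volunteer labels per item are reduced to a single output per item, in this case by majority vote. The data and function names are invented for illustration and are not drawn from any of the projects discussed.

```python
from collections import Counter
from typing import Dict, List

def aggregate(labels: Dict[str, List[str]]) -> Dict[str, str]:
    """Reduce many volunteer labels per item to a single output per item,
    as a convergent task requires: the majority label wins."""
    return {item: Counter(votes).most_common(1)[0][0]
            for item, votes in labels.items()}

# Hypothetical classifications of two galaxies by several volunteers.
volunteer_labels = {
    "galaxy-00001": ["spiral", "spiral", "elliptical", "spiral"],
    "galaxy-00002": ["elliptical", "elliptical", "spiral"],
}
print(aggregate(volunteer_labels))
# {'galaxy-00001': 'spiral', 'galaxy-00002': 'elliptical'}
```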

 

References

Benkler, Y., & Nissenbaum, H. (2006). Commons-based peer production and virtue. Journal of Political Philosophy, 14(4), 394-419.

Bonney, R., Cooper, C. B., Dickinson, J., Kelling, S., Phillips, T., Rosenberg, K. V., & Shirk, J. (2009). Citizen Science: A Developing Tool for Expanding Science Knowledge and Scientific Literacy. BioScience, 59(11), 977-984.

Franzoni, C., & Sauermann, H. (2014). Crowd science: The organization of scientific research in open collaborative projects. Research Policy, 43, 1–20.

Nickerson, J. V. (2013). Crowd work and collective learning. In A. Littlejohn & A. Margaryan (eds.), Technology-Enhanced Professional Learning (pp. 39-47). Routledge.

Olson, G. M., & Olson, J. S. (2000). Distance matters. Human-Computer Interaction, 15, 139–179.

 

 
