
The crucial role of protocols

October 7th, 2014 | Posted by citizenscience in article | control | crowdscience | protocol

Several of the most cited articles we examined stress the importance of a functional protocol. Protocols are considered critical for establishing control over the tasks performed by citizen scientists. Scientists setting up citizen science projects are typically concerned with the accuracy, reliability, and usability of data collected by citizens: how can amateurs collect data that are as good as those generated by professional researchers? According to the scientists interviewed by Cohn (2008), amateurs can collect reliable data and help advance scientific knowledge if they are properly trained to use instruments and to collect and read data. Furthermore, it is important to design specific protocols that limit the tasks assigned to amateurs, to test those protocols, and to check whether reliable data are collected.

What are protocols, by the way? Bonney et al. (2009) described clearly what a protocol is and what it is for: protocols specify when, where, and how data should be gathered. In large projects spanning multiple locations, such as the Seed Preference Test (SPT), which in 1994 attracted more than 17,000 participants of all ages and birding abilities (Trumbull et al., 2000), protocols “define a formal design or action plan for data collection” (p. 980), which allows observations made by many independent amateurs to be combined and used for analysis. Protocols should be clear, easy to use, and engaging for volunteer participants. Bonney et al. (2009) described how project designers at the Cornell Lab of Ornithology (CLO) have tested draft protocols both with local groups, by accompanying them in the field and observing them as they collect and submit data, and with distant groups, by collecting their feedback online.

Unsurprisingly, protocols are one of the pillars supporting the engagement of citizen scientists, as emerges from our reading of the articles. Arguably, they act as ‘representatives’ of professional scientists and as “boundary objects” that align heterogeneous participants with professional scientists, as in the SPT project. They reflect a normative view of how science should be performed and normative expectations of what the “scientific citizen” should do once involved in a research project. Like a speed bump, a technical artifact with an inbuilt script that prescribes that drivers proceed slowly (Latour, 1992), protocols carry an inbuilt script that prescribes what citizen scientists should observe and report. Control over observation tasks is delegated to this tool. Of course, drivers can choose to ignore speed bumps – just fly over them without slowing down. Similarly, citizen scientists can choose to ignore the protocols – perhaps over-reporting certain species of birds and under-reporting others. They will not be fined for their behavior, as they would be by the police for ignoring speed bumps, but their data are unlikely to pass scientists’ scrutiny.
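
To make this ‘inbuilt script’ concrete, here is a minimal sketch, in Python and purely illustrative, of how a protocol’s prescriptions might be encoded as validation rules applied to submitted observations. The field names, species list, and plausibility threshold are hypothetical, not drawn from any actual project.

```python
# Hypothetical sketch: a data-collection protocol encoded as validation
# rules. Field names, species list, and thresholds are illustrative only.

ALLOWED_SPECIES = {"house finch", "house sparrow", "dark-eyed junco"}
MAX_COUNT = 50  # an arbitrary plausibility ceiling for a single report

def validate_observation(obs):
    """Return a list of protocol violations (empty if the record passes)."""
    problems = []
    for field in ("observer_id", "date", "site", "species", "count"):
        if field not in obs:
            problems.append("missing required field: " + field)
    if obs.get("species") not in ALLOWED_SPECIES:
        problems.append("species not on the protocol's target list")
    count = obs.get("count")
    if not isinstance(count, int) or not 0 <= count <= MAX_COUNT:
        problems.append("count missing, non-integer, or implausible")
    return problems

# A report that ignores the protocol is flagged rather than fined:
report = {"observer_id": "v-17", "date": "1994-02-12", "site": "feeder-3",
          "species": "pterodactyl", "count": 999}
print(validate_observation(report))
# ['species not on the protocol's target list',
#  'count missing, non-integer, or implausible']
```

Like the speed bump, the check does not physically prevent deviant reports; it simply keeps them out of the data that scientists go on to analyze.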

References

Bonney, R., Cooper, C. B., Dickinson, J., Kelling, S., Phillips, T., Rosenberg, K. V., & Shirk, J. (2009). Citizen Science: A Developing Tool for Expanding Science Knowledge and Scientific Literacy. BioScience, 59(11), 977–984.

Cohn, J. P. (2008). Citizen Science: Can Volunteers Do Real Research? BioScience, 58(3), 192–197.

Latour, B. (1992). Where are the Missing Masses? The Sociology of a Few Mundane Artifacts. In W. E. Bijker & J. Law (Eds.), Shaping Technology / Building Society: Studies in Sociotechnical Change (pp. 225–258). Cambridge, MA: MIT Press.

Trumbull, D., Bonney, R., Bascom, D., & Cabral, A. (2000). Thinking Scientifically during Participation in a Citizen-Science Project. Science Education, 84(2), 265–275.

Nature of Tasks in Citizen Science

September 30th, 2014 | Posted by citizenscience in article | crowdscience | modularization | protocol | task

Franzoni and Sauermann (2014), in their article titled Crowd science: The organization of scientific research in open collaborative projects, propose classifying crowd science projects according to task complexity and task structure, a classification that also helps explain how projects perform, whether or not they succeed.

They define task complexity in terms of the relationships between individual sub-tasks: complexity is lower (which is usually preferred) when the interdependence among sub-tasks is minimized. A large and complex problem can therefore be modularized, that is, divided into many smaller modules addressing smaller problems, with a strategy or architecture specifying how the modules fit together. The authors take modularization to allow for a greater division of labor. Franzoni and Sauermann then use task structure to denote how well defined the structure of the sub-tasks is. Task complexity and task structure are useful for examining what amateurs are asked to do. Several “citizen science” projects, such as Galaxy Zoo, ask for contributions that require only skills common to the general human population. In Galaxy Zoo, for example, citizen scientists classifying galaxies can work independently on their sub-tasks, without needing to consider what other project participants contribute. This modularization – or granularity of tasks, as Benkler and Nissenbaum (2006) called it – allows people with different levels of motivation to work together by contributing small- or large-grained modules, consistent with their level of interest in the project and their motivation. Furthermore, modularization is compatible with loosely coupled work (Olson & Olson, 2000), which has fewer dependencies, is more routine, and relies on clear tasks and procedures. As a result, completing the task requires less, and less frequent, communication.
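
As a rough illustration of this granularity, the following sketch (with invented names and sizes throughout) splits a large classification job into independent modules that volunteers can claim in chunks matching their level of motivation.

```python
# Hypothetical sketch of task modularization: a large classification job
# split into independent, self-contained sub-tasks. Names and numbers
# are invented for illustration.

galaxy_images = [f"galaxy_{i:05d}.png" for i in range(10_000)]

def modularize(items, module_size):
    """Split a job into independent modules of a chosen granularity."""
    return [items[i:i + module_size]
            for i in range(0, len(items), module_size)]

# Fine-grained modules suit casually motivated volunteers; coarser ones
# suit highly committed contributors. Because each module depends only
# on its own images, volunteers never need to coordinate with each other.
casual_modules = modularize(galaxy_images, module_size=10)
committed_modules = modularize(galaxy_images, module_size=500)

print(len(casual_modules), len(committed_modules))  # 1000 20
```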

According to Franzoni and Sauermann, crowd science projects could benefit from modularization by differentiating task complexity and structure, targeting citizens with different skills and expertise at different stages of a project. Different crowd science projects display more or less clearly formulated task complexities and structures and can be classified accordingly.

It should be noted that independent and well-structured tasks are demanded not only by the organization of crowd science projects, which often involve a number of independent participants in multiple locations, but also by the emphasis on controlled and prescribed protocols and on the validation and accuracy of data. As Bonney et al. (2009) put it:

Citizen science data are gathered through protocols that specify when, where, and how data should be collected. Protocols must define a formal design or action plan for data collection that will allow observations made by multiple participants in many locations to be combined for analysis.

The need for accurate and validated data requires convergent tasks (Nickerson, 2014) to be assigned to citizen scientists, meaning that scientists look for a single output from contributors. Classifying stars, or annotating items according to standard labels provided by experts, are examples of convergent tasks (a sketch of such convergence follows the quote below). Since, in most citizen science projects reported in the literature we examined, citizen scientists are only expected to perform tasks according to prescribed protocols, and not to design those tasks, which remains scientists’ responsibility, it is worth reflecting on Nickerson’s thought-provoking words (which refer to the division between the design and the performance of tasks advocated by Taylor):

Distressingly, current crowd work seems to be at the early stages of recapitulating factory employment practices (p. 40).
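
To make “a single output from contributors” concrete, here is a minimal sketch of one common way (among others) to converge: collecting redundant labels for the same item and reducing them to one answer by majority vote. The image names, labels, and vote counts are invented for illustration.

```python
# Hypothetical sketch of a convergent task: several volunteers label the
# same item, and the redundant answers are reduced to a single output.
# Majority voting is one simple aggregation rule among many.
from collections import Counter

def converge(labels):
    """Reduce redundant volunteer labels to one output by majority vote."""
    winner, _ = Counter(labels).most_common(1)[0]
    return winner

votes = {
    "galaxy_00042.png": ["spiral", "spiral", "elliptical", "spiral"],
    "galaxy_00043.png": ["elliptical", "elliptical", "spiral"],
}

consensus = {image: converge(labels) for image, labels in votes.items()}
print(consensus)
# {'galaxy_00042.png': 'spiral', 'galaxy_00043.png': 'elliptical'}
```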


References

Benkler, Y., & Nissenbaum, H. (2006). Commons-based peer production and virtue. Journal of Political Philosophy, 14(4), 394-419.

Bonney, R., Cooper, C. B., Dickinson, J., Kelling, S., Phillips, T., Rosenberg, K. V., & Shirk, J. (2009). Citizen Science: A Developing Tool for Expanding Science Knowledge and Scientific Literacy. BioScience, 59(11), 977–984.

Franzoni, C., & Sauermann, H. (2014). Crowd science: The organization of scientific research in open collaborative projects. Research Policy, 43, 1–20.

Nickerson, J. V. (2014). Crowd work and collective learning. In A. Littlejohn & A. Margaryan (Eds.), Technology-Enhanced Professional Learning (pp. 39–47). Routledge.

Olson, G. M., & Olson, J. S. (2000). Distance matters. Human-Computer Interaction, 15, 139–179.
