
Nature of Tasks in Citizen Science

30 Sep 2014 | Author: citizenscience | Posted in: article | Tags: crowdscience, modularization, protocol, task

Franzoni and Sauermann (2014), in their article "Crowd science: The organization of scientific research in open collaborative projects", suggest a classification of crowd science projects according to task complexity and structure, which also offers an explanation of why and how projects perform, whether or not they ultimately succeed.

They define task complexity in terms of the relationships between individual sub-tasks. Lower task complexity (usually preferred) is attained by minimizing the dependencies between sub-tasks. A large and complex problem can therefore be modularized: divided into many smaller modules that each address a smaller problem, with a strategy or architecture specifying how the modules fit together. The authors take modularization to allow for a greater division of labor. Franzoni and Sauermann then use task structure to denote how well defined the structure of the sub-tasks is.

Task complexity and task structure are useful for examining what amateurs are asked to do. Several citizen science projects, such as Galaxy Zoo, ask for contributions that require only skills common to the general human population. In Galaxy Zoo, for instance, citizen scientists classifying galaxies can work independently on their sub-tasks, without needing to consider what other participants contribute. This modularization, or granularity of tasks, as Benkler and Nissenbaum (2006) called it, allows people with different levels of motivation to work together by contributing small- or large-grained modules, consistent with their interest in the project and their motivation. Furthermore, modularization is compatible with loosely coupled work (Olson & Olson, 2000), which has fewer dependencies, is more routine, and has clear tasks and procedures. As a result, completing the task requires less communication, in both amount and frequency.
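To make the idea concrete, here is a minimal sketch of modularization in code. It is not taken from any of the projects discussed; the work-unit fields, identifiers, and URL are hypothetical:

```python
from dataclasses import dataclass

# A hypothetical Galaxy Zoo-style job, split into independent work units
# (modules) so that each volunteer can work alone, in any order.

@dataclass
class WorkUnit:
    """One self-contained sub-task: classify a single galaxy image."""
    image_id: str
    image_url: str

def modularize(image_ids, base_url="https://example.org/images/"):
    """Divide the overall job into fine-grained, independent work units.

    Because no unit depends on another, participants never need to
    coordinate with each other: loosely coupled work in Olson & Olson's
    sense.
    """
    return [WorkUnit(i, base_url + i) for i in image_ids]

units = modularize(["gz_0001", "gz_0002", "gz_0003"])
for unit in units:
    print(unit.image_id, "->", unit.image_url)  # assign each to any volunteer
```

In this reading, the architecture that "specifies how modules fit together" is simply the rule for recombining completed units into a single dataset.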

According to Franzoni and Sauermann, crowd science projects could benefit from modularization by varying task complexity and structure to suit citizens with different skills and expertise at different stages of a project. Different crowd science projects display more or less clearly formulated task complexity and structure, and can be classified accordingly.

It should be noted that it is not only the organization of crowd science projects, which often involve a number of independent participants in multiple locations, that demands independent and well-structured tasks; the emphasis on controlled, prescribed protocols and on the validation and accuracy of data demands them too. As Bonney et al. (2009) put it:

Citizen science data are gathered through protocols that specify when, where, and how data should be collected. Protocols must define a formal design or action plan for data collection that will allow observations made by multiple participants in many locations to be combined for analysis.
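Read in software terms, such a protocol amounts to a shared record format plus validation rules. The sketch below follows that reading; the field names, permitted methods, and bounds are illustrative assumptions, not details from Bonney et al.:

```python
from dataclasses import dataclass
from datetime import datetime

VALID_METHODS = {"point_count", "transect"}  # the prescribed "how"

@dataclass
class Observation:
    """One record in the format the protocol prescribes."""
    observer_id: str
    timestamp: datetime   # when
    latitude: float       # where
    longitude: float
    method: str           # how
    species: str
    count: int

def conforms_to_protocol(obs: Observation) -> bool:
    """Accept only records that follow the protocol, so that observations
    made by many participants in many locations can be combined."""
    return (
        -90.0 <= obs.latitude <= 90.0
        and -180.0 <= obs.longitude <= 180.0
        and obs.method in VALID_METHODS
        and obs.count >= 0
    )

obs = Observation("vol-42", datetime(2014, 9, 30, 8, 15),
                  57.70, 11.97, "point_count", "Parus major", 3)
print(conforms_to_protocol(obs))  # True: safe to pool with other records
```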

The need for accurate and validated data requires that convergent tasks (Nickerson, 2014) be assigned to citizen scientists, meaning that scientists look for a single output from contributors. Classifying stars, or annotating items according to experts' standard labels, are examples of convergent tasks; a sketch of how such convergence is typically enforced follows the quotation below. Since, in most of the citizen science projects reported in the literature we have examined, citizen scientists are only expected to perform tasks according to prescribed protocols, but not to design those tasks, which remains the scientists' responsibility, it is worth reflecting on Nickerson's thought-provoking words (which refer to the division between the design and the performance of tasks advocated by Taylor):

Distressingly, current crowd work seems to be at the early stages of recapitulating factory employment practices (p. 40).
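As a concrete reading of what "a single output" means in practice: convergent tasks are commonly assigned redundantly to several contributors, whose answers are then reduced to one label, for instance by majority vote. The sketch below illustrates that common practice; the agreement threshold and labels are hypothetical rather than taken from the papers cited here:

```python
from collections import Counter

def aggregate(labels, min_agreement=0.7):
    """Reduce several volunteers' answers for one item to a single output.

    Convergent tasks expect exactly one result per item; redundant
    classification plus majority voting is one common way to obtain it.
    The agreement threshold is a hypothetical quality bar.
    """
    top_label, votes = Counter(labels).most_common(1)[0]
    agreement = votes / len(labels)
    # Items with low agreement are flagged for expert review (None here)
    # rather than accepted, preserving the accuracy the protocol demands.
    return top_label if agreement >= min_agreement else None

print(aggregate(["spiral", "spiral", "elliptical", "spiral"]))  # spiral
print(aggregate(["spiral", "elliptical"]))                      # None
```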


References

Benkler, Y., & Nissenbaum, H. (2006). Commons-based peer production and virtue. Journal of Political Philosophy, 14(4), 394-419.

Bonney, R., Cooper, C. B., Dickinson, J., Kelling, S., Phillips, T., Rosenberg, K. V., & Shirk, J. (2009). Citizen science: A developing tool for expanding science knowledge and scientific literacy. BioScience, 59(11), 977-984.

Franzoni, C., & Sauermann, H. (2014). Crowd science: The organization of scientific research in open collaborative projects. Research Policy, 43(1), 1–20.

Nickerson, J. V. (2014). Crowd work and collective learning. In A. Littlejohn & A. Margaryan (Eds.), Technology-Enhanced Professional Learning (pp. 39-47). Routledge.

Olson, G. M., & Olson, J. S. (2000). Distance matters. Human-Computer Interaction, 15, 139–179.

