Productivity Decomposed: Getting Big Things Done with Little Microtasks

Annotated Bibliography


Paper Notes
PlanSourcing: Generating behavior change plans with friends and crowds.
Elena Agapie, Lucas Colusso, Sean A. Munson, and Gary Hsieh.
CSCW 2016.
Studies the quality of action plans when created by friends or strangers. Friends provide personalized structure, while strangers provide more diverse recommendations.
The future of work: Working for the machine.
Michael S. Bernstein.
Pacific Standard (2015).
Envisions a future of work where computers go beyond helping people perform work tasks to a world where computers algorithmically allocate work tasks to people.
Soylent: A word processor with a crowd inside.
Michael S. Bernstein, Greg Little, Robert C. Miller, Björn Hartmann, Mark S. Ackerman, David R. Karger, David Crowell, and Katrina Panovich.
UIST 2010.
Incorporates crowdsourcing into a word processor, enabling writers to call on Mechanical Turk workers to shorten, proofread, or otherwise edit parts of their document on demand.
VizWiz: Nearly real-time answers to visual questions.
Jeffrey P. Bigham, Chandrika Jayant, Hanjie Ji, Greg Little, Andrew Miller, Robert C. Miller, Robin Miller, Aubrey Tatarowicz, Brandyn White, Samuel White, and Tom Yeh.
UIST 2010.
Using microtasks, VizWiz made it possible to provide always-available, near real-time support for visual questions asked by blind and low-vision users. The deployed system has answered tens of thousands of questions for real users.
Wait-Learning: Leveraging wait time for second language education.
Carrie J. Cai, Philip J. Guo, James R. Glass, and Robert C. Miller.
CHI 2015.
Introduces wait-learning, where people can leverage brief moments of waiting during their everyday activities to learn new things. Specifically looks at vocabulary practice within the context of chat.
Chain reactions: The impact of order on microtask chains.
Carrie J. Cai, Shamsi Iqbal, and Jaime Teevan.
CHI 2016.
Shows that microtasks can be chained together in a way that builds context and makes it easier to perform subsequent microtasks.
Break it down: A comparison of macro- and microtasks.
Justin Cheng, Jaime Teevan, Shamsi T. Iqbal, and Michael S. Bernstein.
CHI 2015.
Looks at the impact on task performance if a large task is broken down into smaller components. Finds that while people complete the task more slowly, they find it easier, do higher quality work, and are more resilient to interruption.
Cascade: Crowdsourcing taxonomy creation.
Lydia B. Chilton, Greg Little, Darren Edge, Daniel S. Weld, and James A. Landay.
CHI 2013.
Allows distributed groups of people to create a taxonomy of information without full access to the data being categorized. This is more scalable than traditional approaches that require centralized expertise.
A benchmark for interactive augmented reality instructions for assembly tasks.
Markus Funk, Thomas Kosch, Scott W. Greenwald, and Albrecht Schmidt.
MUM 2015.
Focuses on how to divide microtasks even further into single actions performed at manual assembly workplaces. Also proposes a method for benchmarking microtask instructions and evaluating the suitability of assembly instructions in that domain.
Glance: Rapidly coding behavioral video with the crowd.
Walter S. Lasecki, Mitchell Gordon, Danai Koutra, Malte F. Jung, Steven P. Dow, and Jeffrey P. Bigham.
UIST 2014.
Glance divides the task of annotating (coding) behavioral events in video among large groups of crowd workers, making it possible to annotate hours of video in seconds or minutes, instead of weeks.
Taskgenies: Automatically providing action plans helps people complete tasks.
Nicolas Kokkalis, Thomas Köhn, Johannes Huebner, Moontae Lee, Florian Schulze, and Scott R. Klemmer.
TOCHI (2013).
Presents a crowd-based approach for breaking tasks down into action plans, or the concrete steps necessary to implement the tasks. Finds people who are provided with action plans are more likely to finish their tasks than those prompted to create their own plans.
Real-time captioning by groups of non-experts.
Walter S. Lasecki, Christopher D. Miller, Adam Sadilek, Andrew Abumoussa, Donato Borrello, Raja Kushalnagar, and Jeffrey P. Bigham.
UIST 2012.
By dividing up a continuous task among multiple contributors, Scribe makes it possible to provide real-time captions with non-expert captionists, instead of expensive, hard-to-schedule professionals.
WearWrite: Orchestrating the crowd to complete complex tasks from wearables (we wrote this paper on a watch).
Michael Nebeling, Anhong Guo, Kyle Murray, Annika Tostengard, Angelos Giannopoulos, Martin Mihajlov, Steven Dow, Jaime Teevan, and Jeffrey P. Bigham.
arXiv (2015).
Makes it possible for people to write complex documents from a tiny wearable device by allowing authors to access crowd workers who are on desktop devices. The first draft of the paper was actually written from a watch using the described process.
WearWrite: Crowd-assisted writing from smartwatches.
Michael Nebeling, Alexandra To, Anhong Guo, Adrian A. de Freitas, Jaime Teevan, Steven Dow, and Jeffrey P. Bigham.
CHI 2016.
A study of the WearWrite system that shows that authors use the system to capture new ideas from their watch as the ideas come to mind, and to manage the crowd's writing during spare moments while going about their daily routine.
Supporting collaborative writing with microtasks.
Jaime Teevan, Shamsi T. Iqbal, and Curtis von Veh.
CHI 2016.
Presents an approach for decomposing the task of writing into microtasks. Suggests ways that recent advances in microtasking and crowd work can be used to support collaborative writing across preexisting groups.
Selfsourcing personal tasks.
Jaime Teevan, Daniel J. Liebling, and Walter S. Lasecki.
CHI 2014.
Introduces selfsourcing as a way to help people perform large personal information tasks by breaking them down into manageable microtasks that the task owners complete themselves.
Twitch crowdsourcing: Crowd contributions in short bursts of time.
Rajan Vaish, Keith Wyngarden, Jingshu Chen, Brandon Cheung, and Michael S. Bernstein.
CHI 2014.
Supports the mobile completion of quick microtasks that take just a second or two, asking users to make a micro-contribution each time they unlock their phones.
Measuring the crowd within: Probabilistic representations within individuals.
Edward Vul and Harold Pashler.
Psychological Science (2008).
Tries to replicate the "wisdom of the crowd" effect within a single individual. Finds it is possible to get better answers by asking the same person the same question repeatedly, but that asking multiple people is better still.
Human computation tasks with global constraints.
Haoqi Zhang, Edith Law, Rob Miller, Krzysztof Gajos, David Parkes, and Eric Horvitz.
CHI 2012.
Presents an approach for supporting complex tasks with global constraints via microtasks.

Submit papers for potential inclusion to microproductivity@cs.stanford.edu. Please include the paper's title, URL, authors, outlet, and a brief description of how the work relates to microproductivity.