Schedule

WORKSHOP AGENDA

09:00-09:10        Introduction

09:10-09:40        A Brief Survey of Machine Learning in Human Computation and Crowdsourcing
Rajarshi Das, IBM Research, USA

09:40-10:20        Crowd IQ: Measuring the Intelligence of Crowdsourcing Platforms
Michal Kosinski, Yoram Bachrach, Gjergji Kasneci, Jurgen Van Gael, and Thore Graepel
Microsoft Research, UK

10:30-11:00        Coffee break

11:00-11:40        Get Another Worker? Active Crowdlearning with Sequential Arrivals
James Zou and David Parkes
Harvard University, USA

11:40-12:05        Aggregating Human-Expert Votes using Stacked Generalization
Evgueni Smirnov, Hua Zhang, and Nikolay Nikolaev
Maastricht University, Netherlands & University of London, UK

12:05-12:30        Improving Repeated Labeling for Crowdsourced Data Annotation
Sergiu Goschin, Chris Mesterharm, and Haym Hirsh
Rutgers University, USA

12:30-14:00        Lunch

14:00-14:40       How To Grade a Test Without Knowing the Answers — A Bayesian Graphical Model for Adaptive Crowdsourcing and Aptitude Testing
Yoram Bachrach, Thore Graepel, Tom Minka, and John Guiver
Microsoft Research, UK

14:40-15:05        Factor-based Regression Models for Forecasting
Chih-Chieh Cheng, Robert Sasseen, Tulay Muezzinoglu, and Richard Rohwer
SRI International, USA

15:05-15:30        Dynamic Estimation of Rater Reliability in Regression Tasks using Multi-Armed Bandit Techniques
Alexey Tarasov, Sarah Jane Delany, and Brian Mac Namee
Dublin Institute of Technology, Ireland

15:30-16:00        Coffee break

16:00-16:25        Crowdsourcing Microtasks Using Multiple Crowds
Ittai Abraham, Omar Alonso, Vasilis Kandylas, and Aleksandrs Slivkins
Microsoft Research, USA

16:25-16:50        Suggesting Constraints to Interactive Topic Modeling
Yuening Hu and Jordan Boyd-Graber
University of Maryland, USA

16:50-17:30        General Discussion
All Participants

Posted June 18, 2012 by crowdml12
