When selling the benefits of Amazon Mechanical Turk (Mturk) to people, I usually just have to say “faster and cheaper” and the sales pitch is over. Last year, I got 12,000 people to do a task that paid a penny in about 90 minutes. Sometimes I would spend 3 hours getting 20 subjects to take a survey at UCLA. That 12,000-subject task was a special case, however: it took maybe 20 seconds to complete and involved looking at a picture and then typing something. On average, I tell people they will get their subjects in about 24 hours. That typically means ~200 people doing a 10-minute task for $0.50.
Lately, though, I have had some surprises. Consider the following three studies: one posted on a Tuesday, one on a Thursday, one on a Monday. All used the same posting template with the same number of participants needed and the same payment, but each pointed to a different external survey site. They also had slightly different titles. The Monday post took ~4 days. The Thursday post is still going and is projected to take 6 days. The Tuesday post took 4 hours. When the researchers ask me why it's taking so long, I have to act surprised and tell them to just sit it out. Then they worry that the added time means something is wrong and their data won't be very good. I wonder the same things.
While I don’t have answers to why those HITs acted differently, I have found a few tips to maximize response rates. (Note: sometimes below I will generalize thoughts to “they” when really what I’m saying is “me.” In addition to being a master requester, I am an avid Mturk worker.)
1. Pay at least $0.50. With some exceptions, I have found 50 cents to be a good price. It's more than most HITs out there, but surveys are widespread now and people know they may not make as much answering questions as they would tagging images or pasting Wikipedia URLs. Most workers set a minimum payment for their searches. When I run studies at $0.25, I expect them to take 2 weeks to get enough subjects; the difference is dramatic. If you are not pretesting or piloting, consider something even higher. For studies that will be published I try to pay $1.00 for a 15-minute study. Still under minimum wage, but around 4x the Mturk average. When people see a $1.00 post, they jump at it, and that eagerness to participate translates into a better research participant.
2. Obvious, but use as many keywords as you can think of. Workers tend toward repetition. A person will log on, see a photo-tagging HIT, and spend the next hour tagging photos and searching for more photo-tagging HITs. Similarly, some people don't want something so monotonous and they search for surveys. Make sure “survey” is somewhere in your keywords. Start brainstorming and use a lot. People can only find your HIT if it matches what they search for.
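If you post HITs through the API rather than the web interface, keywords go in the `Keywords` parameter of the CreateHIT operation (a comma-separated string). Here is a minimal sketch; the parameter names match MTurk's CreateHIT operation, but the helper function, the specific keyword list, and the dollar amounts are just my illustration:

```python
# Sketch: assembling CreateHIT parameters with a generous keyword list.
# The keys (Title, Keywords, Reward, ...) are real CreateHIT parameters;
# the values here are illustrative, not a recommendation.

def build_survey_hit_params(question_xml):
    """Return a kwargs dict you could pass to boto3's mturk client.create_hit()."""
    keywords = [
        "survey", "questionnaire", "research", "study",
        "opinion", "psychology", "easy", "quick", "10 minutes",
    ]
    return {
        "Title": "10-minute research survey ($0.50)",
        "Description": "Answer a short academic survey. About 10 minutes.",
        "Keywords": ", ".join(keywords),   # MTurk expects one comma-separated string
        "Reward": "0.50",                  # US dollars, passed as a string
        "MaxAssignments": 200,             # how many workers may complete it
        "AssignmentDurationInSeconds": 30 * 60,
        "LifetimeInSeconds": 7 * 24 * 3600,
        "Question": question_xml,
    }

params = build_survey_hit_params("<ExternalQuestion>...</ExternalQuestion>")
print(params["Keywords"])
```

With credentials configured, `boto3.client("mturk").create_hit(**params)` would post it; the point is simply that “survey” and its synonyms appear in `Keywords` so worker searches can match them.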
3. Difficult, but try to offer multiple HITs in each batch. Most researcher requesters think of their postings as a single entity. That is not how Mturk was designed. Mturk was designed to take a template and input data and generate many individual HITs, like taking a template and 1,000 images and generating a batch of HITs for tagging all 1,000 of those images without having to create 1,000 separate HITs individually. Mturk by default orders search results by the number of HITs available. That means if you only have 1 HIT available (i.e., each person can only do 1 survey), you will be lower down the list and get fewer workers. Offering multiple HITs is not easy, though. You have to design a scenario where someone can do two or more tasks but can stop at 1 if they want. One good option is if you have multiple projects going and want to run multiple studies at the same time. If you can't do this, you have to get really creative. One thing to avoid is something I have seen several times: a survey completes, I try to move on to the next, and it tells me I cannot do more than 1 HIT in the batch. The requester keeps track of workers and really only wants them to do one of the surveys, but posts multiple so they are higher up in the search. This is bad for reputation… which brings me to:
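One way to run multiple studies as one batch programmatically (a sketch of the general approach, not a description of any particular requester's setup): register one HIT type, then post each survey as its own HIT under that type, so they group together in search. With boto3 you would call `create_hit_type(...)` once and `create_hit_with_hit_type(...)` per survey; here we only assemble the per-survey parameters. The HIT type ID and survey URLs are invented placeholders:

```python
# Sketch: several surveys posted under one shared HIT type so workers see
# them as a single group with multiple HITs available. The ExternalQuestion
# XML schema namespace is MTurk's real one; the URLs and IDs are made up.

def plan_batch(hit_type_id, survey_urls):
    """One create_hit_with_hit_type-style parameter dict per survey."""
    return [
        {
            "HITTypeId": hit_type_id,      # shared, so the HITs group together
            "LifetimeInSeconds": 7 * 24 * 3600,
            "Question": (
                '<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/'
                'AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">'
                f"<ExternalURL>{url}</ExternalURL>"
                "<FrameHeight>600</FrameHeight>"
                "</ExternalQuestion>"
            ),
        }
        for url in survey_urls
    ]

batch = plan_batch(
    "3EXAMPLEHITTYPEID",
    ["https://example.edu/surveyA", "https://example.edu/surveyB"],
)
print(len(batch))
```

Because each survey is a real, separate HIT, a worker can do one, two, or all of them and stop whenever they like, which avoids the bait-and-switch problem described above.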
4. Keep a good reputation. All requesters know about the stats of workers, things like rejection rates. A lot of requesters don't know about sites like Turker Nation and Turkopticon. Turker Nation is a forum for workers where, among other things, they discuss the quality of requesters. Even more useful for gauging the quality of a requester is Turkopticon, a browser extension that lets workers hover over a requester's name and see that requester's ratings. Turkopticon keeps ratings for generosity, speed of payment, fairness, and communicativity. So answer your emails, pay people quickly (don't wait a week for it to auto-pay), give real reasons when you reject (or better yet, be lenient with your $0.50 and give a warning if you think they tried but still failed), and don't pay 10 cents for an hour-long task.
5. Consider the time. A few years ago I ran a survey and discovered that most workers do tasks during their work day as a way to waste time and get paid, instead of wasting time on Facebook and getting fired. This has probably changed some in the last 3 years, but I still find that posting early in the week and in the morning gives you an edge over posting on a Friday night.
There are dozens more tips like these. I will probably make this a continuing series, but don't hesitate to share your ideas. I still have no explanation for the recent disparity in completion times and would love to hear what you have to say.