The kinds of experiments that work on Mturk
In my never-ending attempt to push colleagues into the Mturk world, I often tell them that I have never had a study that worked on another population but failed on the Mturk worker population. Inevitably, though, my advertising proves false and someone tells me they got strange results. After looking at the types of experiments involved, I noticed something that should have been obvious from the beginning: workers seem to be really good at short iterative tasks, or at longer tasks with objective measurements. Thus my decision-making work, which asks multiple-choice questions or has people click some buttons, watch some numbers appear, and then judge them, works really well. It is akin to tasks like photo tagging: something short, done over and over again, is easy to pay attention to. Similarly, asking someone to write out how they would solve a problem gets good results, because anyone can look at a paragraph on how to choose between two insurance options and tell whether the person put effort into it.
However, complex imagined scenarios tend not to work very well. A worker is not used to keeping two pages' worth of instructions memorized in order to imagine being a doctor deciding whether to prescribe drug X, which has a higher cure rate but also a higher rate of premature death, or drug Y, which guarantees immediate safety but may not cure the disease. In general, I think you should assume that workers are under high cognitive load at all times. A worker may be taking your survey at work, and while answering a question may also be trying to gauge whether the boss is walking by. If cognitive load factors into your variables, or if you think a high amount of attention is needed, you will be better off using a different recruitment method or paying significantly more. Lastly, if you are using Mturk, I recommend using mechanisms that force attention to the instructions, such as not allowing the Next button to appear for a minute or two in Qualtrics.
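One way to sketch that delayed Next button, assuming a Qualtrics survey where you can open the question's JavaScript editor: the `hideNextButton`/`showNextButton` calls are part of Qualtrics's question JavaScript API, and the 90-second delay here is an arbitrary choice you would tune to your instructions.

```javascript
// Paste into the question's JavaScript editor in Qualtrics.
// Hides the Next button while the instruction page is on screen,
// then reveals it after a fixed delay so workers must sit with the text.
Qualtrics.SurveyEngine.addOnload(function () {
  var question = this;            // keep a reference for the callback
  question.hideNextButton();      // worker cannot skip ahead immediately
  setTimeout(function () {
    question.showNextButton();    // Next appears after 90 seconds
  }, 90 * 1000);
});
```

Hiding the button does not guarantee reading, of course; pairing the delay with a comprehension check on the next page is a reasonable complement.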