To be honest, my past self would be shocked to know that I've become a proponent of online testing. Our team uses behavioral psychophysics tasks (among other tools) to examine how listeners derive meaning from speech acoustics. In our physical lab, all testing takes place in a sound-attenuated booth. Listeners wear high-quality headphones. We hold the amplitude of stimuli constant across participants. Responses are made using specialized hardware to ensure accurate measurement of response times. We interact with each participant, allowing us to confirm that they understand the task instructions and actually exist in human form. Why would we even consider abandoning these standards? Well, for us, there are a few reasons:
Here I provide a tutorial for speech perception researchers who may be considering a transition to online data collection. The discussion focuses on using Prolific as the online participant pool, Gorilla as the builder/host of online experiments, and the headphone screen of Woods et al. (2017) to help verify an acceptable listening environment.
I'm definitely not an early adopter of these methodologies, and our expertise is still in the “emerging” stage. Part of the reason I'm late to the game is that I had a very hard time figuring out how to actually use MTurk. This tutorial is geared towards people, like me, who may in principle be open to online testing but who have had some difficulty figuring out exactly how to implement it. Suggestions from the community for this document are most welcome.
I've tried to be very transparent about our successes and challenges. My thoughts here are geared towards speech perception colleagues. There are great resources for online testing in general, like this piece by Dr. Jennifer Rodd. Please feel free to reach out to me ([email protected]) with questions or suggestions.
Prolific is an online participant pool. When people join Prolific, the first thing they do is complete a detailed "About You" section where they provide demographic information. Researchers can use this information to set criteria for who is eligible for a given study. The filters are easy to use and comprehensive, and you get real-time information about how many people are eligible with the applied filters. An example of the interface is shown below.
Right now, over 10,000 people meet the constraints of age between 18 and 35 years, born in the US and currently residing in the US, monolingual English speakers, and no history of language-related disorders.
Researchers can filter based on participation in their own past Prolific studies, either by excluding people who completed a specific past study or by including only people who completed a past study (e.g., using a custom "whitelist" for a longitudinal study). Researchers can also filter by approval rate and total number of completed Prolific studies, in addition to anything else provided in the "About You" section.
Prolific does two important things for you: (1) they get your study distributed to participants, and (2) they handle the money. Researchers top up their Prolific account with funds, which are then distributed as researchers approve submissions. Prolific makes money by charging a fee based on what you pay the participant. I believe this is 30% for academic researchers (that's what I am charged), but I can't find this information explicitly stated on their website. This tweet says that it is 33%; perhaps they have made a change recently?
For a study that takes an estimated 20 minutes to complete, the participant would be paid $3.33 from my Prolific account (consistent with our $10/hour payment rate, the same rate we use for in-lab behavioral studies) and Prolific would receive $1.00. Prolific requires that you pay participants a minimum of $6.50/hour. The money is placed in the participant's Prolific account, which they can withdraw via PayPal. Prolific is trying to distinguish itself from MTurk by treating participants ethically and ensuring high-quality data for researchers, and I've really been impressed with their efforts on these fronts. I've also been very impressed with how Prolific facilitates administrative tasks. For example, a click of the mouse will generate a detailed receipt of all payments to participants and Prolific for a given study, which can then be submitted to your university for funds reconciliation.
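To make that arithmetic concrete, here's a minimal sketch of the cost calculation, assuming the 30% fee rate I'm currently charged (the function and numbers are illustrative, not anything Prolific provides):

```python
# Rough cost estimate for a Prolific study, assuming a 30% service fee
# (the rate I am charged as an academic researcher; check your own account).

HOURLY_RATE = 10.00  # our lab's payment rate in $/hour
FEE_RATE = 0.30      # Prolific's fee as a fraction of participant payment (assumed)

def study_cost(minutes, n_participants):
    """Estimate per-participant payment, Prolific's fee, and total cost."""
    payment = round(HOURLY_RATE * minutes / 60, 2)  # 20 min -> $3.33
    fee = round(FEE_RATE * payment, 2)              # 30% of $3.33 -> $1.00
    total = n_participants * (payment + fee)
    return payment, fee, total

payment, fee, total = study_cost(minutes=20, n_participants=50)
print(f"${payment:.2f} per participant + ${fee:.2f} fee = ${total:.2f} total")
# $3.33 per participant + $1.00 fee = $216.50 total
```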
When you set up a new study on Prolific, there's a place for you to provide a link to your actual study (you'll get this link from Gorilla; more on this below). Prolific in turn provides a "redirect" link that should be embedded at the end of your study (you'll paste this link into the Finish node of your study in Gorilla; again, more on this below).
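As a rough sketch of how the two links fit together, here is what they tend to look like. Both URLs below are placeholders: Gorilla and Prolific each display your real links in their dashboards, and the query-parameter syntax shown is Prolific's convention for passing participant IDs.

```python
# Hypothetical examples of the two links connecting Prolific and Gorilla.
# Both URLs are placeholders; copy the real ones from each platform's dashboard.

# 1) The study link from Gorilla, pasted into Prolific's "study link" field.
#    Prolific can append its ID variables as query parameters so that each
#    participant's Prolific ID is recorded in your Gorilla data.
gorilla_study_link = (
    "https://research.sc/participant/login/dynamic/XXXXXXXX"
    "?PROLIFIC_PID={{%PROLIFIC_PID%}}"
)

# 2) The redirect link from Prolific, pasted into the Finish node in Gorilla.
#    The 'cc' parameter is the completion code Prolific uses to mark a
#    submission as complete when the participant lands on this URL.
prolific_redirect_link = "https://app.prolific.co/submissions/complete?cc=XXXXXXXX"
```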