
Over the past 50 years, recruiting research participants has become more expensive. Changes in technology have contributed to this problem. According to the Centers for Disease Control and Prevention, in 2022, only 27 percent of adults lived in a household with a landline telephone, compared to more than 90 percent in 2004 (Blumberg and Luke, 2023). However, the commodification of personal data has contributed even more significantly to the problem. Data brokers constantly collect information about individuals to sell to companies. How often have you experienced robocalls, a spam text or email, or even being stopped on the street by someone to answer a few questions? While some of these inquiries are for legitimate research purposes, others are designed to sell goods and services, and some are outright scams. Over time, people have become harder to reach and more suspicious of invitations to participate in research projects.

One innovative solution for recruiting large and diverse nonprobability samples is Amazon's Mechanical Turk (MTurk). Launched in 2005, MTurk is a crowdsourcing marketplace where researchers can hire individuals (Turkers) to complete human intelligence tasks (HITs), such as surveys. Using MTurk has become so popular that the Journal of Management commissioned a review of the platform (Aguinis, Villamor, and Ramani, 2021). The main benefits of MTurk are the low cost and ease of obtaining large and diverse samples of participants, as well as the ability to use a variety of research designs, including experimental and longitudinal designs.

The problems with using MTurk are associated with internal and external validity threats. For example, in their study, Herman Aguinis, Isabel Villamor, and Ravi Ramani (2021) identified inattention, high attrition rates, inconsistent English language fluency, and non-naivete (i.e., exposure to the topic more than once) as challenges. These are potential threats to internal validity. Remember from Chapter 4 that internal validity threats challenge the causal statement about the observed covariations between variables. The authors also identified workers misrepresenting their self-reported sociodemographic characteristics and self-selection bias as challenges. These problems are potential threats to external validity. That is, are the cause-and-effect findings of the study generalizable to other groups? Another potential challenge is the use of bots, or computer programs that auto-complete HITs, and server farms to bypass location restrictions (Chmielewski and Kucher, 2020).

MTurk has great potential but also some challenges. The platform allows researchers to reach individuals with specific characteristics, such as age, race/ethnicity, and educational attainment. Moreover, it enables researchers to recruit participants who have had contact with the criminal justice system, work in specific occupations, and live in particular countries. Research suggests that MTurk samples can be more representative of the general population than samples of college students (Goodman, Cryder, and Cheema, 2013). However, research also indicates that certain population groups are overrepresented, including females, Whites, college-educated individuals, liberals, and young people (Levay et al., 2016). Accordingly, scholars note that certain precautions should be taken when using systems like MTurk. For example, Aguinis and colleagues (2021) recommend that researchers use multiple validity checks, such as CAPTCHAs, honeypots (computer code invisible to people), and attention checks. Furthermore, researchers should cross-check workers' profiles and monitor their average response times.
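To make these screening ideas concrete, the sketch below shows how a researcher might filter crowdsourced survey responses using three of the checks mentioned above: a honeypot field, an attention-check item, and a minimum response time. This is a minimal illustration, not an MTurk feature; the field names (honeypot, attention_check, duration_sec) and the 60-second threshold are hypothetical assumptions chosen for the example.

```python
# Hypothetical screening of crowdsourced survey responses.
# Field names and thresholds are illustrative assumptions,
# not part of any MTurk API or specific study.

def passes_quality_checks(response, min_seconds=60):
    """Keep a response only if it clears three common validity checks."""
    # Honeypot: a field hidden from human respondents; bots often fill it in.
    if response.get("honeypot"):
        return False
    # Attention check: an item with one obviously correct answer
    # (here, respondents were hypothetically told to select "agree").
    if response.get("attention_check") != "agree":
        return False
    # Response time: implausibly fast completion suggests a bot or inattention.
    if response.get("duration_sec", 0) < min_seconds:
        return False
    return True

responses = [
    {"honeypot": "", "attention_check": "agree", "duration_sec": 312},
    {"honeypot": "http://spam", "attention_check": "agree", "duration_sec": 300},
    {"honeypot": "", "attention_check": "disagree", "duration_sec": 290},
    {"honeypot": "", "attention_check": "agree", "duration_sec": 12},
]

usable = [r for r in responses if passes_quality_checks(r)]
print(len(usable))  # only the first response survives all three checks
```

In practice, researchers would combine automated filters like these with the manual precautions described above, such as cross-checking worker profiles.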

Consider how the spread of Internet access has reduced potential problems with biases associated with online samples. Also, think about how such tools make it possible to produce large samples cheaply. Criminal justice researchers are only beginning to study how these crowdsourced opt-in samples should be used. You can review research by Thompson and Pickett (2020) for more information. We return to MTurk in Chapter 9, showing how MTurk samples can be coupled with online survey platforms. In the meantime, you can read more about MTurk through the link below.

Critical Thinking

  1. What justice-focused topics can be studied using crowdsourced opt-in platforms like Amazon's MTurk? What topics might not be appropriate to explore using these platforms for recruiting participants?
  2. Based on the challenges described above, can you provide an example of how these challenges might affect your ability to produce a sample representative of the population if you wanted to study topics such as environmental justice, immigration, or support for police reform?