Robust online attention/multitasking study: platform, screening, controls
#1
I'm a research assistant in a behavioral psychology lab, and I'm tasked with designing a new study on attention and multitasking that will be run online. My main concern is creating a robust experimental design that maintains internal validity in an uncontrolled environment where participants use their own computers. I need to decide on critical controls, like ensuring standardized task instructions and detecting potential cheating or inattention, without the benefit of a lab setting. For those who have successfully run rigorous online experiments, what were your most important methodological choices regarding platform, participant screening, and data quality checks, and how did you pilot test your procedure to identify unforeseen confounds before launching the full study?
#2
Platform choice matters a lot for online experiments. I’d go with jsPsych (or PsychoJS) for fine-grained control over timing and stimuli, or a turnkey option like Gorilla if you want a GUI task builder. These let you lock instructions, calibrate devices, and export clean data without writing a ton of code.
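For example, a minimal jsPsych 7 timeline with locked instruction pages and a timed keyboard trial looks roughly like this. Plugin names are from the jsPsych 7 npm packages, so check them against whatever version you install; this is a sketch, not a drop-in task.

```typescript
// Minimal jsPsych 7 sketch: locked instructions, then one timed keyboard-response trial.
import { initJsPsych } from "jspsych";
import instructions from "@jspsych/plugin-instructions";
import htmlKeyboardResponse from "@jspsych/plugin-html-keyboard-response";

const jsPsych = initJsPsych({
  // Export the data at the end of the session (CSV download in the browser).
  on_finish: () => jsPsych.data.get().localSave("csv", "pilot_data.csv"),
});

const timeline = [
  {
    type: instructions,
    pages: [
      "<p>Respond as quickly and accurately as you can.</p>",
      "<p>Press F for targets on the left, J for targets on the right.</p>",
    ],
    show_clickable_nav: true, // every participant pages through identical wording
  },
  {
    type: htmlKeyboardResponse,
    stimulus: "<p>+</p>",
    choices: ["f", "j"],
    trial_duration: 2000, // ms; response time is recorded for you
    data: { task: "practice" },
  },
];

jsPsych.run(timeline);
```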
#3
Screening and integrity: use two layers of screening. First, a pre-screen survey to filter for basic proficiency and device compatibility; then in-task attention checks (e.g., instructional manipulation checks, flags for irregular response patterns) and basic consistency traps (catch trials, impossible response sequences). Also collect basic environmental info (device type, browser, time zone) so you can model that noise later.
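For the catch-trial and irregular-pattern flags, here is the kind of post-hoc screening I'd run. The field names and thresholds are hypothetical placeholders; map them onto whatever your platform actually exports.

```typescript
// Hypothetical trial record; rename fields to match your platform's export.
interface Trial {
  participantId: string;
  trialType: "experimental" | "catch" | "imc"; // imc = instructional manipulation check
  response: string;
  correct: boolean;
  rt: number; // response time in ms
}

// Flag participants who fail too many checks or show irregular response patterns.
function screenParticipant(trials: Trial[], maxCheckFailures = 1, minMedianRt = 200): string[] {
  const flags: string[] = [];

  const checks = trials.filter((t) => t.trialType === "catch" || t.trialType === "imc");
  const failures = checks.filter((t) => !t.correct).length;
  if (failures > maxCheckFailures) flags.push(`failed ${failures} attention checks`);

  // Irregular pattern: long runs of identical responses on experimental trials.
  const responses = trials.filter((t) => t.trialType === "experimental").map((t) => t.response);
  let longestRun = 1;
  let run = 1;
  for (let i = 1; i < responses.length; i++) {
    run = responses[i] === responses[i - 1] ? run + 1 : 1;
    longestRun = Math.max(longestRun, run);
  }
  if (longestRun >= 10) flags.push(`run of ${longestRun} identical responses`);

  // Implausibly fast responding overall.
  const rts = trials.map((t) => t.rt).sort((a, b) => a - b);
  const medianRt = rts[Math.floor(rts.length / 2)];
  if (medianRt < minMedianRt) flags.push(`median RT ${medianRt} ms`);

  return flags; // empty array = passed screening
}
```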
#4
Pilot test plan: run 2–3 small pilots (n=10–20 each) across desktop and mobile to uncover timing drift, layout issues, and comprehension gaps. Use those pilots to estimate trial durations, check that data meet your quality thresholds, and refine instructions. Gather qualitative feedback on clarity and any confusing steps.
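Something like the sketch below is enough to turn pilot exports into a per-device timing-drift summary. The record fields are made up for illustration; substitute whatever your pilot data actually contain.

```typescript
// Summarize timing drift per device type from pilot data.
interface PilotTrial {
  device: "desktop" | "laptop" | "mobile";
  intendedDuration: number; // ms the stimulus was supposed to stay on screen
  measuredDuration: number; // ms actually observed (e.g., from frame timestamps)
}

function summarizeTimingDrift(trials: PilotTrial[]): void {
  const byDevice = new Map<string, number[]>();
  for (const t of trials) {
    if (!byDevice.has(t.device)) byDevice.set(t.device, []);
    byDevice.get(t.device)!.push(t.measuredDuration - t.intendedDuration);
  }
  for (const [device, drifts] of byDevice) {
    const mean = drifts.reduce((a, b) => a + b, 0) / drifts.length;
    const worst = Math.max(...drifts.map(Math.abs));
    console.log(
      `${device}: mean drift ${mean.toFixed(1)} ms, worst ${worst.toFixed(1)} ms (n=${drifts.length})`
    );
  }
}
```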
#5
Data-quality strategy: preregister the analysis plan and predefine exclusion criteria (e.g., failed attention checks, response times outside a plausible range, >20% missing data). Use high-resolution timers (performance.now) and log exact stimulus onset times. Flag suspect submissions (e.g., repeated identical responses, abnormally fast completion). Build in practice trials and forced breaks to keep participants engaged.
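To make the onset-logging point concrete, here is a rough standalone sketch that compares when you requested a stimulus with the frame it was drawn in. It is not tied to any platform (jsPsych and Gorilla record similar timestamps for you), and the frame timestamp is only an approximation of the true paint time.

```typescript
// Log intended vs. actual stimulus onset using performance.now() and requestAnimationFrame.
interface OnsetRecord {
  trialIndex: number;
  requestedAt: number; // ms, when we asked for the stimulus to appear
  paintedAt: number;   // ms, timestamp of the frame in which the change is rendered
}

const onsetLog: OnsetRecord[] = [];

function showStimulus(el: HTMLElement, html: string, trialIndex: number): void {
  const requestedAt = performance.now();
  el.innerHTML = html;
  // The callback runs just before the next paint; its timestamp marks that frame.
  requestAnimationFrame((frameTime) => {
    onsetLog.push({ trialIndex, requestedAt, paintedAt: frameTime });
  });
}

// After the session, large request-to-frame gaps suggest dropped frames or a busy browser tab.
function flagSlowOnsets(log: OnsetRecord[], thresholdMs = 34): OnsetRecord[] {
  return log.filter((r) => r.paintedAt - r.requestedAt > thresholdMs); // ~2 frames at 60 Hz
}
```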
#6
Experimental design notes: design tasks that minimize hardware-dependent variability (avoid ultra-short stimulus durations; allow some slack for display and input latency). Use standardized instructions, practice trials, and a manipulation check to confirm understanding. Randomize trial order, preregister hypotheses, and plan a simulation study to estimate power given the extra noise you can expect online.
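For the simulation study, here is a toy Monte Carlo version. The effect size (20 ms), the noise SDs, and the normal approximation to the test are all placeholder assumptions; swap in pilot-based estimates and a proper t test before preregistering anything.

```typescript
// Toy Monte Carlo power estimate for a within-subject RT effect with extra "online" noise.
function randNormal(mean: number, sd: number): number {
  // Box-Muller transform
  const u = 1 - Math.random();
  const v = Math.random();
  return mean + sd * Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

function estimatePower(nParticipants: number, effectMs: number, nSims = 2000): number {
  let significant = 0;
  for (let s = 0; s < nSims; s++) {
    const diffs: number[] = [];
    for (let p = 0; p < nParticipants; p++) {
      const subjectNoise = randNormal(0, 40); // between-subject variability in the effect
      const onlineNoise = randNormal(0, 25);  // extra measurement noise from home setups
      diffs.push(effectMs + subjectNoise + onlineNoise);
    }
    const mean = diffs.reduce((a, b) => a + b, 0) / nParticipants;
    const sd = Math.sqrt(
      diffs.reduce((a, b) => a + (b - mean) ** 2, 0) / (nParticipants - 1)
    );
    const se = sd / Math.sqrt(nParticipants);
    if (Math.abs(mean / se) > 1.96) significant++; // normal approximation; use a t test in practice
  }
  return significant / nSims;
}

console.log(`Estimated power with n=60: ${estimatePower(60, 20).toFixed(2)}`);
```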
#7
Sample 4-week rollout plan: Week 1: finalize the task, build robust attention checks, and create a simple pilot protocol. Week 2: conduct initial pilots across devices and adjust based on feedback. Week 3: collect official pilot data and lock in data-cleaning rules. Week 4: launch the full study with ongoing data checks and a plan for interim analyses. If you want, I can sketch a concrete task flow and a data-quality rubric tailored to your design.

