What Our Eyes Miss — and Why It Matters

Merging EEG, behavioral studies, and billions of virtual baggage scans, psychologist Patrick Cox maps the hidden cognitive mechanisms that guide split-second human decisions.

When a radiologist scans an X-ray for tumors or a TSA officer examines baggage for prohibited items, they're engaged in far more than simple observation. They're executing complex cognitive processes that transform sensory information into meaningful decisions. A deeper mechanistic understanding of these cognitive processes could dramatically improve outcomes in high-stakes scenarios.

Patrick Cox, assistant professor of psychology, is working to decode these mechanisms. His research program investigates how the brain processes visual information, directs attention, and conducts visual search — the everyday act of looking for specific objects while filtering out irrelevant stimuli.

"How we successfully perform visual search is a basic science question about something that we do all the time in our daily lives," Cox says. "But it's also really important in lots of really high stakes applied settings." By understanding the fundamental principles of visual perception and attention, his work aims to inform critical real-world applications in medical imaging, security screening, and defense intelligence.

Cox's laboratory employs a comprehensive methodological toolkit. The core work involves traditional behavioral experiments where participants complete visual search and object identification tasks on computer displays. By manipulating variables such as display characteristics, trial sequences, and feedback mechanisms, researchers can measure reaction times and accuracy to determine what factors make searches efficient.
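The kind of analysis described above can be illustrated with a standard measure from the visual search literature: mean reaction time typically grows roughly linearly with the number of items on the display, and the slope (milliseconds per item) indexes how efficient the search is. The sketch below is purely illustrative; the data are fabricated and do not come from Cox's lab.

```python
# Illustrative sketch: estimating search efficiency as the slope of
# mean reaction time (RT) vs. display set size. All RTs are made up.
import statistics

# set size -> correct-trial RTs in milliseconds (fabricated data)
rts_by_set_size = {
    4:  [520, 545, 510, 560],
    8:  [610, 640, 600, 655],
    16: [790, 820, 805, 835],
}

def search_slope(data):
    """Least-squares slope of mean RT against set size (ms/item)."""
    xs = sorted(data)
    ys = [statistics.mean(data[x]) for x in xs]
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

slope = search_slope(rts_by_set_size)
print(f"search slope: {slope:.1f} ms/item")
```

Shallow slopes suggest the target "pops out" with little attentional cost; steep slopes suggest effortful item-by-item inspection.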

But Cox's approach extends beyond behavioral measurements. The lab also uses electroencephalography (EEG), which allows researchers to observe neural processing in real time. "By putting an EEG cap on the scalp, we can essentially see large, coordinated activation of parts of the brain read out through the scalp," Cox says. "We can get kind of a biological movie of the time course of processing."

Patrick Cox watches students use an EEG cap to study the brain's reactions

Initial visual activation occurs in the occipital cortex within a couple hundred milliseconds of stimulus presentation. This neuroimaging reveals how visual processing then progresses sequentially through attention-related signals, decision-making processes, and finally motor responses. By tracking how experimental manipulations affect these neural signatures, Cox's team can map the cognitive architecture underlying successful visual search.
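The "biological movie" idea rests on a standard ERP technique: because random noise averages toward zero across many stimulus-locked epochs, averaging reveals the time course of the evoked response. The sketch below uses synthetic signals and illustrative parameters (sample rate, epoch length, peak time); real pipelines use dedicated tools such as MNE-Python.

```python
# Illustrative sketch: averaging stimulus-locked EEG epochs so noise
# cancels and the evoked time course emerges. All signals are synthetic.
import math
import random

SRATE = 250       # samples per second (assumed for illustration)
EPOCH_MS = 400    # analysis window after stimulus onset
N_TRIALS = 200

def synthetic_trial(rng):
    """One noisy epoch with an evoked deflection peaking near 100 ms."""
    n = SRATE * EPOCH_MS // 1000
    epoch = []
    for i in range(n):
        t_ms = 1000 * i / SRATE
        evoked = 5.0 * math.exp(-((t_ms - 100) ** 2) / (2 * 20 ** 2))
        epoch.append(evoked + rng.gauss(0, 3))  # signal + noise
    return epoch

rng = random.Random(0)
trials = [synthetic_trial(rng) for _ in range(N_TRIALS)]

# Average across trials to obtain the event-related potential (ERP)
erp = [sum(col) / N_TRIALS for col in zip(*trials)]
peak_ms = 1000 * max(range(len(erp)), key=lambda i: erp[i]) / SRATE
print(f"evoked peak at ~{peak_ms:.0f} ms after stimulus onset")
```

Tracking how experimental manipulations shift the latency or amplitude of such components is what lets researchers order perception, attention, decision, and motor stages in time.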

Perhaps the most innovative aspect of Cox's research involves an unlikely source: a mobile game where players simulate airport security screening. Through a collaboration with the game's developer, Kedlin Co., Cox and colleagues obtained a dataset containing de-identified visual search performance data from more than 15 million independent downloads and nearly 4 billion virtual bags.

The game presents animated baggage scans that mirror actual TSA screening procedures, with items color-coded by material composition — orange for organic materials, blue for metallic objects. "It's a bit cartoony, but it still looks like a real-world search," Cox explains. This massive dataset allows researchers to detect behavioral patterns that reflect underlying neural mechanisms at a scale that is impossible in traditional laboratory settings.

Cox has transformed this data into a research database, enabling his team to mine for traces of cognitive processes predicted by theories from neuroscience and psychology. Combined with experiments run on online participant-recruitment platforms such as Amazon Mechanical Turk and Prolific, this approach lets the lab sample demographics well beyond typical undergraduate participant pools, spanning broader age ranges, socioeconomic backgrounds, and geographic locations.

One of Cox's key insights challenges conventional emphases in visual search research. While the field has traditionally focused on physical stimulus properties — the number of distractors, visual clutter, or similarity between targets and distractors — Cox argues that prior experience and expectation exert equally powerful, if not greater, influences on search success.

"Not just what you're looking at now, but what did you see the trial before? Three trials before?" Cox says. "Yesterday? For someone like a TSA agent or a radiologist, these have very large effects." Individual differences in ability further complicate the picture, suggesting that successful visual search depends as much on the observer's history and traits as on the immediate stimulus.
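One simple way to quantify such trial-history effects is to ask whether observers respond faster when the current target repeats the previous trial's target category (priming) than when it switches. The sketch below is a hypothetical illustration with fabricated trial records; the field names are not from any real dataset.

```python
# Illustrative sketch of a trial-history (priming) analysis: compare
# RTs on target-repeat vs. target-switch trials. Data are fabricated.
from statistics import mean

trials = [
    {"target": "knife",  "rt": 610},
    {"target": "knife",  "rt": 540},   # repeat: typically faster
    {"target": "bottle", "rt": 650},   # switch: typically slower
    {"target": "bottle", "rt": 560},
    {"target": "knife",  "rt": 655},
    {"target": "knife",  "rt": 545},
]

repeat_rts, switch_rts = [], []
for prev, cur in zip(trials, trials[1:]):
    bucket = repeat_rts if cur["target"] == prev["target"] else switch_rts
    bucket.append(cur["rt"])

priming_ms = mean(switch_rts) - mean(repeat_rts)
print(f"repetition benefit: {priming_ms:.0f} ms")
```

At the scale of billions of searched bags, even small repetition benefits like this become detectable with high precision.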

This emphasis on learning and experience has relevance for applied contexts where expertise develops over time. Understanding how training shapes visual perception could inform more effective protocols for security screeners, medical professionals, and military analysts conducting surveillance or intelligence work.

Cox's interdisciplinary trajectory reflects the convergence of multiple scientific traditions. Initially a physics major at Georgetown University, he became captivated by the research process and shifted toward modeling cognitive phenomena. "I realized that quantitatively modeling the external world was interesting, but there are actually ways in which we can try to quantitatively model cognitive processes in the mind," he recalls.

After completing a minor in cognitive science and a PhD in computational vision and neuroscience at Georgetown, Cox pursued postdoctoral training at George Washington University with a more applied focus. This training emphasized attention research and provided access to the mobile game dataset that has become central to his current work.

Now in his third year at Lehigh, Cox is pursuing collaborations that bridge his basic science expertise with real-world applications. He has established connections with the National Geospatial-Intelligence Agency and the Army Research Lab, exploring how visual search principles apply to geospatial intelligence and field reconnaissance. He's also investigating potential collaborations with radiologists and local law enforcement to study medical imaging and crime scene investigation.

Cox's integrative approach — combining traditional laboratory experiments, neuroimaging, computational modeling, and big data analytics — exemplifies contemporary cognitive neuroscience at its most ambitious. By maintaining connections between basic science and applied problems, his work promises both theoretical advances in understanding visual cognition and practical improvements in domains where visual search performance can literally save lives.

As AI systems grow more sophisticated and human-machine collaboration becomes more common, understanding the cognitive foundations of visual perception becomes increasingly important. Cox's laboratory at Lehigh works at this intersection, decoding the neural mechanisms that help us find what we're looking for in an increasingly complex visual world.