The processing of compound radial frequency patterns

Gunnar Schmidtmann, Frederick Kingdom, Gunter Loffler

Radial frequency (RF) patterns can be combined to construct complex shapes. Previous studies have suggested that such complex shapes may be encoded by multiple, narrowly-tuned RF shape channels. To test this hypothesis, thresholds were measured for detection and discrimination of various combinations of two RF components. Results show evidence of summation: sensitivity for the compounds was better than that for the components, with little effect of the components’ relative phase. If both RF components are processed separately at the point of detection, they would combine by probability summation (PS), resulting in only a small increase in sensitivity for the compound compared to the components. Summation exceeding the prediction of PS suggests a form of additive summation (AS) by a common mechanism. Data were compared to predictions of winner-take-all, where only the strongest component contributes to detection, a single channel AS model, and multi-channel PS and AS models. The multi-channel PS and AS models were evaluated under both Fixed and Matched Attention Window scenarios: the former assumes a single internal noise source for both components and compounds, the latter different internal noise sources for components and compounds. The winner-take-all and single channel models could be rejected. Of the remaining models, the best-performing was an AS model with a Fixed Attention Window, consistent with detection being mediated by channels that are efficiently combined and limited by a single source of noise for both components and compounds.
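For readers unfamiliar with the stimuli: an RF contour is a circle whose radius is sinusoidally modulated as a function of polar angle, and a compound is simply the sum of two such modulations. The sketch below illustrates that geometry only; the amplitudes, frequencies and phases are arbitrary illustrative choices, not the study's values, and the PS/AS models themselves are not implemented here.

```python
import math

def rf_radius(theta, r0=1.0, components=()):
    """Radius of a radial frequency (RF) contour at polar angle theta.

    Each component is (A, omega, phi): modulation amplitude, radial
    frequency (cycles per 2*pi) and phase.  A compound RF pattern is
    simply the sum of two (or more) sinusoidal modulations of the
    base radius r0.
    """
    modulation = sum(A * math.sin(omega * theta + phi)
                     for A, omega, phi in components)
    return r0 * (1.0 + modulation)

# single RF3 component: radius peaks where sin(3*theta) = 1
single = rf_radius(math.pi / 6, components=[(0.1, 3, 0.0)])       # 1.1

# compound RF3 + RF5 in sine phase (values purely illustrative)
compound = rf_radius(math.pi / 6,
                     components=[(0.1, 3, 0.0), (0.05, 5, 0.0)])  # 1.125
```

Sampling `theta` over 0 to 2π and converting to Cartesian coordinates traces out the closed contour.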

Schmidtmann, G., Kingdom, F. A. A., & Loffler, G. (2019). The processing of compound radial frequency patterns. Vision Research, 161, 63–74. [PDF] [PubMed]

Posted
Author: Gunnar Schmidtmann

Detection of distortions in images of natural scenes in mild traumatic brain injury patients.

Ben J. Jennings, Gunnar Schmidtmann, Fabien Wehbé, Frederick Kingdom, Reza Farivar

Mild traumatic brain injuries (mTBI) frequently lead to the impairment of visual functions, including blurred and/or distorted vision, due to the disruption of visual cortical mechanisms. Previous mTBI studies have focused on specific aspects of visual processing, e.g., stereopsis, using artificial, low-level stimuli (e.g., Gaussian patches and gratings). In the current study we investigated high-level visual processing by employing images of real-world natural scenes as our stimuli. An mTBI group and a control group of healthy observers were tasked with detecting sinusoidal distortions added to the natural scene stimuli as a function of the distorting sinusoid's spatial frequency. The mTBI group was as sensitive to high-frequency distortions as the control group. However, sensitivity decreased more rapidly with decreasing distortion frequency in the mTBI group relative to the controls. These data reflect a deficit in the mTBI group's ability to spatially integrate over larger regions of the scene.
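As a rough illustration of the stimulus manipulation (the paper itself specifies the exact procedure), a sinusoidal distortion can be applied to an image by displacing each row of pixels by a sinusoid of a given spatial frequency: lower distortion frequencies displace larger regions of the scene coherently, which is why detecting them plausibly requires integration over larger areas. The function and parameter names below are illustrative assumptions, not the study's implementation.

```python
import math

def distorted_x(x, y, amplitude, freq_cpi, height):
    """Horizontally displaced x-coordinate for a sinusoidal image
    distortion: row y is shifted by a sinusoid with spatial frequency
    freq_cpi (cycles per image height) and the given amplitude
    (in pixels)."""
    return x + amplitude * math.sin(2.0 * math.pi * freq_cpi * y / height)

# a quarter of the way into one cycle, the shift equals the amplitude
shift = distorted_x(0, 64, amplitude=5.0, freq_cpi=1, height=256)  # 5.0
```

Resampling the image at the displaced coordinates (e.g., with bilinear interpolation) would produce the distorted stimulus.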

Jennings, B. J., Schmidtmann, G., Wehbé, F., Kingdom, F. A. A., & Farivar, R. (2019). Detection of distortions in images of natural scenes in mild traumatic brain injury patients. Vision Research, 161, 12–17. [PDF] [PubMed]


I have been awarded the VISTA Distinguished Visiting Scholar Award and will visit York University (Toronto) for a research project from May 24th to June 14th.

Vision: Science to Applications (VISTA) is a collaborative research programme hosted by York University and funded by the Canada First Research Excellence Fund (CFREF, 2016–2023). My host, Dr Ingo Fruend (https://www.yorku.ca/ifruend/), and I will work on aspects of shape perception and the development of an optimisation algorithm for perimetry. I will also attend the Centre for Vision Research International Conference on Predictive Vision (http://www.cvr.yorku.ca/conference2019).

More information on the VISTA research programme can be found below and here: http://vista.info.yorku.ca.


Abstract

Faces provide not only cues to an individual's identity, age, gender and ethnicity, but also insight into their mental states. The ability to identify the mental states of others is known as Theory of Mind. Here we present results from a study aimed at extending our understanding of the temporal dynamics of the recognition of expressions beyond the basic emotions, at short presentation times ranging from 12.5 to 100 ms. We measured the effect of variations in presentation time on identification accuracy for 36 different facial expressions of mental states based on the Reading the Mind in the Eyes test (Baron-Cohen et al., 2001), and compared these results to those for corresponding stimuli from the McGill Face Database, a new set of images depicting mental states portrayed by professional actors. Our results show that subjects are able to identify facial expressions of complex mental states at very brief presentation times. The kind of cognition involved in the correct identification of facial expressions of complex mental states at very short presentation times suggests fast, automatic Type 1 cognition.

Schmidtmann, G., Jordan, M., Loong, J. T., Logan, A. J., Carbon, C. C., & Gold, I. Temporal processing of facial expressions of mental states. bioRxiv, 602375. doi: https://doi.org/10.1101/602375 [PDF]


Schmidtmann, G., Jennings, B. J., Sandra, D. A., Pollock, J., & Gold, I. (2019). The McGill Face Database: validation and insights into the recognition of facial expressions of complex mental states. bioRxiv, 586453. https://doi.org/10.1101/586453 [PDF]

The McGill Face Database: validation and insights into the recognition of facial expressions of complex mental states

Current databases of facial expressions of mental states typically represent only a small subset of expressions, usually covering the basic emotions (fear, disgust, surprise, happiness, sadness, and anger). To overcome these limitations, we introduce a new database of pictures of facial expressions reflecting the richness of mental states. Ninety-three expressions of mental states were portrayed by two professional actors, and high-quality pictures were taken under controlled conditions in front and side view. The database was validated in two experiments (N=65). First, a four-alternative forced-choice paradigm was employed to test the ability of participants to correctly select a term associated with each expression. The second experiment did not rely on any semantic information: the task was to locate each face within a two-dimensional space of valence and arousal (mental state space) employing a "point-and-click" paradigm. Results from both experiments demonstrate that subjects can reliably recognize a great diversity of emotional states from facial expressions. Interestingly, while subjects' performance was better for front-view images, the advantage over the side view was not dramatic. To our knowledge, this is the first demonstration of the high degree of accuracy human viewers exhibit when identifying complex mental states from only partially visible facial features. The McGill Face Database provides a wide range of facial expressions that can be linked to mental state terms and can be accurately characterized in terms of arousal and valence.


Ingo Fruend (York University, Toronto) and I demonstrate that only a small fraction of biologically relevant shapes can be represented by radial frequency (RF) patterns, and that this small fraction is perceptually distinct from the general class of all possible planar shapes. In this paper we derive a general method to compute the distance of a given shape's outline from the set of RF patterns, allowing us to scan large numbers of object outlines automatically. This analysis shows that only 1 to 6% of naturally smooth outlines can be exactly represented by RF patterns. In addition, we present results from visual search experiments, which revealed that searching for an RF pattern among non-RF patterns (and vice versa) is efficient, whereas searching for an RF pattern among other RF patterns is inefficient.

Our results suggest that RF patterns represent only a small and restricted subset of possible planar shapes, and that results obtained with this special class of stimuli cannot simply be expected to generalise to arbitrary planar shapes, or to shape representation in general.
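The idea of measuring how far an outline is from the RF-pattern family can be illustrated with a toy Fourier-based measure: sample the outline's radius as a function of polar angle, keep the mean plus the strongest harmonic(s), and take the residual energy as the distance. This is only a sketch of the general idea under simplifying assumptions; the paper derives its own, more general, measure.

```python
import cmath
import math

def rf_distance(radii, keep=1):
    """Toy distance of a sampled radial profile from the nearest contour
    whose radius keeps only the mean plus `keep` sinusoidal harmonic(s).

    `radii` samples r(theta) at equally spaced polar angles.
    """
    n = len(radii)
    # discrete Fourier coefficients of the radius function
    coeffs = [sum(r * cmath.exp(-2j * math.pi * k * i / n)
                  for i, r in enumerate(radii)) / n
              for k in range(n)]
    # amplitudes of the positive-frequency harmonics (1 .. n//2)
    amps = sorted((abs(coeffs[k]) for k in range(1, n // 2 + 1)),
                  reverse=True)
    # energy left over once the `keep` strongest harmonics are retained
    return math.sqrt(sum(a * a for a in amps[keep:]))
```

A contour containing a single harmonic yields a distance of (numerically) zero under this measure, whereas any additional harmonic contributes residual energy.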

Schmidtmann, G., & Fruend, I. (2019). Radial frequency patterns describe a small and perceptually distinct subset of all possible planar shapes. Vision Research, 154, 122–130. [PDF]


Ania Zolubak, a PhD candidate in Dr Garcia-Suarez's lab, presented a poster at the European Conference on Visual Perception in Trieste.

Scale-invariance for radial frequency patterns in peripheral vision.

Zolubak, A. B., Schmidtmann, G., Garcia-Suarez, L. 

Radial frequency (RF) patterns are sinusoidally modulated contours. Previous studies have shown that RF shape discrimination (RF vs circle) is scale-invariant, i.e. performance is independent of radius when patterns are presented centrally.
This study investigated scale-invariance in peripheral vision (0–20° nasal visual field, radius 1°, RF=6, SF=1 or 5 cpd) by scaling radii according to the Cortical Magnification Factor (CMF) and fractions of it (MF1=½, MF2=¼, MF3=⅛).
Results show that performance remained constant with eccentricity for CMF, MF1 and MF2, and, for two of the four observers (N=4), for MF3. However, average performance for MF2 was twice, and for MF3 four times, worse than for CMF and MF1.
The scale-invariance found for the larger stimuli indicates the involvement of global shape processing in the periphery. The higher, yet constant, thresholds for the smaller patterns suggest that the resolvability of the contours limits peripheral performance and may elicit processing by low-level mechanisms.
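One common way to scale stimulus size with eccentricity is a linear magnification rule in which the foveal size grows in proportion to (1 + E/E2). The sketch below is an illustration of that rule only: the linear form, the E2 value, and the reading of the fractional conditions as fractions of the full enlargement are all assumptions, not the study's exact parameters.

```python
def scaled_radius(r0, ecc_deg, fraction=1.0, e2=2.5):
    """Stimulus radius scaled for eccentricity ecc_deg (degrees).

    Full cortical-magnification scaling enlarges the foveal radius r0
    in proportion to (1 + E/E2); fractional conditions (e.g. 1/2, 1/4)
    apply only that fraction of the enlargement.  E2 = 2.5 deg is an
    illustrative value.
    """
    return r0 * (1.0 + fraction * ecc_deg / e2)

# full scaling: a 1 deg foveal radius grows to 5 deg at 10 deg eccentricity
full = scaled_radius(1.0, 10.0)                 # 5.0
half = scaled_radius(1.0, 10.0, fraction=0.5)   # 3.0
```

Smaller fractions leave the pattern relatively under-scaled in the periphery, which is where, in the data above, thresholds rose.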
