Barhoom, H.S., Joshi, M.R., & Schmidtmann, G. (2020) The effect of response biases on resolution thresholds of Sloan letters in central and paracentral vision, bioRxiv, https://doi.org/10.1101/2020.09.04.283119
Clinical Vision Science - A Concise Guide to Numbers, Laws, and Formulas
This book provides a concise and user-friendly guide to the most common and important numbers, laws and formulas in clinical vision science. Clinicians and trainees in ophthalmology, optometry, orthoptics, and ophthalmic dispensing who are seeking an easy-to-use, lab-coat pocket-sized resource will find this book to be an essential reference in clinical practice.
Clinical Vision Science: A Concise Guide to Numbers, Laws, and Formulas is clearly structured into basics, physical optics, visual optics and ophthalmic lenses, optical instruments, photometry, visual perception, clinical procedures, and anatomy & binocular vision. Each chapter contains a range of tables, formulas, large illustrations and flow charts to allow readers to quickly and accurately find key facts for each type of examination procedure.
Vision Scientists!
With ARVO and VSS cancelled and most of us working from home and under strict social distancing guidelines, we would like to invite vision scientists worldwide to join us for a weekly vision science virtual coffee break.
Come and join us on Zoom next Wednesday, 25th March.
Topic: Vision Science in times of social distancing
Time: Mar 25, 2020 03:00 pm London
08:00 am Pacific
11:00 am Eastern
03:00 pm UK
04:00 pm Central European
Join Zoom Meeting
https://unibas.zoom.us/j/661371046
Schmidtmann, G., Jennings, B. J., Sandra, D. A., Pollock, J., & Gold, I. (2020). The McGill Face Database: Validation and Insights Into the Recognition of Facial Expressions of Complex Mental States. Perception. https://doi.org/10.1177/0301006620901671
Schmidtmann, G., & Zawadyl, M., "Summation within and between shapes in central and peripheral vision." Applied Vision Association (AVA) Christmas Meeting, Cardiff University, 2019 [SLIDES]
The processing of compound radial frequency patterns
Gunnar Schmidtmann, Frederick Kingdom, Gunter Loffler
Radial frequency (RF) patterns can be combined to construct complex shapes. Previous studies have suggested that such complex shapes may be encoded by multiple, narrowly-tuned RF shape channels. To test this hypothesis, thresholds were measured for detection and discrimination of various combinations of two RF components. Results show evidence of summation: sensitivity for the compounds was better than that for the components, with little effect of the components' relative phase. If both RF components are processed separately at the point of detection, they would combine by probability summation (PS), resulting in only a small increase in sensitivity for the compound compared to the components. Summation exceeding the prediction of PS suggests a form of additive summation (AS) by a common mechanism. Data were compared to predictions of winner-take-all, where only the strongest component contributes to detection, a single-channel AS model, and multi-channel PS and AS models. The multi-channel PS and AS models were evaluated under both Fixed and Matched Attention Window scenarios, the former assuming a single internal noise source for both components and compounds, the latter assuming different internal noise sources for components and compounds. The winner-take-all and single-channel models could be rejected. Of the remaining models, the best performing one was an AS model with a Fixed Attention Window, consistent with detection being mediated by channels that are efficiently combined and limited by a single source of noise for both components and compounds.
Schmidtmann, G., Kingdom, F. A. A., & Loffler, G. (2019). The processing of compound radial frequency patterns. Vision Research, 161, 63–74. [PDF] [PubMed]
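The pooling regimes contrasted in the abstract (winner-take-all, probability summation, additive summation) are often sketched with Minkowski ("Quick") pooling of component sensitivities, where a single exponent q spans the range of models. The sketch below is an illustration of that general idea only; the exponent values and equal component sensitivities are assumptions for illustration, not the parameters fitted in the paper.

```python
import numpy as np

def quick_pool(s1, s2, q):
    """Minkowski (Quick) pooling of two component sensitivities."""
    return (s1 ** q + s2 ** q) ** (1.0 / q)

# Two equally sensitive components (arbitrary units)
s1 = s2 = 1.0

winner_take_all = max(s1, s2)                 # q -> infinity: only the strongest component counts
prob_summation = quick_pool(s1, s2, q=4.0)    # a PS-like exponent (q ~ 3-4 is typical in this literature)
additive_sum = quick_pool(s1, s2, q=1.0)      # q = 1: linear (additive) summation
```

The qualitative ordering is the point: additive summation predicts the largest compound advantage, probability summation a modest one, and winner-take-all none at all.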
Cover: Jennings et al. (2019), Vision Research, 161, 12-17
Detection of distortions in images of natural scenes in mild traumatic brain injury patients.
Ben J. Jennings, Gunnar Schmidtmann, Fabien Wehbé, Frederick Kingdom, Reza Farivar
Mild traumatic brain injuries (mTBI) frequently lead to the impairment of visual functions, including blurred and/or distorted vision, due to the disruption of visual cortical mechanisms. Previous mTBI studies have focused on specific aspects of visual processing, e.g., stereopsis, using artificial, low-level stimuli (e.g., Gaussian patches and gratings). In the current study we investigated high-level visual processing by employing images of real-world natural scenes as our stimuli. An mTBI group and a control group of healthy observers were tasked with detecting sinusoidal distortions added to the natural scene stimuli as a function of the distorting sinusoid's spatial frequency. The mTBI group were as sensitive to high-frequency distortions as the control group. However, sensitivity decreased more rapidly with decreasing distortion frequency in the mTBI group relative to the controls. These data reflect a deficit in the mTBI group in spatially integrating over larger regions of the scene.
Jennings, B. J., Schmidtmann, G., Wehbé, F., Kingdom, F. A. A., & Farivar, R. (2019), Detection of distortions in images of natural scenes in mild traumatic brain injury patients. Vision Research, 161, 12-17 [PDF] [PubMed]
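One common way to realise sinusoidal distortions of the kind described above is to displace image pixels by a sinusoid of a chosen spatial frequency and amplitude. The NumPy sketch below shifts each image row horizontally by such a sinusoid; the function name, the row-wise shifting scheme, and the parameter values are assumptions for illustration, not the stimulus-generation code used in the study.

```python
import numpy as np

def sinusoidal_distort(img, amplitude, freq):
    """Shift each row of `img` horizontally by a sinusoid with the given
    spatial frequency (cycles per image height) and amplitude (pixels)."""
    h = img.shape[0]
    rows = np.arange(h)
    shifts = np.round(amplitude * np.sin(2 * np.pi * freq * rows / h)).astype(int)
    out = np.empty_like(img)
    for r, s in zip(rows, shifts):
        out[r] = np.roll(img[r], s, axis=0)  # circular shift of one row
    return out

# Example: distort a synthetic 64 x 64 "scene"
scene = np.random.rand(64, 64)
distorted = sinusoidal_distort(scene, amplitude=3.0, freq=2.0)
```

Lowering `freq` produces the broad, low-frequency warps that, per the abstract, proved hardest for the mTBI group to detect.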
I have been awarded the VISTA Distinguished Visiting Scholar Award and will stay for a research project at York University (Toronto) from May 24th to June 14th.
Vision: Science to Applications (VISTA) is a collaborative research programme hosted by York University and funded by the Canada First Research Excellence Fund (CFREF, 2016-2023). My host Dr Ingo Fruend (https://www.yorku.ca/ifruend/) and I will be working on aspects of shape perception and the development of an optimisation algorithm for perimetry. I will also attend the Centre for Vision Research International Conference on Predictive Vision (http://www.cvr.yorku.ca/conference2019).
More information on the VISTA research programme can be found below and here: http://vista.info.yorku.ca.

Abstract
Faces provide not only cues to an individual's identity, age, gender and ethnicity, but also insight into their mental states. The ability to identify the mental states of others is known as Theory of Mind. Here we present results from a study aimed at extending our understanding of differences in the temporal dynamics of the recognition of expressions beyond the basic emotions at short presentation times ranging from 12.5 to 100 ms. We measured the effect of variations in presentation time on identification accuracy for 36 different facial expressions of mental states based on the Reading the Mind in the Eyes test (Baron-Cohen et al., 2001) and compared these results to those for corresponding stimuli from the McGill Face database, a new set of images depicting mental states portrayed by professional actors. Our results show that subjects are able to identify facial expressions of complex mental states at very brief presentation times. The kind of cognition involved in the correct identification of facial expressions of complex mental states at very short presentation times suggests a fast, automatic Type-1 cognition.
Schmidtmann, G., Jordan, M., Loong, J.T., Logan, A.J., Carbon, C.C., & Gold, I. Temporal processing of facial expressions of mental states. bioRxiv 602375; doi: https://doi.org/10.1101/602375 [PDF]