PSYSCIQ 2008-4

CONTENTS
MAIKE MALDA, FONS J. R. VAN DE VIJVER, KRISHNAMACHARI SRINIVASAN, CATHERINE TRANSLER, PRATHIMA SUKUMAR & KIRTHI RAO
Adapting a cognitive test for a different culture: An illustration of qualitative procedures
Full article .pdf (Diamond Open Access)
THOMAS AUGUSTIN & TANJA ROSCHER
Empirical evaluation of the near-miss-to-Weber's law: a visual discrimination experiment
Full article .pdf (Diamond Open Access)
GUILHERME WOOD, KLAUS WILLMES, HANS-CHRISTOPH NUERK & MARTIN H. FISCHER
On the cognitive link between space and number: a meta-analysis of the SNARC effect
Full article .pdf (Diamond Open Access)
FUMIKO GOTOH, TADASHI KIKUCHI & CHRISTIAN STAMOV ROßNAGEL
Emotional interference in enumeration: A working memory perspective
Full article .pdf (Diamond Open Access)
JEANNE A. TERESI, MILDRED RAMIREZ, JIN-SHEI LAI & STEPHANIE SILVER
Occurrences and sources of Differential Item Functioning (DIF) in patient-reported outcome measures: Description of DIF methods, and review of measures of depression, quality of life and general health
Full article .pdf (Diamond Open Access)
MICHAEL B. STEINBORN, HAGEN C. FLEHMIG, KARL WESTHOFF & ROBERT LANGNER
Predicting school achievement from self-paced continuous performance: Examining the contributions of response speed, accuracy, and response speed variability
Full article .pdf (Diamond Open Access)
ABSTRACTS
Adapting a cognitive test for a different culture: An illustration of qualitative procedures
MAIKE MALDA, FONS J. R. VAN DE VIJVER, KRISHNAMACHARI SRINIVASAN, CATHERINE TRANSLER, PRATHIMA SUKUMAR & KIRTHI RAO
Abstract
We describe and apply a judgmental (qualitative) procedure for cognitive test adaptations. The procedure consists of iterations of translating, piloting, and modifying the instrument. We distinguish five types of adaptations for cognitive instruments, based on the underlying source (construct, language, culture, theory, and familiarity). The proposed procedure is applied to adapt the Kaufman Assessment Battery for Children, second edition (KABC-II) for 6- to 10-year-old Kannada-speaking children of low socioeconomic status in Bangalore, India. Each subtest needed extensive adaptations, illustrating that the transfer of Western cognitive instruments to a non-Westernized context requires a careful analysis of their appropriateness. Adaptations of test instructions, item content of both verbal and non-verbal tests, and item order were needed. We conclude that the qualitative approach adopted here was adequate to identify various problems with the application of the KABC-II in our sample that would have remained unnoticed with a straightforward translation of the original instrument.
Keywords: Kaufman Assessment Battery for Children, Cognitive Test, Adaptation, Bias, Culture
Maike Malda, Ph.D.
Department of Psychology
Tilburg University
PO Box 90153
5000 LE Tilburg, The Netherlands
E-Mail: m.malda@uvt.nl
Empirical evaluation of the near-miss-to-Weber’s law: a visual discrimination experiment
THOMAS AUGUSTIN & TANJA ROSCHER
Abstract
Many pure tone intensity discrimination data support the hypothesis that the sensitivity function grows as a power function of the stimulus intensity (near-miss-to-Weber's law). To test whether the near-miss-to-Weber's law also fits empirical data from sensory modalities other than hearing, the participants of the experiment compared the perceived area of squares presented on a computer screen. The results indicate an almost perfect fit of the near-miss-to-Weber's law, in line with many pure tone intensity discrimination data. In contrast to a recent study in psychoacoustics, however, the exponent in the near-miss-to-Weber's law does not vary with the criterion value used to define "just noticeably different". Furthermore, we provide evidence that, for a majority of the participants, Weber's classical law fits the data as well as the near-miss-to-Weber's law.
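For readers less familiar with the two models under comparison, a minimal formal sketch (notation ours, not taken from the article): Weber's classical law holds that the difference threshold is proportional to stimulus intensity, whereas the near-miss variant lets it grow as a power function with an exponent close to, but typically below, one:

\[ \text{Weber's law: } \Delta I(I) = k\,I \qquad\qquad \text{near-miss: } \Delta I(I) = k\,I^{a}, \quad a \approx 1 . \]

Under the near-miss, the Weber fraction \( \Delta I / I = k\,I^{a-1} \) declines slowly with intensity rather than staying constant; Weber's classical law is recovered as the special case \( a = 1 \).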
Keywords: Weber’s law, near-miss-to-Weber’s law, psychophysical model, size judgment, size discrimination, empirical evaluation
Thomas Augustin, Ph.D.
Department of Psychology
Karl-Franzens-Universität Graz
Universitätsplatz 2/III
8010 Graz, Austria
E-Mail: thomas.augustin@uni-graz.at
On the cognitive link between space and number: a meta-analysis of the SNARC effect
GUILHERME WOOD, KLAUS WILLMES, HANS-CHRISTOPH NUERK & MARTIN H. FISCHER
Abstract
An association between numbers and space (the SNARC effect) has been examined in an ever-growing literature. In the present quantitative meta-analysis, 46 studies with a total of 106 experiments and 2,206 participants were examined. Deeper number magnitude processing, as determined by task, stimulus, and participant characteristics, was associated with a stronger SNARC effect. In magnitude classification tasks the SNARC effect consistently assumed a categorical shape. Furthermore, the SNARC effect was found to increase with age from childhood to old age. No difference in the size of the SNARC effect due to the explicit use of imagery strategies was observed that could not be explained by increased reaction times. In general, these results corroborate the predictions of the dual-route model of the SNARC effect regarding the activation of number magnitude representation and suggest that automaticity may play a role in the development of the association between numbers and space across the lifespan.
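As background for the effect sizes aggregated here, the SNARC effect is commonly quantified per participant as the slope of a regression of the right-hand minus left-hand response-time difference on number magnitude (the notation below is ours, a generic sketch rather than the specific procedure of this meta-analysis):

\[ dRT(n) = RT_{\text{right}}(n) - RT_{\text{left}}(n) = \beta_0 + \beta_1\, n + \varepsilon , \]

where a reliably negative slope \( \beta_1 \) (faster right-hand responses to larger numbers \( n \)) indicates a SNARC effect. A "categorical shape" means that \( dRT \) depends mainly on whether \( n \) lies below or above the comparison standard rather than linearly on \( n \).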
Keywords: SNARC, mental number line, aging, imagery, meta-analysis
Guilherme Wood, Ph.D.
Department of Psychology
Paris-Lodron University of Salzburg
Hellbrunnerstrasse 34
5020 Salzburg, Austria
E-Mail: guilherme.wood@sbg.ac.at
Emotional interference in enumeration: A working memory perspective
FUMIKO GOTOH, TADASHI KIKUCHI & CHRISTIAN STAMOV ROßNAGEL
Abstract
We investigated the influence of emotional stimuli on enumeration. On each trial, a set of 1 to 10 affectively positive, negative, or neutral words was presented for 200 ms each. Participants counted the words after each trial. Error rates were higher and response times longer for negative and positive words than for neutral words. Most importantly, this effect appeared only for set sizes within the countable range (1 to 7 words), with no effect on error rates for sets of 1 to 3 items. The effect disappeared for set sizes in the uncountable range (i.e., 8 to 10 words). The results underline the important role of the central executive in enumeration.
Keywords: Enumeration, Working memory, Affective valence
Fumiko Gotoh, Ph.D.
University of Tsukuba
Tsukuba City
Ibaraki 305-8572, Japan
E-Mail: fumigoto@human.tsukuba.ac.jp
Occurrences and sources of Differential Item Functioning (DIF) in patient-reported outcome measures: Description of DIF methods, and review of measures of depression, quality of life and general health
JEANNE A. TERESI, MILDRED RAMIREZ, JIN-SHEI LAI & STEPHANIE SILVER
Abstract
Examination of the equivalence of measures involves several levels, including conceptual equivalence of meaning, as well as quantitative tests of differential item functioning (DIF). The purpose of this review is to examine DIF in patient-reported outcomes. Reviewed were measures of self-reported depression, quality of life (QoL) and general health. Most measures of depression contained large amounts of DIF, and the impact of DIF at the scale level was typically sizeable. The studies of QoL and health measures identified a moderate amount of DIF; however, many of these studies examined only one type of DIF (uniform). Relative to DIF analyses of depression measures, less analysis of the impact of DIF on QoL and health measures was performed, and the authors of these analyses generally did not recommend remedial action, with one notable exception. While these studies represent good beginning efforts to examine measurement equivalence in patient-reported outcome measures, more cross-validation work is required using other (often larger) samples of different ethnic and language groups, as well as other methods that permit more extensive analyses of the type of DIF, together with magnitude and impact.
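For orientation, the distinction between uniform and non-uniform DIF mentioned above can be illustrated with the common logistic-regression formulation (a generic sketch, not the specific models reviewed in the article). For item response \( Y \), trait level \( \theta \), and group indicator \( G \),

\[ \operatorname{logit} P(Y = 1 \mid \theta, G) = \beta_0 + \beta_1 \theta + \beta_2 G + \beta_3 (\theta \times G) , \]

where \( \beta_2 \neq 0 \) with \( \beta_3 = 0 \) indicates uniform DIF (a constant group difference at every trait level) and \( \beta_3 \neq 0 \) indicates non-uniform DIF (a group difference that changes with trait level).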
Keywords: Differential Item Functioning (DIF), measurement equivalence, patient-reported outcomes, quality of life, depression, general health
Jeanne A. Teresi, Ed.D., Ph.D.
Research Division
HHAR
5901 Palisade Avenue
Riverdale
New York 10471, USA
E-Mail: Teresimeas@aol.com
Predicting school achievement from self-paced continuous performance: Examining the contributions of response speed, accuracy, and response speed variability
MICHAEL B. STEINBORN, HAGEN C. FLEHMIG, KARL WESTHOFF & ROBERT LANGNER
Abstract
Trial-to-trial fluctuations in self-paced performance have long been considered an important aspect of an individual's performance. Whereas average response speed has been regarded as a cognitive factor indexing the speed of mental processing, response speed variability has been regarded as an energetic factor indexing an individual's capability to sustain mental processes over prolonged periods of time. Here we investigated whether response speed variability makes an incremental contribution, beyond mental speed, to predicting school achievement. A sample of 89 individuals was tested twice with the Serial Mental Addition and Comparison Task (SMACT) within a retest interval of three days. In addition to the conventional performance measures of speed (mean reaction time, MRT) and accuracy (error percentage, EP), we evaluated two intraindividual response speed variability measures, the standard deviation (SDRT) and the coefficient of variation (CVRT), with regard to their power to statistically predict secondary- and high-school achievement. In general, school performance was best predicted by MRT and not at all by EP. Response speed variability, especially CVRT, appeared to be a good predictor of school performance, especially of mathematics performance. Entering MRT and CVRT jointly as predictors in a multiple linear regression model, however, did not yield additional predictive value compared to the single-predictor model containing only MRT. A further interesting finding was that the performance measures were differentially predictive across genders. In sum, we suggest that response speed variability as indexed by CVRT is a candidate dimension for the assessment of sustained concentration performance. Before applying CVRT in practical assessment settings, however, additional research is required to elucidate the effects of different task factors (e.g., task length, task complexity, content domain) on the predictive power of this performance measure.
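The variability measures referred to in the abstract have standard definitions; as a brief sketch (notation ours), for a participant's reaction times \( RT_1, \dots, RT_N \),

\[ MRT = \frac{1}{N}\sum_{i=1}^{N} RT_i, \qquad SDRT = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\bigl(RT_i - MRT\bigr)^2}, \qquad CVRT = \frac{SDRT}{MRT} , \]

so CVRT expresses trial-to-trial fluctuation relative to a person's own speed and is thereby less confounded with overall slowness than SDRT.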
Keywords: concentration, attention, school achievement, reaction time variability, distractibility
Michael Steinborn, Ph.D.
Psychologisches Institut
Universität Tübingen
Friedrichstrasse 21
72072 Tübingen, Germany
E-Mail: michael.steinborn@uni-tuebingen.de
Psychology Science Quarterly
Volume 50 · 2008 · 4
Pabst, 2008
ISSN 1866-6140