Phone: +31 (0) 20 598 8913
After my Bachelor's in Artificial Intelligence at the University of Groningen, I moved to Amsterdam for a Master's programme at the VU. During that programme I became acquainted with the department of cognitive psychology, and worked as a research assistant for Martijn Meeter. For my Master's project I went to York University in Toronto, to the lab of John Tsotsos. There I worked on a framework to extend the selective tuning model of visual attention into a general-purpose system for visual problem solving.
In 2011 I started my PhD project with Martijn Meeter, in which we combined experimentation with computational modeling to study various questions regarding visual attention. For example, we constructed a neural model of how the oculomotor system gives rise to the trajectory of saccades, and I implemented saliency map models and applied them to study bottom-up visual priming. Experimentally, we investigated the role of long-term learning in the deployment of visual attention. Part of this work has been in collaboration with Per Sederberg.
Currently, I work as a postdoc for Chris Olivers, together with Sander Bohte and Pieter Roelfsema. We primarily use neural networks and reinforcement learning to study how the brain can flexibly switch between two or more working memory representations.
The visual system is a fascinating sensory system that transforms the wealth of information that falls on our retina into a neural representation. This representation can be used to detect and recognize objects and make simple judgments about them, determine how we should act on these objects, and importantly, where we should look or attend next.
This representation is dynamic: with different task demands, we can change the representation in our visual system. At the same time, we can use the same machinery to temporarily store and retrieve visual information, in what are called visual short-term memories. Part of my research focuses on how the brain learns what to do with the visual representation given certain task demands: is it all under our control? Or does much of it rely on automaticity and the implicit influence of our past experiences? To what extent does learning to control our visual attention from past experiences correspond to 'regular' long-term memory learning and retrieval?
Such control is not just necessary for processing information that is currently present: when we have one or more items in working memory, how do we regulate which of these representations determines our behavior? How do we integrate present information with that of one or more working memory items? Where do memory items 'go' while the visual system is busy solving a different task?
In search of answers to these questions, I use experimentation as well as computational modeling techniques. Experiments mostly involve (but are not limited to) behavioral studies and eye-tracking research. The models I use include neural network models of reinforcement learning, Bayesian graphical models, and several saliency map models.
W Kruijne & M Meeter (2017) You prime what you code: The fAIM model of priming of pop-out. PLoS ONE 12(11), e0187556
RM Mattiesing, W Kruijne, M Meeter & SA Los (2017) Timing a week later: The role of long-term memory in temporal preparation. Psychonomic Bulletin & Review, 1-6
SA Los, W Kruijne & M Meeter (2017) Hazard versus history: Temporal preparation is driven by past experience. Journal of Experimental Psychology: Human Perception and Performance 43(1), 78
W Kruijne & M Meeter (2016) Implicit short- and long-term memory direct our gaze in visual search. Attention, Perception, & Psychophysics 78(3), 761-773
W Kruijne & M Meeter (2016) Long-Term Priming of Visual Search Prevails Against the Passage of Time and Counteracting Instructions. Journal of Experimental Psychology: Learning, Memory, and Cognition
W Kruijne, JW Brascamp, Á Kristjánsson & M Meeter (2015) Can a single short-term mechanism account for priming of pop-out? Vision Research 115, 17-22
W Kruijne & M Meeter (2015) Explaining intertrial priming from the visual code. Journal of Vision 15(12), 1256
W Kruijne & M Meeter (2015) The long and the short of priming in visual search. Attention, Perception, & Psychophysics 77(5), 1558-1573
NC Anderson, E Ort, W Kruijne, M Meeter & M Donk (2015) It depends on when you look at it: Salience influences eye movements in natural scene viewing and search early in time. Journal of Vision 15(5), 9
JK Tsotsos & W Kruijne (2014) Cognitive programs: Software for attention's executive. Frontiers in Psychology 5, 1260
SA Los, W Kruijne & M Meeter (2014) Outlines of a multiple trace theory of temporal preparation. Frontiers in Psychology 5, 1058
W Kruijne & M Meeter (2014) The Long and the Short of Intertrial Priming. Journal of Vision 14(10), 708
W Kruijne, S Van der Stigchel & M Meeter (2014) A model of curved saccade trajectories: Spike rate adaptation in the brainstem as the cause of deviation away. Brain and Cognition 85, 259-270
E Dalmaijer, S Van der Stigchel, L van der Linden, W Kruijne, D Schreij & ... (2013) OpenSesame opens the door to open-source and user-friendly eye-tracking research. European Conference on Eye Movements
Z Wang, W Kruijne & J Theeuwes (2012) Lateral interactions in the superior colliculus produce saccade deviation in a neural field model. Vision Research 62, 66-74