Half a year into my new life as an industry person, I can report that “data science” is not only an incredibly hip buzzword (everyone does data science, everyone wants to hire data scientists), it is also an umbrella term for a number of quite different jobs involved with handling big (or even medium or small) data. This can be confusing for academics with quantitative training who want to move to industry.
This blog post summarizes the most important ones; I would also add straight and very high-level business analytics to the set:
Comments Off on There’s data science and data science
Marc Ernst and I have recently published a review article for Current Opinion in Behavioral Sciences about the research on temporal sensorimotor adaptation that we and others have conducted in recent years. Our review focuses on things that make temporal adaptation different from other cases of sensorimotor adaptation, such as adaptation to displacements of the visual field by prism goggles.
Sensorimotor timing plays an important role in the inference of agency, as actions always occur before their sensory effects. This temporal order constraint influences temporal adaptation. Moreover, depending on which action we choose to perform, it can be more or less easy to identify the presence of a delay in closed-loop sensorimotor control. Therefore, delays are sometimes confused with superficially similar sensory perturbations, such as changes in mass or spatial offsets, which then interferes with temporal adaptation. These properties make research on temporal adaptation difficult, interesting, and prone to confounds.
Illustration of the sensory feedback delays (visual, kinaesthetic) involved in the action of catching a ball in flight
I am particularly happy about the publication of this article because it gave us a chance to share the most important things that we have learned on this topic in a condensed and accessible format (even if some exciting results of ours still await publication – watch this space!). As I have recently left academia to take up a job in industry, it is important for me to wrap up (I will soon write a post about this, too). I hope our article inspires others to keep investigating this interesting topic and saves them from committing some not-so-obvious mistakes.
Citation: Rohde, M. & Ernst, M.O. (2016) Time, agency, and sensory feedback delays during action. Current Opinion in Behavioral Sciences 8, 193-199.
Email me or check Researchgate for the full text of the article.
Comments Off on Time, Delays, and Agency
Statistically optimal models of cue integration have been immensely influential in multisensory research over the past two decades. Our laboratory here in Bielefeld is one of the centers of this line of research, led by our head of department, Marc Ernst. As such, we are often asked to review manuscripts on optimal integration studies, including manuscripts from researchers who are new to the field and inexperienced with psychophysical methods and (Bayesian) computational modeling approaches. Unfortunately, such first contributions often use experimental paradigms and methods that are problematic and in some cases even render the results effectively meaningless.
For this reason, Loes van Dam, Marc Ernst and I have written an accessible and practical tutorial, a kind of “MLE for dummies” paper that points out the most important issues to consider when designing experiments to test for optimality of cue integration according to Maximum Likelihood Estimation (MLE). It is targeted at novice researchers, new graduate students entering the field, and researchers from neighbouring disciplines (neural imaging, clinical neuroscience, etc.). It can also be used for classroom teaching. As a supplement to this tutorial, Loes and I have also developed a Matlab toolbox for data analysis and an example experiment coded in Matlab (part of the toolbox). With this toolbox, novices can take their first steps by trying out MLE optimal integration first-hand and playing with the parameters described in the tutorial. Experienced researchers can also use the toolbox to analyze data from cue integration experiments. Enjoy!
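For readers who want a feel for the MLE scheme before diving into the tutorial, its core prediction can be written down in a few lines. This is a generic Python sketch of the standard textbook formulas (reliability-weighted averaging), not code from our Matlab toolbox, and the numbers are made up:

```python
import numpy as np

def mle_integrate(means, sigmas):
    """Combine redundant cue estimates by reliability-weighted averaging.

    Each cue i is modelled as a Gaussian estimate N(mean_i, sigma_i^2);
    under MLE, cues are weighted by their reliability (inverse variance).
    """
    means = np.asarray(means, dtype=float)
    sigmas = np.asarray(sigmas, dtype=float)
    reliabilities = 1.0 / sigmas**2
    weights = reliabilities / reliabilities.sum()
    combined_mean = float(np.sum(weights * means))
    combined_sigma = float(np.sqrt(1.0 / reliabilities.sum()))
    return combined_mean, combined_sigma

# Example: a precise cue (sigma = 1) and a noisy cue (sigma = 2)
mean, sigma = mle_integrate(means=[10.0, 14.0], sigmas=[1.0, 2.0])
# mean = 10.8 (pulled towards the reliable cue), sigma ≈ 0.89 (< 1)
```

The combined estimate lies closer to the more reliable cue, and the combined standard deviation is below that of either single cue – the two signatures of optimal integration that the tutorial shows how to test for.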
If you don’t have a subscription to Multisensory Research, you can also access a copy of the full text on Researchgate or email me.
Optimal integration of redundant cues (unimodal and optimal bimodal Likelihood functions)
Comments Off on …and now for something completely different
Marc Ernst, Meike Scheller and I have published results from a study in which we looked at how the order of action and external feedback influences our subjective sense of agency.
One would assume that agency can only be experienced if feedback temporally follows the action. One would also assume that this is a fixed constraint, as the law that the cause comes before the effect is one of the most fundamental laws of physics. Against this intuition, we found that humans have a plastic boundary for deciding whether or not they might have caused something that happened just before their action, and that this boundary can be relaxed postdictively, based on whether or not later events favor a different causal interpretation of the scene.
Check out the final draft of our paper for more details.
Comments Off on Effect before the cause
The International Conference on Timing and Time Perception in Corfu was marvellous. It was officially the last meeting of the Timely network but I am sure it will continue.
So, what is Timely? It is a network of people working on timing and time perception across disciplines and national borders. It has been built up and maintained over the past several years by the tireless Argiro Vatakis and is marked by a cooperative spirit, strong female participation at all levels of seniority, and a general convivial feel that does not seem to know concepts like discipline chauvinism or hierarchy.
Image from the ICTTP/Timely Twitter page
Nearly every session had a highlight for me, but I want to point out some contributions I particularly enjoyed.
- The first two talks of the first session from Concetta Morrone’s lab on Rhythmic oscillations of visual contrast sensitivity triggered by voluntary action and time compression of tactile stimuli – something to look out for in upcoming publications!
- Alan Wing’s keynote talk on Coordination in Group Timing.
- Martin Riemer’s talk on Time asymmetric presuppositions in perception research.
- Out of my ordinary comfort zone, I also really enjoyed Jennifer Coull’s talk about an fMRI study on the hazard function and how activity builds up with increasing temporal predictability.
There were many other notable posters, talks and symposia – not to mention the abundance of positive and constructive feedback I got for my own presentation. Best scientific meeting in a long time. And, not to forget – Corfu in spring.
Comments Off on Timely ICTTP
We have started a new project to study non-linear motor learning with redundant degrees of freedom in humans and robots, in collaboration with Jochen Steil and Kenichi Narioka from the Cognitive Robotics lab (CoR-Lab) at the University of Bielefeld.
Motivated by observations from early infant development, the researchers of the CoR-Lab have conceived Goal Babbling as a particularly efficient means to directly learn inverse models. This technique has, for instance, been used to learn the inverse kinematics of Festo’s otherwise intractable bionic elephant trunk robot:
We will now study whether humans also perform goal babbling of this kind to learn new sensorimotor skills…
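For the curious, the gist of learning an inverse model directly from self-generated, goal-directed movements can be sketched in a few lines of Python. The toy plant below (a linear, redundant mapping from three joints to one task dimension) and all parameters are illustrative assumptions of mine, not the CoR-Lab’s actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy redundant plant: three joint angles map to one task dimension.
A = np.array([1.0, 0.5, 0.3])            # observed outcome x = A @ q

# Inverse model estimate: command q = W * goal (one of infinitely many
# valid inverses; the self-generated training data decide which is found).
W = np.zeros(3)
lr = 0.05

for _ in range(5000):
    goal = rng.uniform(-1.0, 1.0)            # sample a target outcome
    q = W * goal + rng.normal(0.0, 0.1, 3)   # act with exploratory noise
    x = A @ q                                # observe the actual outcome
    # Regress commands onto observed outcomes (normalized LMS update):
    W += lr * (q - W * x) * x / (x * x + 1e-2)

# After training, commanding q = W * goal approximately reproduces the
# goal, i.e. A @ W ≈ 1, even though the plant is redundant.
```

The point of the sketch: the learner never inverts the plant analytically; it simply tries to reach goals with its current inverse estimate and regresses its commands onto the outcomes it actually observes, which resolves the redundancy by itself.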
Comments Off on CITEC go2goal project kick-off
Many people will know about the classical experiments on adaptation to prismatic displacement or inversion: If humans wear goggles that spatially distort their field of view, this initially severely disrupts their behaviour. With extended training, however, they adapt, and not only by neutralizing the disruption in motor behaviour, but also by restoring the perception of space to how it was before the goggles were put on – to the point that wearing the distorting goggles feels more natural than not wearing them.
For some decades now, researchers have been trying to find out if the same is also possible in time – if we adapt our behaviour to compensate for a feedback delay (imagine a slowly reacting computer game), will this also change our time perception? Does that mean that after adaptation, it feels more natural to have the delay than to act in real time? There remains considerable controversy about this issue within the scientific community – some researchers believe it is possible, some that we can only compensate on the behavioural level, but not in time perception, and yet others seem to think we cannot adapt to delays at all. The results we present in our newest paper, “Predictability is necessary for closed-loop feedback delay adaptation”, help to clear up some of this controversy by showing the importance of environmental predictability for delay adaptation in both behaviour and perception.
So my tentative answer to the question “Do we adapt to delays like we adapt to prismatic distortions?” is: yes – no – maybe – in a way – sometimes. Watch this space. One thing I do know, however, is that the phenomenology of the aftereffect (i.e., having the delay removed after adaptation) is pretty weird:
Example post-test behaviour from the study. The participant tries to follow the black dot with the red dot by moving her hand left and right.
If you move your hand expecting to track a target, and the cursor you control moves even before the target (because you incorporated the additional delay into your time perception), it really messes with your sense of agency – it is as if the cursor runs away from you, rather than being controlled by you, because it moves before you feel you move. Another issue that surely deserves further scientific attention…
Comments Off on Predictability and delay adaptation
Already my earlier work on proprioceptive drift in the Rubber Hand Illusion left me confused and somewhat skeptical that the subjective experience of owning a rubber hand can be measured by proxy of a more objective measure, like recalibration of perceived hand position. When we measured the results described in my new paper: The Human Touch: Skin Temperature during the Rubber Hand Illusion in Manual and Automated Stroking Procedures, this confusion grew even larger.
To provide some scientific background: if humans see a rubber hand being stroked at the exact time and location at which they feel their own, occluded hand being stroked, they get the strong and uncanny subjective feeling that this hand is somehow a part of their body (Rubber Hand Illusion). Among the physiological correlates of the feeling that the rubber hand is part of one’s body is a temperature drop in the stimulated hand (Moseley et al., 2008, PNAS). In the MSc project of Andrew Wold (in collaboration with his former boss Hans-Otto Karnath, Tübingen, 2010), we tried to replicate this result in the robotic stroking setup that I developed for my earlier research. This setup gives rise to a powerful illusory feeling of owning the rubber hand.
Main result: hand cooling for the robotic stroking (top) and manual stroking (bottom) setup
Despite our best efforts, we could not replicate the cooling effect; we did not even observe a trend. However, when we copied the original, manual stroking procedure, we observed the same hand cooling reported in the literature (even though it also occurred, to a lesser extent, in the no-vision control condition). This difference between setups does not correspond to a difference in subjective experience, and I have no good explanation for why this is the case – the only thing I can conclude from this research is that this temperature drop, too, is not directly linked to subjectively experienced body ownership in the RHI and is evidently driven or modulated by other, external factors. The multisensory integration processes involved in body image perception are surely among the least well understood in the field and are truly difficult to study.
Comments Off on Cold fingers need a Human Touch
The second one of three studies on the links between visuomotor timing, the sense of agency and time perception is out:
Asymmetries in visuomotor recalibration of time perception: Does causal binding distort the window of integration? Marieke Rohde, Leonie Greiner, Marc O. Ernst, Acta Psychologica, in press.
This is a follow-up study to our previous work on the recalibration of the point of perceived simultaneity to vision-lead and vision-lag visuomotor discrepancies – can you adjust your perception of presentness to a stimulus that always occurs even before you press a button? In the first study, we asked participants only to rate the temporal order of flash and button press events and found that, if trained to a discrepancy, the decision boundary changes, independent of the direction of the training discrepancy (vision first or movement first).
In the present study, participants also indicated the perceived length of the interval between a flash and a button press, with a surprising result: Depending on the training stimulus (vision lead or vision lag), the size of the temporal window of perceived simultaneity grows and shrinks on the side of the visual lag only. So even if we can adapt in both directions, the recalibration process that seemed symmetrical after our first study really is not symmetrical at all!
It appears that the temporal asymmetry of agency – the cause (= press) has to come before the effect (= flash) – strongly influences the way in which we adapt and reshape our experience of relative visuomotor timing. If you are interested in the result, plus some additional musings on what this has to do with intentional binding and how different temporal psychophysics tasks relate and should be modelled, check out the paper.
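As background for readers new to temporal psychophysics: quantities like the decision boundary (point of subjective simultaneity, PSS) and the temporal sensitivity (just noticeable difference, JND) are read off a psychometric function fitted to the order judgements. Here is a minimal Python sketch using a probit-linear fit, with made-up observer parameters – this is not our actual data or analysis pipeline:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(2)
phi = NormalDist()   # standard normal, for cdf and inverse cdf

# Hypothetical onset asynchronies (ms): flash time minus press time.
soas = np.array([-150.0, -100.0, -50.0, 0.0, 50.0, 100.0, 150.0])
true_pss, true_jnd = 20.0, 80.0      # assumed observer parameters
n_trials = 200                       # trials per asynchrony

# Simulate "press first" judgements from a cumulative-Gaussian observer.
p_true = np.array([phi.cdf((s - true_pss) / true_jnd) for s in soas])
k = rng.binomial(n_trials, p_true)

# Probit-transform the proportions and fit a line:
# inv_cdf(p) = (soa - PSS) / JND, so slope = 1/JND, intercept = -PSS/JND.
p_obs = np.clip(k / n_trials, 0.01, 0.99)        # keep inv_cdf finite
z = np.array([phi.inv_cdf(p) for p in p_obs])
slope, intercept = np.polyfit(soas, z, 1)
jnd_hat = 1.0 / slope
pss_hat = -intercept * jnd_hat
```

In a recalibration paradigm of the kind described above, a shift of the fitted boundary (`pss_hat`) after training to a discrepancy is the recalibration effect.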
Comments Off on The Causal Asymmetry of Agency and Temporal Recalibration
An experiment we did on the recalibration of visuomotor simultaneity perception to vision-lead and movement-lead temporal discrepancies was published as part of the research topic “time and causality” in Frontiers in Psychology. This is the first in a series of experiments where we use signals from early movement onset to predict the timing of a button press (see Figure below) and thus be able to present visual stimuli even before the button press.
Figure: Predicting the timing of a button press in real time from early movement onset
In this first paper, we demonstrate that humans can adapt their perception of simultaneity to both movement-lead and vision-lead temporal discrepancies. However, a more complete picture of temporal recalibration arises only if follow-up work (under review) is taken into account. Therefore, I do not want to comment on the results themselves at this moment in time.
Instead I want to make some remarks about the idea of “tricking” the underlying temporal structure of cause and effect by predicting an action and presenting visual events even before the voluntary movement happens. In the study of sensorimotor timing and time perception, the relative timing of voluntary actions and sensory events cannot be manipulated as easily as, for instance, spatial offsets or the relative timing of external sensory events. This is because the human decides when to act, and this decision in turn depends on previous sensory inputs, provided they occur early enough to be reacted to. How is it possible to present a signal just before a voluntary action?
I was inspired by previous research by Stetson et al. (2006), who estimated the time of a button press from previous rhythmic actions or a “go signal” to probabilistically time visual stimuli to occur before a voluntary button press. This is a clever idea. Yet, there is likely too much noise in the prediction to use this kind of estimate for recalibration. Also, I couldn’t help feeling that the existence of a clearly perceptible lead event could be a confound in this kind of research. Therefore, I got very excited when I read Dennett & Kinsbourne’s (1992) article on temporal consciousness, where, among other things, they discuss Grey Walter’s reports on using pre-motor brain activity in neurosurgical patients to trigger events in the real world, thereby finding a “shortcut” for inherent sensorimotor latencies.
Figure: Illustration how I envisioned the BCI-based prediction to work (from DFG grant proposal)
It is well established that neural correlates of voluntary action occur even before humans become aware of committing to the execution of an action (Readiness Potential or Contingent Negative Variation; e.g., Libet et al., 1983). Even if it seems odd that Grey Walter’s rather revolutionary work on using these signals to generate actions has never been presented in written form, the reports were encouraging enough to start a project on using a brain–computer interface (BCI) in real time for the generation of psychophysical stimuli. This work was done together with Nicholas Del Grosso and Michael Barnett-Cowan at the Max Planck Institute for Biological Cybernetics in Tübingen in 2010/2011. We used an EEG system there and received both support and encouragement for this project from researchers in EEG imaging and BCI, meaning that the idea was, at least superficially, not completely crazy.
Still, at the end of the day, this project turned out to be too ambitious. I was humbled when realising how noisy EEG data are on a trial-by-trial basis and have huge respect for BCI researchers using EEG. Their algorithms involve cutting-edge machine learning techniques that detect non-linear interactions in time, space and the frequency domain together, using built-in knowledge about the underlying neural processes and training both the participant to the algorithm and the algorithm to the participant. Such filters can thus detect, in real time and on a trial-by-trial basis, the neural correlate of an action that does not even take place (ca. 80% classification success). Yet, for our purposes, i.e., to reliably predict a motor event before it takes place with a more or less constant temporal offset, these algorithms did not really work. The strongest time-domain signals (e.g., P300) occur after the action, not before, and any correlate prior to action will likely be detected with considerable variability concerning the exact time of detection, if at all. Of course, it is possible that our limited experience with either EEG measurement or non-linear filtering was the root of the problem. Yet, from the literature and my experience, I think that there are currently no tools sensitive enough to predict a button press in real time with high temporal accuracy on a trial-by-trial basis using EEG.
Thus, when I moved back into the realm of behavioural prediction, it was at first out of need. In retrospect, however, there were also conceptual, not just technical, reasons to discontinue research using BCI for this kind of stimulus generation. In our current research (as described in the paper), we use Phantom force-feedback devices (Sensable Technologies Inc.) to haptically display buttons. These devices track participants’ movements as well as display forces, so when participants initiate the press movement, this early movement onset can be registered and used to predict the timing of the button press in real time, in order to display a visual flash between 200 ms and 10 ms beforehand (with some error, of course). Using this technique, we encountered a problem. Participants started to construct causal shortcuts to explain why events reliably happened before they performed an action. For instance, they speculated about changes in the sensitivity of the button. When we tried to address this problem by making participants perform a reach through the air before they pressed the button and used this movement to predict the button press, they started speculating about a light barrier installed in the set-up. Only when we verbally/cognitively provided participants with alternative causal origins for these early flashes, such as another person performing the same experiment in a different room or the computer randomly producing flashes at certain times, did participants stop deriving causal shortcuts and just do the task. (This corresponds to a relaxation of the criterion of exclusivity; Wegner & Wheatley, 1999.)
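To give a flavour of how such behavioural prediction can work in principle: register early movement onset with a velocity threshold and extrapolate the press time from the typical movement duration of previous trials. The minimum-jerk trajectory, sampling rate, threshold and durations below are all made-up illustrative values, not our actual Phantom implementation:

```python
import numpy as np

# Simulated 1-kHz position trace of a reach toward a button 10 cm away,
# with a minimum-jerk profile; the movement starts at t = 100 ms.
fs = 1000.0
t = np.arange(0.0, 0.5, 1.0 / fs)
onset_true, duration = 0.10, 0.30
tau = np.clip((t - onset_true) / duration, 0.0, 1.0)
pos = 10.0 * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)   # cm

# Online prediction: detect movement onset via a velocity threshold,
# then extrapolate the press time from the typical movement duration.
vel = np.gradient(pos, 1.0 / fs)                # cm/s
onset_t = t[np.argmax(vel > 5.0)]               # first supra-threshold sample
mean_duration = 0.25                            # assumed, from earlier trials
press_pred = onset_t + mean_duration
flash_t = press_pred - 0.05                     # flash 50 ms before the press

# Ground truth: the "press" happens when the hand nearly reaches the button.
press_true = t[np.argmax(pos >= 9.9)]
```

In practice, trial-to-trial variability in movement duration adds error to `press_pred`, which is why the real flash timing is only accurate up to some error, as noted above.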
In this kind of research, the perception of causality and the perception of time are intricately linked, and it is hard to eliminate possible confounds between the two, as a change in the one is likely preceded, followed or accompanied by a change in the other. The best way of controlling these appears to be both on the level of the stimulus and on the level of subjects’ beliefs about the causal structure underlying the task. Therefore, I am rather convinced that, had we succeeded with the BCI approach, this effort would have been in vain. Subjects would have also constructed external causal shortcuts, such as a light barrier, to explain the fact that stimuli occur prior to their movement.
- Dennett, D. C., & Kinsbourne, M. (1992). Time and the observer: The where and when of consciousness in the brain. Behavioral and Brain Sciences, 15, 193–247.
- Libet, B., Gleason, C. A., Wright, E. W., & Pearl, D. K. (1983). Time of conscious intention to act in relation to onset of cerebral activity (readiness-potential). The unconscious initiation of a freely voluntary act. Brain, 106(3), 623–642.
- Rohde, M., & Ernst, M. O. (2013). To lead and to lag – forward and backward recalibration of perceived visuo-motor simultaneity. Frontiers in Psychology, 3, 599. Research topic: Time and Causality – Frontiers in Perception Science.
- Stetson, C., Cui, X., Montague, P. R., & Eagleman, D. M. (2006). Motor-sensory recalibration leads to an illusory reversal of action and sensation. Neuron, 51, 651–659.
- Wegner, D. M., & Wheatley, T. P. (1999). Apparent mental causation: Sources of the experience of will. American Psychologist, 54, 480–492.
Comments Off on Inverting the Order of Cause and Effect for Psychophysics Stimuli