Dr. Caspar Addyman is a Lecturer in the Department. He is a developmental psychologist interested in learning, laughter and behaviour change. The majority of his research is with babies. He has investigated how we acquire our first concepts, the statistical processes that help us get started with learning language and where our sense of time comes from.
Here, he looks at the last of these: how our brain tells the time.
It is a defining feature of the modern world that we all seem to be short of time, all the time. We are poor at managing our time. We are poor at estimating time. And until recently psychologists have been poor at explaining why. I believe my research on how the brain represents short intervals might give us a few clues. The short answer is that we're bad at it because our brains don't contain clocks. Instead we must guesstimate the passage of time based on how our memories fade.
This might not sound too radical to you but it goes against received wisdom. For 50 years the main explanation of how you judge intervals has been based on a little stopwatch in your head. This is known as the pacemaker-accumulator model, as it involves something that ticks (the pacemaker) and something that counts the ticks (the accumulator). This is a fairly intuitive idea but I would argue it is completely wrong.
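To make that intuition concrete, here is a minimal sketch of what such an internal stopwatch might look like: a pacemaker emitting slightly irregular ticks and an accumulator counting them up to a learned criterion. This is my own illustration, not any published implementation, and every name and number in it is an arbitrary choice.

```python
import random

def pacemaker_accumulator(target_s, mean_tick=0.1, jitter=0.03):
    """Toy internal stopwatch (illustrative parameters, not from the literature).

    The pacemaker emits ticks roughly every mean_tick seconds; the accumulator
    counts them and the model 'responds' once the count learned for target_s
    has been reached.
    """
    criterion = round(target_s / mean_tick)            # learned number of ticks
    elapsed = 0.0
    for _ in range(criterion):
        elapsed += max(0.0, random.gauss(mean_tick, jitter))  # one noisy tick
    return elapsed                                     # moment of the response

# On average the responses land near the 10-second target.
print(sum(pacemaker_accumulator(10) for _ in range(1000)) / 1000)
```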
It’s wrong for three main reasons. The biggest problem is that if we had an internal clock we would be a lot better at judging time than we are. Secondly, the clock model can’t explain how we judge time in retrospect. Thirdly, it can’t easily explain why time flies when you’re having fun.
My memory-based model of interval timing solves all these problems. Developed with colleagues at Birkbeck and Burgundy and called the Gaussian Activation Model of Interval Timing (GAMIT; French, Addyman, Mareschal & Thomas, 2014), it is much simpler than the name suggests. The key idea is that you estimate time passing by how your memories fade. The more time has passed, the fuzzier they are. The more things that happen to you, the faster they will fade and the faster you will feel time is passing.
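To give a rough feel for the key idea in code: the sketch below is not the published GAMIT model, just a toy of my own in which a single memory trace decays exponentially and elapsed time is read off by inverting that decay. The decay rate and noise level are invented for illustration.

```python
import math, random

DECAY = 0.1          # assumed decay rate of the memory trace (arbitrary)
NOISE = 0.02         # assumed noise when reading the trace (arbitrary)

def read_trace(true_elapsed):
    """Strength of a memory trace after true_elapsed seconds,
    decaying exponentially and read out with a little noise."""
    strength = math.exp(-DECAY * true_elapsed)
    return max(1e-6, strength + random.gauss(0.0, NOISE))

def estimate_elapsed(true_elapsed):
    """Guess how long ago the event was by inverting the decay curve."""
    return -math.log(read_trace(true_elapsed)) / DECAY

for t in (5, 10, 20):
    guesses = [estimate_elapsed(t) for _ in range(2000)]
    mean = sum(guesses) / len(guesses)
    sd = (sum((g - mean) ** 2 for g in guesses) / len(guesses)) ** 0.5
    print(t, round(mean, 1), round(sd, 1))   # spread grows with the interval
```

The point of the toy is only this: the older and flatter the trace, the more uncertainty the same read-out noise produces about when the event happened. The fading itself does the timing.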
You have no clock
Humans and rats are terrible at telling the time. Get us to press a lever when 10 seconds have passed and we will generally do so somewhere between 8 and 12 seconds. Correct on average, but with quite a bit of variation, and humans are no better than rats. If the interval is increased to 20 seconds our estimates are just as bad, spread out between 16 and 24 seconds. If the interval is twice as long, you (and your pet rat) are twice as bad.
This is where the real problem lies. If you had some sort of clock in your head then the random unreliability in your clock would average out the longer you ran it. Your errors should be proportionally smaller on longer intervals. Pacemaker-accumulator models normally get round this by saying that as the numbers get larger, counting gets harder. This is a kludge. In our model the errors are impossible to get around: as memories fade, uncertainty increases, and bad estimates are as good as it gets.
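Here is a quick back-of-the-envelope check of that argument, again with toy numbers of my own: if an estimate is the sum of many independent noisy ticks, its spread grows only with the square root of the number of ticks, so the error relative to the interval should shrink as intervals get longer — the opposite of the roughly constant 20 per cent spread described above.

```python
import random

def clock_estimate(interval_s, mean_tick=0.1, jitter=0.03):
    """Estimate from a tick-counting clock: a sum of independent noisy ticks."""
    n_ticks = round(interval_s / mean_tick)
    return sum(max(0.0, random.gauss(mean_tick, jitter)) for _ in range(n_ticks))

for interval in (10, 20, 40):
    runs = [clock_estimate(interval) for _ in range(2000)]
    mean = sum(runs) / len(runs)
    sd = (sum((r - mean) ** 2 for r in runs) / len(runs)) ** 0.5
    # Relative spread falls by roughly sqrt(2) each time the interval doubles,
    # unlike the roughly constant proportional errors people and rats show.
    print(interval, "s:", round(100 * sd / interval, 1), "% relative spread")
```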
Timing, all the time
Mental clocks have a second problem: what to time? It seems we can judge the time of any event we remember or can locate in the sequence of our experiences. How long ago did you start reading this article, this paragraph? When did the waiter leave our table? When did that blue square appear on the screen in this experiment I am in? If mental timing is done with a clock then either you need to start a separate timer for every single event or a master clock labels everything. The former would be highly wasteful, while the latter would be complex and couldn’t easily account for the errors we mentioned above.
With a memory model all this comes for free. When we access our memory of a past event, there is more uncertainty the longer ago it was. The instant the waiter leaves, the memory of that event is clear; with each passing moment the details become less so. What was he wearing? Was I leaning forward or backward? What did you just say? Our brains are used to dealing with uncertainty; in our view, timing is just another example.
Time flies when you are having fun
Imagine you are about to give a five-minute interview on live television. You are off camera waiting your turn and there is nothing for you to do but focus on the passing time. Five minutes feels like forever. Then it is your turn and suddenly everything is happening at once. You get to the end in no time and are surprised it is over so soon. But then, looking back on it, the pattern is reversed. You remember little about the waiting but the interview is full of events. If you didn’t know otherwise you would swear the interview was longer than the wait.
The difference between these so-called prospective and retrospective timing judgements has been confirmed in a large meta-analysis of 117 studies (Block, Hancock & Zakay, 2010). It is so striking that most researchers take it to mean there must be two independent timing systems. This seemed ridiculous to us, and our model was largely developed to unify these two effects.
In our view, judging the passing of time is a combination of two things: how much is happening and how much attention we are paying to the passage of time. Our model quantifies how these two factors interact to create distortions of time. The more that happens, the faster your memories fade, making recent events feel further in the past. Yet the more that is happening, the less attention you can give to the passing of time, and it feels like things are happening faster.
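To show the shape of that interaction, here is a purely hypothetical toy of my own — not the published GAMIT equations — in which memory decay speeds up with the event rate (so the interval looks longer in retrospect) while attention samples of the passing time are crowded out by events (so it feels shorter in the moment). Every parameter is invented for illustration.

```python
def toy_time_feel(true_minutes, events_per_minute,
                  decay_gain=0.05, base_samples=10.0):
    """Hypothetical toy, not the GAMIT equations: how an interval might feel
    prospectively (in the moment) and retrospectively (looking back) as the
    amount happening goes up. All parameters are invented for illustration."""
    # Looking back: busier intervals leave more-faded memories, which reads
    # as more time having passed.
    felt_in_retrospect = true_minutes * (1 + decay_gain * events_per_minute)
    # In the moment: events crowd out attention to the passage of time, so
    # fewer "how long has it been?" checks accumulate and time seems to fly.
    felt_in_the_moment = true_minutes * base_samples / (base_samples + events_per_minute)
    return felt_in_the_moment, felt_in_retrospect

print(toy_time_feel(5, events_per_minute=1))   # the wait: drags in the moment
print(toy_time_feel(5, events_per_minute=30))  # the interview: flies by, looks long later
```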
Time will tell
We are still developing our model and it’s not the only game in town. In a recent review of the field we found over 20 different approaches to how we judge short intervals (Addyman, French & Thomas, 2016). And then there are whole other classes of models that deal with very fast timing, under a second, or with much longer spans, on the order of days. In the realm of our daily experience, from seconds to minutes, we believe that our model is the most elegant and intuitive. But the real test is how well it fits the data. And that’s what we are working on now.
Dr. Caspar Addyman tweets in real-time at @brainstraining
Postscript
One rather nice thing about this research project was that I experienced a genuine “Eureka moment”, a flash of insight where I unexpectedly solved a big problem.
The phrase derives from the story of the ancient Greek scientist Archimedes leaping from his bath shouting “Eureka!” (“I have it”) when he realised how to measure the volume of an irregularly shaped body by immersing it in water. No one knows if that story is true. But it has become a common stereotype of the scientific process that it proceeds through giant leaps of insight or discovery. Mostly, it doesn’t. Science, like life, is mostly hard work with the occasional bit of good luck. Ninety-nine percent perspiration and one percent inspiration, as Thomas Edison was fond of saying.
My lucky moment came one morning on the Victoria line somewhat north of Pimlico. For several months I had been struggling with how to build the effect of attention into our original computer model of fading memory. I had no clear ideas but was supposed to have a meeting that lunchtime explaining my progress. Sleepy and forlorn, I just stared out of the window. Whereupon the answer just popped into my head: adding a loop to our network would let it look at its own previous estimates. When more events were competing for attention, those loops would happen less frequently.
I knew straight away it would work and went happily to my meeting. By the end of the day, I had a computer model that did as I expected, and within a week the paper was written (Addyman & Mareschal, 2014). Needless to say, in a few hundred tube journeys since then, it hasn’t happened again. Maybe Edison was exaggerating.
This article is published with a Creative Commons Attribution NoDerivatives licence [http://creativecommons.org/licenses/by-nd/4.0/], so you can republish it for free providing you link to this original copy.