By Richard Fisher
I’m looking at a warning sign inside a laboratory in London. “Do not touch the maser,” it reads. It’s attached to a tall black box, on wheels, mounted in a steel protective case.
Turns out it’s a pretty important box, and the sign is there for a reason. It’s not dangerous, but if I were to meddle with the device, it might just disrupt time itself.
It’s one of a few such devices held at the National Physical Laboratory in south-west London, helping to ensure that the world has an accurate shared sense of seconds, minutes and hours. They’re called hydrogen masers, and they are extremely important atomic clocks. Along with around 400 others, placed all around the globe, they help the world define what time it is, right now, down to the nanosecond. Without these clocks – and the people, technology and procedures around them – the modern world would slowly drift into chaos. For many industries and technologies we rely upon, from satellite navigation to mobile phones, time is the “hidden utility”.
So, how did we arrive at this shared system of timekeeping in the first place, how does it stay accurate, and how might it evolve in the future? The answers involve looking beyond the clockface to explore what time actually is. Dig a little deeper, and you soon discover that time is more of a human construct than first appears.
It wasn’t always the case that everyone in the world kept the same time. For centuries, it was impossible, and time could only be defined locally by the nearest clock. At one place it was midday, but down the road it was 12:15. As recently as the 1800s, the US was operating with hundreds of different time standards, defined by cities and local railroad managers.
Part of the reason was that there was no feasible way to synchronise every clock in a country, let alone right across the Earth. For much of human history, this didn’t matter: people worked when they needed to, didn’t travel far, and if they wanted to know the time, could find out by checking a nearby sundial, town clock, or listening for church bells, or a call to prayer.
However, as the industrial age took off, it became clear that this state of affairs could not continue. In some cases, it was deadly. For example, in New England in the mid-1800s, two trains collided head-on, killing 14 people, because one of the conductors was using a “poor borrowed watch”, which was misaligned with his colleague’s.
To operate effectively, growing economies needed a better shared sense of accurate time: so that factories could employ workforces on the same hours, trains could leave and arrive when they were supposed to, and bankers could time-stamp financial transactions.
As the historian Lewis Mumford once noted, it was therefore the clock, not the steam engine, that was the most important machine of the Industrial Revolution. Steam engines may have powered factories and transport, but they couldn’t synchronise people and their activities.
For a while, the premier arbiter of this new shared time was Greenwich in London. Advanced mechanical clocks kept there showed the “true” time: Greenwich Mean Time (GMT). In 1833, timekeepers installed a ball on a mast at the Royal Observatory in Greenwich. It would drop at 13:00 each day, so that merchants, factories and banks could readjust their drifting clocks.
A few years later, GMT was distributed by telegraph nationwide as “railway time” – ensuring that the whole UK train network was aligned. In the 1880s, the Greenwich time signal was sent across the Atlantic via submarine cable to Harvard in Cambridge, Massachusetts. And in 1884, at the International Meridian Conference in Washington DC, more than 25 countries decided that GMT should become the international time standard.
As the decades passed, however, it became clear that a better way of synchronising time was required. The timekeepers of Greenwich may have laid claim to operating some of the world’s most accurate clocks, but they based their calculations on an unreliable reference: the time it took for Earth to spin through one rotation.
To deliver accurate time, all clocks require a periodic, repeating process – whether it’s a swinging pendulum, or the electronic oscillations of a quartz crystal. The clocks at Greenwich were calibrated using the time it took for the Sun to reach the same position in the sky after a day. Their pendulum was therefore the Earth itself, spinning at a seemingly predictable rate. (This also applied to Universal Time, which replaced GMT in 1928.)
However, in the 20th Century scientists realised that our planet’s rotation speeds up and slows down over the years, due to gravitational effects from the Moon, Sun and other planets, geological shifts within the core and mantle, and even oceanic and climatic changes. In 1900, it was spinning almost 4 milliseconds slower, on average, than it was at the turn of the 21st Century. So while the world’s best timekeepers could claim greater accuracy than the average watch or grandfather clock, they themselves were wrong about the “true” time.
Around the same period, quantum physicists suggested that atoms might offer a far better way of keeping time than Earth’s rotation. Apply a specific frequency of electromagnetic radiation to an atom and it transitions between energy levels. You can use an electronic counter to keep track of these transitions. Like a swinging pendulum, this makes for a stable periodic process on which to build a timescale. It would prove to be the basis for the “atomic clock”.
Atomic clocks keep time far more accurately than any clock based on Earth’s rotation – so accurately, in fact, that if we based our world entirely on them, clock time would eventually drift away from night and day, until the Sun rose at 18:00 in the evening. It’s why the world’s timekeepers add leap seconds every so often.
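To see why the odd leap second is enough to keep clocks and daylight aligned, consider a rough back-of-the-envelope calculation. The 1.5 ms mismatch below is purely illustrative, not a measured value:

```python
# Purely illustrative: suppose the mean solar day ran 1.5 ms longer
# than the 86,400 SI seconds an atomic clock counts out. The mismatch
# accumulates day after day, and after a year or two it adds up to a
# whole second -- roughly when timekeepers would insert a leap second.
excess_ms_per_day = 1.5  # hypothetical mismatch between day and clock
drift_per_year_s = excess_ms_per_day / 1000 * 365.25

print(round(drift_per_year_s, 2))  # ~0.55 s of drift per year
```

At that (invented) rate, atomic time and solar time would part company by a full second roughly every two years.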
The hydrogen masers at NPL in London are some of the world’s most important atomic clocks. There are several hundred more of them around the world, operated by national metrology institutes, and they are the new arbiters of time for all of us. But it’s not quite as simple as reading out the time from them: no atomic clock is perfect, due to things like local gravitational effects or differences between their electronics.
Metrologists therefore need to iron out those imperfections. Here’s how that works: a lab like NPL records and refines timing information from its bank of atomic clocks – the hydrogen masers – applying the occasional correction if a clock appears to be drifting. (Metrologists call this “steering”, and they do it using separate equipment that defines the length of a second – we’ll return to that later.)
NPL then sends that data to the International Bureau of Weights and Measures (BIPM) in Paris. The timekeepers at the BIPM create an average of all those measurements, giving extra weight to the better-performing clocks. Further adjustments are made, and eventually this process spits out what’s called International Atomic Time (TAI – Temps Atomique International).
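The weighting idea can be sketched in a few lines of Python. Everything here – the offsets, the weights, the function name – is invented for illustration; the BIPM’s real averaging algorithm is far more sophisticated:

```python
# Toy sketch of combining several imperfect clocks into one timescale.
# Readings are each clock's offset from a common reference, in
# nanoseconds; more stable clocks get larger weights.

def weighted_timescale(offsets_ns, weights):
    """Return the weighted-average offset of a clock ensemble."""
    total = sum(weights)
    return sum(o * w for o, w in zip(offsets_ns, weights)) / total

# Three hypothetical masers, each a few nanoseconds adrift:
offsets = [12.4, -3.1, 5.0]
weights = [0.5, 0.3, 0.2]  # the steadiest clock counts the most

print(weighted_timescale(offsets, weights))  # ~6.27 ns
```

The ensemble average is more stable than any single clock in it, which is the whole point of pooling hundreds of clocks worldwide.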
Once a month, the BIPM sends out TAI in an extremely important document called “Circular-T”. This document allows national laboratories to steer their clocks again and, crucially, to distribute a precise time to the industries that need it. For the UK, that’s NPL’s job; in the US it’s the National Institute of Standards and Technology, and there are many more around the world. Circular-T is essentially the modern-day equivalent of the dropping ball at Greenwich.
While most people don’t need to know the time down to the nanosecond, many industries and technologies do. “Satellite navigation is probably one of the most ubiquitous high accuracy requirements but there are others,” says metrologist Patrick Gill at NPL. “Communication synchronisation, energy distribution and financial trading all require high precision time.” New technologies also bring additional demands: the 5G network is built on precise synchronisation, for example, as is the navigation technology guiding autonomous vehicles.
The thing is, though, TAI is still a construct of a hypothetical “true” clock time: a measurement that the world is merely agreeing to keep to. It’s not just that it’s a weighted average of many different atomic clocks, each one giving slightly different readouts. There’s another reason, and it boils down to a fundamental question: what exactly is a second? Over the years, the definition of this SI unit has changed – and therefore so has our definition of time. What’s more, it could change once again soon.
Redefining the second
It used to be that the second was defined as 1/86,400 of the mean solar day – the average time it takes for the Sun to return to the same point in the sky, roughly 24 hours. In other words, it was based on the Earth’s rotation, which we now know is irregular. The second, by this definition, would therefore have been longer in 1900 than it was in 1930, when the planet’s average rotation was faster. (Metrologists once had a similar problem with the kilogram: it was based on a block of metal held in a vault in Paris, but its mass would inexplicably change over time, and therefore so did everyone’s definition of the kilogram.)
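The arithmetic behind that old definition is simple to check, and it also shows how a wandering day length dragged the second around with it. The 4 ms figure below is hypothetical, chosen only to make the effect visible:

```python
# Where 1/86,400 comes from: 24 hours x 60 minutes x 60 seconds.
seconds_per_day = 24 * 60 * 60
print(seconds_per_day)  # 86400

# Under the old definition the second stretched with the day. If the
# mean solar day were (hypothetically) 4 ms longer than nominal, each
# "solar" second would be slightly longer than a fixed SI second:
longer_day = 86400 + 0.004      # day length, in SI seconds
old_second = longer_day / 86400

print(old_second)  # just over 1, by roughly 46 parts per billion
```

A discrepancy of tens of parts per billion sounds tiny, but it is enormous by the standards of modern atomic clocks.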
Midway through the 20th Century, metrologists decided that this would not do. So, they created a new definition for time. In 1967, it was decided that the second should instead be based on a fixed numerical value of the frequency of the unperturbed caesium ground-state hyperfine transition. “It’s a bit of a mouthful,” admits Gill. So, what does it mean?
Fundamentally, it’s just another periodic, repeating process – the basis for all timekeeping. If you bathe caesium atoms in microwaves, they absorb and re-emit electromagnetic radiation at a specific frequency that depends on the energy levels within the atom. By measuring this frequency – like counting pendulum swings – you can measure time’s passage.
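Since 1967 the second has been fixed at exactly 9,192,631,770 cycles of that caesium radiation, so “measuring time’s passage” really is a matter of counting. A minimal sketch – the function is hypothetical, but the constant is the real SI value:

```python
# The SI second: exactly 9,192,631,770 periods of the caesium
# hyperfine transition radiation. Counting cycles is keeping time.
CAESIUM_HZ = 9_192_631_770

def cycles_to_seconds(cycle_count):
    """Convert a tally of caesium oscillations into elapsed seconds."""
    return cycle_count / CAESIUM_HZ

# One second has passed once the counter hits exactly that many cycles:
print(cycles_to_seconds(9_192_631_770))  # 1.0
```

In a real caesium standard the “counter” is electronics locked to the microwave signal, but the principle is just this division.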
The scientists at NPL do this with what’s called a caesium fountain. “We use light to toss the atoms up in the air by about half a metre and they fall back under gravity. You can then interrogate that fountain using tunable microwaves,” explains Gill. The fountain setup is necessary because “you want to make them as unperturbed as possible. If you’re holding the atoms by some other means, like electrically, or using light to hold them, that will change your frequency.”
This definition was chosen because caesium is a reliable element – it has only one stable isotope, caesium-133, so virtually all atoms in a sample respond to electromagnetic radiation in the same way. Also, in the 20th Century, microwave frequencies could be measured more accurately and reliably than higher frequencies on the electromagnetic spectrum. It’s perhaps analogous to the way you can measure your own heartbeat with a stopwatch, but need more advanced technology to measure the frequency of a fly’s wingbeats.
For decades, this definition has held fast. “That’s very good because that means the standard isn’t changing every five minutes, which is important in metrology,” says Gill. And it’s used by NPL and BIPM to underpin the calculations on documents like Circular-T.
However, as science has advanced – and as ever more accurate time is required by new technologies – metrologists have begun to contemplate a new definition for the second. It won’t happen overnight – perhaps in the 2030s – but it would mark the biggest change to shared timekeeping since the 1960s.
“Even when the second was defined in terms of this microwave transition in caesium, it was already understood that you could make a better clock by going to an optical frequency,” explains physicist Anne Curtis at NPL. “Optical frequencies oscillate much, much faster, in the hundreds of terahertz. Hundreds of trillions of oscillations per second.”
Why is higher frequency better? “The way you can think about why that matters is to think about a ruler with a finite number of lines,” explains Curtis. So, on a standard ruler, the millimetres are marked, but not the micrometres, for instance. “If you increase the number of lines by four orders of magnitude, you can obviously measure much more precisely.”
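Curtis’s ruler analogy can be put in numbers. The caesium frequency below is the real SI value; the optical frequency is roughly that of the strontium transition used in some optical clocks, included here only as a representative figure:

```python
# Comparing the "tick rate" of microwave and optical standards.
caesium_hz = 9_192_631_770  # the SI microwave standard (exact)
optical_hz = 4.29e14        # roughly a strontium optical transition

# Tens of thousands more ticks per second -- about four orders of
# magnitude, the extra "lines on the ruler" Curtis describes.
print(optical_hz / caesium_hz)  # ~46,700

# Each optical cycle slices time far more finely than a microwave one:
print(1 / optical_hz)  # one optical cycle lasts ~2.3 femtoseconds
```

More lines on the ruler means a one-cycle counting error costs a far smaller slice of time, which is why optical clocks can outperform caesium fountains.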
So, at laboratories like NPL, scientists are now experimenting with new optical technology, with the hope that within the next decade or so, the second will get a new definition.
Much more testing is needed first though. “You need to create a definition that’s usable, that’s practical, and realisable for all of the different national metrology labs around the world,” says Curtis. “So it can’t be just some bespoke thing that only one group can do. And if they do it really well, it has to be something that we can universally call a redefinition.”
Time as a construct
So what should the rest of us make of all this? For one, it illustrates an extraordinary truth: there is no clock on Earth that can ever be perfectly stable or run at exactly the right rate. This was true when people used sundials, and it’s still true today – even with atomic timekeeping.
The second, for instance, is defined according to the technology we have available, and what a group of metrologists charged with making the decision choose it to be. Atomic clocks, no matter how accurate, still need “steering”. And when metrologists do things like add leap seconds to the timescale, they are adjusting time to human needs: to make sure some things stay the same, like the enjoyment of the sunrise in the morning.
Clock time is what we agree; it’s not the true time.
However, this agreement is a necessity for living and working within modern societies. If we went back to the days when all time was defined locally, many of our technologies would stop working, trains would crash, and financial markets would collapse. Like it or not, the world is built on clock time.
It can be illuminating, though, to consider what the foundations of this construct actually are. When you think about time like a metrologist does, time becomes something different.
Back at NPL, as I read the “do not touch the maser” sign, I ask one of the scientists showing me around if he himself is a good timekeeper: is he personally punctual, for example? “Oh, I only think in nanoseconds,” he replies.