Wind gusts were reaching 125 mph in Boulder County, if anyone’s curious. A lot of power was shut off preemptively to prevent downed power lines from starting wildfires. Energy providers gave locals advance warning. Shame that NIST’s backup generator failed, though.
The disaster plan is to have a few dozen stratum 1 servers spread around the world, each connected to a distinct primary atomic clock, so that a catastrophic disaster would need to take down the global internet itself before all servers became unreachable.
The failure of a single such server is far from a disaster.
And the disaster plan for the disaster plan is to realize that it isn't that important at the human-level to have a clock meticulously set to correspond to other meticulously-set clocks, and that every attempt to force rigid timekeeping on humans is to try to make humans work more like machines rather than to make machines work more like humans.
I really, really can't get behind this sentiment. Having a reliable, accurate timekeeping mechanism doesn't seem like an outlandish thing to want to maintain. Timekeeping has been an important mechanism for humans for as long as recorded history. I don't understand the wisdom of shooting down systems established to make that better, even if the direct applicability to a single human's life is remote. We are all part of a huge, interconnected system whether we like it or not, and accurate, synchronized timekeeping across the world does not sound nefarious to me.
> Timekeeping has been an important mechanism for humans for as long as recorded history.
And for 99% of that history, Noon was when the sun was half-way through its daily arc at whatever point on Earth one happened to inhabit. The ownership class are the ones who invented things like time zones to stop their trains from running into each other, and NTP is just the latest and most-pervasive-and-invasive evolution of that same inhuman mindset.
From a privacy point of view, constant NTP requests are right up there alongside weather apps and software telemetry for “things which announce everyone's computers to the global spy apparatus”, feeding the Palantirs of the world to be able to directly locate you as an individual if need be.
oh... no, not really, no. The world needs GPS, so, yeah. This is not like Scrooge McDuck telling you to be at work on time; Scrooge still has a windup watch.
If access to the site is unsafe and thus the site is closed, not having access seems reasonable.
Time services are available from other locations. That's the disaster plan. I'm sure there will be some negative consequences from this downtime, especially if all the Boulder reference time sources lose power, but disaster plans mitigate negative consequences, they can't eliminate them.
Utility power fails, automatic transfer switches fail, backup generators fail, building fires happen, etc. Sometimes the system has to be shut down.
Maybe this is the disaster plan: There's not a smouldering hole where NIST's Boulder facility used to be, and it will be operational again soon enough.
There's no present need for important, hard-to-replace sciencey-dudes to go into the shop (which is probably cold and dark, and may have other problems that make it unsafe: it's deliberately closed) to futz around with the time machines.
We still have other NTP clocks. Spooky-accurate clocks that the public can get to, even, like just up the road at NIST in Fort Collins (where WWVB lives, and which is currently up), and in Maryland.
This is just one set.
And beyond that, we've also got clocks in GPS satellites orbiting, and a whole world of low-stratum NTP servers that distribute that time on the network. (I have one such GPS-backed NTP server on the shelf behind me; there's not much to it.)
And the orbital GPS clocks are controlled by the US Navy, not NIST.
So there's redundancy in distribution, and also control, and some of the clocks aren't even on the Earth.
Some people may be bitten by this if their systems rely on only one NTP server, or only on the subset of them that are down.
And if we're following section 3.2 of RFC 8633 and using multiple diverse NTP sources for our important stuff, then this event (while certainly interesting!) is not presently an issue at all.
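For reference, RFC 8633's advice maps to something like this in a chrony.conf (the server names here are just illustrative picks of independent operators, not an endorsement):

```conf
# Four sources run by four independent operators, so losing any one
# still leaves three for falseticker detection (RFC 8633, section 3.2).
server time.nist.gov       iburst
server time.cloudflare.com iburst
server ptbtime1.ptb.de     iburst
server 0.pool.ntp.org      iburst
```

With a setup like this, the entire Boulder outage shows up as one source marked unreachable and nothing else.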
There are many backup clocks/clusters that NIST uses as redundancies all around Boulder too, no need to even go up to Fort Collins. As in, NIST has fiber to a few at CU and a few commercial companies, last I checked. They're used in cases just like this one.
Fun facts about The clock:
You can't put anything in the room or take anything out. That's how sensitive the clock is.
The room is just filled with asbestos.
The actual port for the actual clock, the little metal thingy that is going buzz, buzz, buzz with voltage every second on the dot? Yeah, that little port isn't actually hooked up to anything, as again, it's so sensitive (impedance matching). So they use the other ports on the card for actual data transfer to the rest of the world. They do the adjustments so it's all fine in the end. But you have to define something as the second, and that little unused port is it.
You can take a few pictures in the cramped little room, but you can't linger, as again, just your extra mass and gravity affects things fairly quickly.
If there are more questions about time and timekeeping in general, go ahead and ask, though I'll probably get back to them a bit later today.
I'm the Manager of the Computing group at JILA at CU, where utcnist*.colorado.edu used to be housed. Those machines were, for years, consistently the highest bandwidth usage computers on campus.
Unfortunately, the HP cesium clock that backed the utcnist systems failed a few weeks ago, so they're offline. I believe the plan is to decommission those servers anyway - NIST doesn't even list them on the NTP status page anymore, and Judah Levine has retired (though he still comes in frequently). Judah told me in the past that the typical plan in this situation is that you reference a spare HP clock with the clock at NIST, then drive it over to JILA backed by some sort of battery and put it in the rack, then send in the broken one for refurb (~$20k-$40k; new box is closer to $75k). The same is true for the WWVB station, should its clocks fail.
There is fiber that connects NIST to CU (it's part of the BRAN - Boulder Research and Administration Network). Typically that's used when comparing some of the new clocks at JILA (like Jun Ye's strontium clock) to NIST's reference. Fun fact: Some years back the group was noticing loss due to the fiber couplers in various closets between JILA & NIST... so they went to the closets and directly spliced the fibers to each other. It's now one single strand of fiber between JILA & NIST Boulder.
That fiber wasn't connected to the clock that backed utcnist though. utcnist's clock was a commercial cesium clock box from HP that was also fed by GPS. This setup was not particularly sensitive to people being in the room or anything.
Another fun fact: utcnist3 was an FPGA developed in-house to respond to NTP traffic. Super cool project, though I didn't have anything to do with it, haha.
Now if the (otherwise very kind) guy in charge of the Bureau international des poids et mesures at Sèvres who did not let me have a look at the reference for the kilogram and meter could change his mind, I would appreciate it. For a physicist this is kinda like a cathedral.
>The actual port for the actual clock, the little metal thingy that is going buzz, buzz, buzz with voltage every second on the dot? Yeah, that little port isn't actually hooked up to anything, as again, it's so sensitive (impedance matching). So they use the other ports on the card for actual data transfer to the rest of the world.
Can you restate this part in full technical jargon along with more detail? I'm having a hard time following it
As you can see, the room is clearly not filled with asbestos. Furthermore, the claim is absurd on its face. Asbestos was banned in the U.S. in March 2024 [1] and the clock was commissioned in May 2025.
The rest of the claims are equally questionable. For example:
> The actual port for the actual clock ... isn't actually hooked up to anything ... they use the other ports on the card for actual data transfer
It's hard to make heads or tails of this, but if you read the technical description of the clock you will see that by the time you get to anything in the system that could reasonably be described as a "card" with "ports" you are so far from the business end of the clock that nothing you do could plausibly have an impact on its operation.
> You can't put anything in the room or take anything out. That's how sensitive the clock is.
This claim is also easily debunked using the formula for gravitational time dilation [2]. The accuracy of the clock is ~10^-16. Calculating the mass of an object 1m away from the clock that would produce this effect is left as an exercise, but it's a lot more than the mass of a human. To get a rough idea, the relativistic time dilation on the surface of the earth is <100 μs/day [3]. That is huge by atomic clock standards, but that is the result of 10^24kg of mass. A human is 20 orders of magnitude lighter.
Agreed the stated claims don't seem to make much sense. Using a point mass 1 meter away and (G*M)/(r*c^2) I'm getting that you'd have to stand next to the clock for ~61 years to cause a time dilation due to gravity exceeding 10^-16 seconds.
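To make that concrete, here's the same back-of-envelope in a few lines of Python (a 70 kg person, 1 m away, weak-field approximation; all the inputs are assumptions for illustration):

```python
# How long would a person standing 1 m from the clock have to linger
# before gravitational time dilation accumulates to the clock's
# ~1e-16 accuracy floor? Weak-field formula: dt/t ~ G*M / (r*c^2).

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M = 70.0            # assumed mass of a person, kg
r = 1.0             # distance from the clock, m
c = 299_792_458.0   # speed of light, m/s

fractional_shift = G * M / (r * c**2)   # dimensionless, ~5e-26

target = 1e-16                          # accumulated error, seconds
seconds_needed = target / fractional_shift
years_needed = seconds_needed / (365.25 * 86400)

print(f"fractional shift: {fractional_shift:.2e}")
print(f"years to accumulate 1e-16 s: {years_needed:.0f}")  # ~61 years
```

So the "~61 years" figure above checks out for a 70 kg visitor.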
> technically if you have 3 or more sources that would be caught; NTP protocol was designed for that eventuality
Either go with one clock in your NTPd/Chrony configuration, or ≥4.
Yes, if you have 3 they can triangulate, but if one goes offline now you have 2 with no tie-breaker. If you have (at least) 4 servers, then one can go away and triangulation / sanity-checking can still occur with the 3 remaining.
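The sanity-checking described above is essentially interval intersection, i.e. Marzullo's algorithm, which underlies NTP's source selection. A simplified sketch (not the real ntpd implementation), where each source reports an interval that should contain true time:

```python
def max_overlap(intervals):
    """Return (count, (lo, hi)): how many source intervals agree and
    where they overlap. Sources outside the winning interval are the
    falsetickers. Simplified from Marzullo's algorithm."""
    events = []
    for lo, hi in intervals:
        events.append((lo, +1))   # interval opens
        events.append((hi, -1))   # interval closes
    # Sort by position; at ties, process opens before closes so that
    # intervals touching at a point still count as overlapping.
    events.sort(key=lambda e: (e[0], -e[1]))
    best = count = 0
    best_lo = best_hi = None
    for i, (x, step) in enumerate(events):
        count += step
        if count > best:
            best = count
            best_lo = x
            best_hi = events[i + 1][0]  # overlap ends at the next event
    return best, (best_lo, best_hi)

# Three sources: two agree around 10.15, one falseticker near 12.
sources = [(10.0, 10.2), (10.1, 10.3), (11.9, 12.1)]
agree, (lo, hi) = max_overlap(sources)
print(agree, lo, hi)  # 2 sources overlap in [10.1, 10.2]
```

With three sources the majority of two outvotes the falseticker; drop one good source and you're left with a 1-vs-1 tie, which is the failure mode the parent describes.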
Sure, but not needing a failure to cascade to yet another failsafe is still a good idea. After all, all software has bugs, and all networks have configuration errors.
If your application is so critical that NTP timing loss causes disaster, and your holdover fails in less than a day, and you aren't generating your own time via GPS, you are incompetent, full stop.
Notably, we had the Marshall Fire here 4 years ago, and Xcel recently settled for $680M for their role in it. So they're probably pretty keen not to be on the hook again.
I guess that explains why they had no qualms shutting down half of Boulder's power with a vague time horizon. After losing everything in my fridge, though, they finally turned it back on today.
Indeed. Losing the contents of (lots of) fridges is cheaper, as a whole, than incidentally burning the countryside. We all ultimately pay for the result no matter what, so that seems like a reasonably-sensible bet.
On the fridge itself: You may find that the contents are insured against power outages.
As an anecdote, my (completely not-special) homeowner's insurance didn't protest at all about writing a check for the contents of my fridge and freezer when I asked about that, after my house was without power for a couple of weeks following the 2008 derecho. This rather small claim didn't affect my rate in any way that I could perceive.
And to digress a bit: I have a chest freezer. These days I fill up the extra space in the freezer with water -- with "single-use" plastic containers (water bottles, milk jugs) that would normally be landfilled or recycled.
This does a couple of things: On normal days, it increases the thermal mass of the freezer, which improves the compressor's cycle times in ways that tend to make it happier over time. In the abnormal event of a long power outage, it also provides a source of ice chilled to 0°F/−18°C that I can relocate into the fridge (or into a cooler, perhaps for transport) to keep cold stuff cold.
It's not a long-term solution, but it'll help ensure that I've got a fairly normal supply of fresh food to eat for a couple of days if the power dips. And it's pretty low-effort on my part. I've probably spent nearly as much effort writing about this system here just now as I have on implementing it.
tl;dr - the fire destroyed over 1,000 homes and killed two people. The local electrical utility, Xcel, was found to be a contributing cause: its power lines sparked during a strong wind storm. As a result, electrical utilities now cut power to affected areas during strong winds.
Of the various internet .+P, NTP is one I never learned about as a student, so now I'm looking at its web page [1] by its creator David L. Mills (1938-2024). I've found one video of him giving a retrospective of his extensive internet work; he talks about NTP at 34:51 [2] and later at 56:26 [3].
In [3] he mentions that one can use NTP to observe frequency deviations and use it as an early warning system for fire and AC failure. That really intrigues me. Can you actually? Has this ever been implemented?
Oscillators of all kinds are temperature dependent.
That's why the most stable ones are insulated and ovenized[1].
So an AC failure which would lead to higher room temperatures would lead to stronger or more frequent correction by the NTP client, as the local oscillator would drift more.
Not sure about the fire case though. I mean the same applies there but I'm not imaginative enough to think of a realistic scenario where NTP would be useful for averting a fire.
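The drift-as-sensor idea can actually be prototyped with data your NTP daemon already reports. A sketch assuming chrony's `chronyc tracking` output format (the "Frequency : X ppm slow/fast" line); the alert threshold is purely illustrative:

```python
import re

def parse_frequency_ppm(tracking_output):
    """Extract the local-oscillator frequency error (ppm) from
    `chronyc tracking` output. 'slow' means the clock runs behind;
    we report that as a negative value."""
    m = re.search(r"Frequency\s*:\s*([\d.]+)\s*ppm\s*(slow|fast)",
                  tracking_output)
    if not m:
        raise ValueError("no Frequency line found")
    ppm = float(m.group(1))
    return -ppm if m.group(2) == "slow" else ppm

def temperature_alert(freq_history, window=10, threshold_ppm=0.5):
    """Flag a sudden frequency shift versus the recent average --
    a crude proxy for an abrupt room-temperature change (e.g. an
    AC failure). Threshold is an assumption, not a tested value."""
    if len(freq_history) <= window:
        return False
    baseline = sum(freq_history[-window - 1:-1]) / window
    return abs(freq_history[-1] - baseline) > threshold_ppm

sample = "Frequency       : 5.123 ppm slow\n"
print(parse_frequency_ppm(sample))  # -5.123
```

You'd sample `chronyc tracking` every few minutes and feed the parsed values into `temperature_alert`; whether it fires early enough to beat a smoke detector in the fire case is another question.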
I knew of some experiments in this space back in the late 1980s or early 1990s - but it was specifically with DECstation hardware that had terrible clocks (not used for alerting, just "this graphs nicely against temperature".) https://groups.csail.mit.edu/ana/Publications/PubPDFs/Greg.T... (PDF) 4.2.1 does talk about explaining local clock frequency changes with office temperature changes (because they overwhelm a clock-aging model) but it doesn't have graphs so perhaps they weren't clear enough to include (or just not relevant enough to Time Surveying.)
> Facility operators anticipated needing to shutdown the heat-exchange infrastructure providing air cooling to many parts of the building, including some internal networking closets. As a result, many of these too were preemptively shutdown with the result that our group lacks much of the monitoring and control capabilities we ordinarily have
Having a parallel low bandwidth, low power, low waste heat network infrastructure for this suddenly seems useful.
NIST campus status: Due to elevated fire risk and a power outage for the Boulder area, the DOC Boulder Labs campus is CLOSED on December 19 for onsite business and no public access is permitted; previously approved accesses are revoked.[1]
WWV still seems to be up, including voice phone access.
NIST Boulder has a recorded phone number for site status, and it says that as of December 20, the site is closed with no access.
NIST's main web site says they put status info on various social media accounts, but there's no announcement about this.
Being unfamiliar with it, it's hard to tell if this is a minor blip that happens all the time, or if it's potentially a major issue that could cause cascading errors equal to the hype of Y2K.
I regret to inform you that as a consequence of that sustained time travel, your mind and body will be slowly deteriorating and you’ll sooner or later end up dead.
I couldn't comment on the causal hazards but since time is currently having an outage they've got an improved shot at getting away with it. I say go for it.
Same for database transaction roll back and roll forward actions.
And most enterprises, including banks, use databases.
So by bad luck, you may get a couple of transactions reversed in order of time, such as a $20 debit incorrectly happening before a $10 credit, when your bank balance was only $10 prior to both those transactions. So your balance temporarily goes negative.
Now imagine if all those amounts were ten thousand times higher ...
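The reordering hazard is easy to reproduce: replay the same two transactions sorted by true time versus by skewed timestamps. A toy sketch using the exact figures from the example above:

```python
def replay(transactions):
    """Apply (timestamp, amount) transactions in timestamp order,
    returning the balance after each step."""
    balance, trail = 10.0, []   # starting balance from the example: $10
    for _, amount in sorted(transactions):
        balance += amount
        trail.append(balance)
    return trail

# True order: the $10 credit lands before the $20 debit.
true_order = [(100.0, +10.0), (101.0, -20.0)]
# Skewed clocks: the debit is stamped earlier than the credit.
skewed = [(100.0, -20.0), (101.0, +10.0)]

print(replay(true_order))  # [20.0, 0.0]  -- never negative
print(replay(skewed))      # [-10.0, 0.0] -- transient negative balance
```

Same two transactions, same final balance, but the skewed replay passes through a state the bank's invariants say should never exist.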
Google has their own fleet of atomic clocks and time servers. So does AWS. So does Microsoft. So does Ubuntu. They're not going to drift enough for months to cause trouble. So the Internet can ride through this, mostly.
The main problem will be services that assume at least one of the NIST time servers is up. Somewhere, there's going to be something that won't work right when all the NIST NTP servers are down. But what?
Ubuntu using atomic clocks would surprise me. Sure they could, but it's not obvious to me why they would spend $$$$ on such. More plausible to me seems that they would be using GPSDO as reference clocks (in this context, about as good as your own atomic clock), iff they were running their own time servers. Google finds only that they are using servers from the NTP Pool Project, which will be using a variety of reference clocks.
If you have information on what they actually are using internally, please share.
I think people have a wrong idea of what a modern atomic clock looks like. These are readily available commercially; Microchip, for example, will happily sell you hydrogen, cesium, or rubidium atomic clocks. Hydrogen masers are rather unwieldy, but you can get a rubidium clock in a 1U format, and cesium ones are not much bigger. I think their cesium frequency standards come from a former HP business they acquired.
It is also important to realize that an atomic clock will only give you a steady pulse. It will count seconds for you, and do so very accurately, but that is not the same as knowing what time it is.
If you get a rubidium clock for your garage, you can sync it up with GPS to get an accurate-enough clock for your hobby NTP project, but large research institutions and their expensive contraptions are more elaborate to set up.
Whoa, hold on a sec. That's not how these clocks are actually used, though.
It's a huge huge huge misconception that you can just plunk down an "atomic clock", discipline an NTP server with it and get perfect wallclock time out of it forever. That is just not how it works. Two hydrogen masers sitting next to each other will drift. Two globally distributed networks of hydrogen masers will drift. They cannot NOT drift. The universe just be that way.
UTC is by definition a consensus; there is no clock in the entire world that one could say is exactly tracking it.
Google probably has the gear and the global distribution that they could probably keep pretty close over 30-60 days, but they are assuredly not trying to keep their own independent time standard. Their goal is to keep events correlated on their own network, and for that they just need good internal distribution and consensus, and they are at the point where doing that internally makes sense. But this is the same problem on any size network.
Honestly, for just NTP, I've never really seen evidence that anything better than a good GPS-disciplined TCXO even matters. The reason they offer these oscillators in such devices is because they usually do additional duties like running PTP or distributing a local 10 MHz reference, where their specific performance characteristics are more useful. Rubidium, for instance, is very stable at short timescales but has awful long-term stability.
> Google probably has the gear and the global distribution that they could probably keep pretty close over 30-60 days, but they are assuredly not trying to keep their own independent time standard.
Sure, but F2 is a bit more accurate: "As of February 2016 the IT-CsF2 cesium fountain clock started reporting a uB of 1.7 × 10−16 in the BIPM reports of evaluation of primary frequency standards." ( from https://web.archive.org/web/20220121090046/ftp://ftp2.bipm.o... )
Spanner depends on having a time source with bounded error to maintain consistency. Google accomplishes this by having GPS and atomic clocks in several datacenters.
And more importantly, the tighter the time bound, the higher the performance, so more accurate clocks easily pay for themselves in other saved infrastructure costs to service the same number of users.
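A toy model makes the tradeoff visible. The interval API and commit-wait below follow Spanner's published TrueTime design; the `EPSILON` value is just an illustrative stand-in for Google's actual clock-uncertainty bound:

```python
import time
from dataclasses import dataclass

# Toy TrueTime: now() returns an interval guaranteed to contain true
# time. EPSILON is an assumed bound, not Spanner's real figure.
EPSILON = 0.005  # worst-case clock uncertainty, seconds (illustrative)

@dataclass
class TTInterval:
    earliest: float
    latest: float

def tt_now():
    t = time.time()
    return TTInterval(t - EPSILON, t + EPSILON)

def commit_wait(commit_ts):
    """Commit-wait: block until the uncertainty interval has wholly
    passed commit_ts, so any later transaction anywhere sees a strictly
    larger timestamp. Tighter EPSILON => shorter wait => higher
    throughput -- which is why better clocks pay for themselves."""
    while tt_now().earliest <= commit_ts:
        time.sleep(0.001)

ts = tt_now().latest   # assign a commit timestamp
commit_wait(ts)        # waits roughly 2 * EPSILON
assert tt_now().earliest > ts
```

Every transaction eats one commit-wait of about 2×EPSILON, so halving the clock uncertainty directly halves that latency tax across the whole fleet.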
Having clocks synchronized between your servers is extremely useful. For example, having a guarantee that the timestamp of arrival of a packet (measured by the clock on the destination) is ALWAYS bigger than the timestamp recorded by the sender is a huge win, especially for things like database scaling.
For this, though, you need to go beyond NTP into PTP, which is still usually based on GPS time and atomic clocks.
It's actually interesting to think about what UTC really means; there seems to be no absolute source of truth [0]. I guess the worry is not so much about the NTP servers (for which people should configure failovers anyway) but the clocks themselves.
Could you define an absolute source of truth based on extrinsic features? Something like taking an intrinsic time from atomic sources, pegged to an astronomical or celestial event; then a predicted astronomical event would allow us to reconcile time in the future.
It might be difficult to generate enough resolution in measurable events that we can predict accurately enough? Like, I'm guessing the start of a transit or alignment event? Maybe something like predicting the time at which a laser pulse will be returnable from a lunar reflector -- if we can do the prediction accurately enough then we can re-establish time back to the current fixed scale.
I think I'm addressing an event that won't ever happen (all precise and accurate time sources are lost/perturbed), and if it does it won't be important to re-sync in this way. But you know...
There's a lot of focus in this thread on the atomic clocks but in most datacenters, they're not actually that important and I'm dubious that the hyperscalers actually maintain a "fleet" of them, in the sense that there are hundreds or thousands of these clocks in their datacenters.
The ultimate goal is usually to have a bunch of computers all around the world run synchronised to one clock, within some very small error bound. This enables fancy things like [0].
Usually, this is achieved by having some master clock(s) for each datacenter, which distribute time to other servers using something like NTP or PTP. These clocks, like any other clock, need two things to be useful: an oscillator, to provide ticks, and something by which to set the clock.
In standard off-the-shelf hardware, like the Intel E810 network card, you'll have an OCXO, like [1], with a GPS module. The OCXO provides the ticks; the GPS module provides a timestamp to set the clock with and a pulse for when to set it.
As long as you have GPS reception, even this hardware is extremely accurate. The GPS module provides a new timestamp, potentially accurate to within single-digit nanoseconds ([2] datasheet), every second. These timestamps can be used to adjust the oscillator and/or how its ticks are interpreted, such that you maintain accuracy between the timestamps from GPS.
The problem comes when you lose GPS. Once this happens, you become dependent on the accuracy of the oscillator. An OCXO like [1] can hold to within 1 µs accuracy over 4 hours without any corrections, but if you need better than that (either more time below 1 µs, or better than 1 µs over the same time), you need a better oscillator.
The best oscillators are atomic oscillators. [2] for example can maintain better than 200ns accuracy over 24h.
So for a datacenter application, I think the main reason for an atomic clock is simply retaining extreme accuracy during an outage. For quite reasonable accuracy, a more affordable OCXO works perfectly well.
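The holdover tradeoff reduces to simple arithmetic: the time error accrued is roughly the average fractional frequency error times the holdover duration. A quick check using the figures quoted above (1 µs over 4 h for the OCXO, 200 ns over 24 h for the atomic option):

```python
def required_stability(error_budget_s, holdover_s):
    """Average fractional frequency error an oscillator must hold to
    stay within error_budget_s over holdover_s of free-running."""
    return error_budget_s / holdover_s

# OCXO spec quoted above: 1 microsecond over 4 hours.
ocxo = required_stability(1e-6, 4 * 3600)
# Atomic oscillator spec quoted above: 200 ns over 24 hours.
atomic = required_stability(200e-9, 24 * 3600)

print(f"OCXO needs   ~{ocxo:.1e}")    # ~6.9e-11
print(f"atomic needs ~{atomic:.1e}")  # ~2.3e-12, roughly 30x tighter
```

That ~30x gap in required stability, sustained over days rather than hours, is the whole case for atomic holdover.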
I don't know about all hyperscalers, but I have knowledge of one of them that has a large enough fleet of atomic frequency standards to warrant dedicated engineering. Several dozen frequency standards at least, possibly low hundreds. Definitely not one per machine, but also not just one per datacenter.
As you say, the goal is to keep the system clocks on the server fleet tightly aligned, to enable things like TrueTime. But also to have sufficient redundancy and long enough holdover in the absence of GNSS (usually due to hardware or firmware failure on the GNSS receivers) that the likelihood of violating the SLA on global time uncertainty is vanishingly small.
The "global" part is what pushes towards having higher end frequency standards, they want to be able to freewheel for O(days) while maintaining low global uncertainty. Drifting a little from external timescales in that scenario is fine, as long as all their machines drift together as an ensemble.
The deployment I know of was originally rubidium frequency standards disciplined by GNSS, but later that got upgraded to cesium standards to increase accuracy and holdover performance. Likely using an "industrial grade" cesium standard that's fairly readily available, very good but not in the same league as the stuff NIST operates.
GPS satellites have their own atomic clocks. They're synchronized to clocks at the GPS control center at Schriever Space Force Base, Colorado, formerly Falcon AFB.
They in turn synchronize to NIST in Boulder, Colorado.
GPS has a lot of ground infrastructure checking on the satellites, and backup control centers. GPS should continue to work fine, even if there's some absolute error vs. NIST. Unless there have been layoffs.
> There's a lot of focus in this thread on the atomic clocks but in most datacenters, they're not actually that important and I'm dubious that the hyperscalers actually maintain a "fleet" of them, in the sense that there are hundreds or thousands of these clocks in their datacenters.
I mean, fleets come in all sizes; but if you put one atomic reference in each AZ of each datacenter, there's a fleet. Maybe the references aren't great at distributing time, so you add a few NTP distributors per datacenter too and your fleet is a little bigger. Google's got 42 regions in GCP, so they've got a case for hundreds of machines for time (plus they've invested in spanner which has some pretty strict needs); other clouds are likely similar.
My understanding is that people who connect specifically to the NIST ensemble in Boulder (often via a direct fiber hookup rather than using the internet) are doing so because they are running a scientific experiment that relies on that specific clock. When your use case is sensitive enough, it's not directly interchangeable with other clocks.
Everyone else is already connecting to load balanced services that rotate through many servers, or have set up their own load balancing / fallbacks. The mistakenly hardcoded configurations should probably be shaken loose anyways.
If you use a general-purpose hostname like time.nist.gov, that should resolve to an operational server, and it makes sense to adjust it during an incident. If you use a specific server hostname like time-a-b.nist.gov, that should resolve to the specific server, and you're expected to have multiple hosts specified; it doesn't make sense to adjust it during an incident, IMHO. You wanted Boulder, you're getting Boulder, faults and all.
If NIST NTP goes down, the internet doesn’t go down. But atomic clocks drifting does upset many scientific experiments, which would effectively go down for the duration of the outage.
This is the reason GP listed out all the alternative robust NTP services that are GPS disciplined, freely available, and used as redundant sources by any responsible timekeeper.
What atomic clocks are disciplined by NTP anyway? Local GPS disciplining is the standard. If you're using NTP you don't need precision or accuracy in your timekeeping.
could you list 3 things that you think are more important than the internet? (I know the internet is going to be fine; I just want to understand what you think ranks higher globally...)
Mostly scientific stuff like astronomical observations — e.g. did this event observed at one telescope coincide with neutrinos detected at this other observatory.
Note I didn’t say they are more important than the Internet. That’s a value judgement in any case. I said that NIST stratum-1 NTP servers are more important to these use cases than they are to the Internet.
I wonder why we bothered building GPS signal waveguides into the bottom of a mine, then. Clearly we should have consulted the experts of Hacker News first.
I'm not even sure why you're trying to argue this. It's well established that Time over Fiber is 1-2 orders of magnitude more accurate and precise than GNSS time. Fiber time is also immune to many of the numerous sources of interference GNSS systems encounter, which anyone who's done serious timekeeping will be well acquainted with.
Trying to argue that neutrino experiments use GPS time, because they do?
I’m sure synchronising all the worlds detectors over direct fiber links would… work, but, they aren’t.
Unless you are trying to argue internal synchronisation in which case, obviously, but that has absolutely zero to do with losing NTP for a day, the topic of conversation.
The deployments are still obviously limited, but this is something you can straight up buy if you're near a NIST facility [0]. I believe the longest existing link is NJ<->Chicago, which is used for HFT between the exchanges.
I doubt that very much. GPS time integrity is a big deal in many very important applications -- not the least of which is GPS itself -- and is treated as such.
Yes, an individual fiber distribution system can be much more accurate than GNSS time, but availability is what actually matters. Five nines at USNO would get somebody fired.
The ability for humankind to communicate across the entire globe at nearly 1/4 of the speed of light has drastically accelerated our technological advancement. There is no doubt that the internet is a HUGE addition to society.
It's not super important when compared to basic needs like plumbing, food, electricity, medical assistance and other silly things we take for granted but are heavily dependent on. We all saw what happened to hospitals during the early stages of the COVID pandemic; we had plenty of internet and electricity but were struggling on the medical part. That was quite bad... I'm not sure if it's any worse if an entire country/continent lost access to the Internet. Quite a lot of our core infrastructure components in society rely on this. And a fair bit of it relies on a common understanding of what time "now" is.
The satellite clocks are designed to run autonomously for a few days without noticeable degradation, and up to a few weeks with defined levels of inaccuracy, but they are normally adjusted once a day by the ground stations based on the timescale maintained by the USNO. That, in turn, uses an ensemble of H-masers.
NIST maintains several time standards. Gaithersburg MD is still up and I assume Hawaii is as well. Other than potential damage to equipment from loss of power (turbo molecular vacuum pumps and oil diffusion pumps might end up failing in interesting ways if not shut down properly) it will just take some time for the clocks to be recalibrated against the other NIST standards.
No noteworthy impact at all. The NTP network has hundreds to thousands of redundant servers and hundreds of redundant reference clocks.
The network will route around the damage with no real effects. Maybe a few microseconds of jitter as you have to ask a more distant server for the time.
Clocks do drift; seconds a week is definitely possible. Internal clocks in electronic devices vary in quality, and the cheaper the clock, the more it drifts. Small, cheap microcontrollers can drift seconds per day.
I have an extremely cheap and extremely low power automatic cat feeder; it’s been on 2 D batteries for 18 months. I just reset it after it had drifted 19 minutes, so about 1 minute a month, or 15 seconds a week!
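That drift actually pins down the oscillator quality, since fractional error is just drift divided by elapsed time. A quick check with the figures above:

```python
drift_s = 19 * 60                  # 19 minutes of drift, in seconds
elapsed_s = 1.5 * 365.25 * 86400   # 18 months, in seconds

ppm = drift_s / elapsed_s * 1e6
print(f"{ppm:.0f} ppm")  # ~24 ppm
```

~24 ppm is right in the typical spec range for a cheap 32 kHz watch crystal, so the feeder is probably performing exactly as built.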
Can anybody speak to the current best practices around running underground power lines? I see these types of articles about above-ground distribution systems from time-to-time, particularly in California. I feel lucky that my area has underground power, but that was installed back in the 1980s. Would it be prohibitively expensive for Boulder’s utility provider to move to underground distribution? I can’t help but think it could be worth the cost to reduce wildfire risk and offer more reliable service.
Think of it like this: overhead power lines require you to dig a 5-7’ deep hole that’s 2’ in diameter every 90’.
Underground power supplied through cable requires you to bury the cable minimum 3’ in the ground in rigid ductwork the entire 90’. Any time that cable runs under a roadway that ductwork needs to be encased in concrete. In urban and semi urban areas you also compete with other buried infrastructure for space - sewer, city/municipal infrastructure, gas, electrical transmission, etc.
While underground distribution systems are less prone to interruption from bad weather it depends on the circuit design. If the underground portion of the circuit is fed from overhead power lines coming from the distribution substation you will still experience interruptions from faults on the overhead. These faults can also occur on overhead transmission circuits (the lines feeding the distribution substations and/or very large industrial customers).
Underground distribution comes at a cost premium compared to overhead distribution. It’s akin to the cost of building a picket fence vs installing a geothermal heating system for your home. This is why new sub divisions will commonly have underground cable installed as the entire neighborhood is being constructed - there’s no need to retrofit underground cable into an existing area and so the costs are lower and borne upfront.
It’s more cost effective for them to turn the power off as a storm rolls through, patrol, make repairs and reenergize then to move everything underground. Lost revenue during that period is a small fraction of the cost of taking an existing grid and rebuilding it underground. This is especially true for transmission circuits that are strung between steel towers over enormous distances.
Germany here; I've never heard of any issues with underground power (or phone) lines. Ultra-high-voltage transmission lines are above ground here, but no issues with those either.
FYI, this was posted a month ago when discussing thermal effects of clock drift. I thought it was quite an interesting view of what the WWVB location looks like:
This makes me wonder, if you take the average time of all wristwatches on the planet, accounting for timezones and throwing out outliers, how close would you get to NTP time?
And how many randomly chosen wristwatches would you need to get anything reasonable?
I have a hunch my Casio wristwatch is designed to run a bit too fast to make resetting the seconds easier.
Your averaging assumes manufacturers try to make their watches as accurate as possible for average conditions
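Out of curiosity, here's the toy version, assuming watch errors are unbiased and roughly normal, which, per the comments above, manufacturers may well violate:

```python
import random, statistics

random.seed(42)

def average_watch_error(n_watches, sigma_s=60.0, trim=0.1):
    """Mean error of n watches whose individual errors are ~N(0, sigma_s)
    seconds, after throwing out the most extreme `trim` fraction as outliers."""
    errors = sorted(random.gauss(0, sigma_s) for _ in range(n_watches))
    k = int(n_watches * trim / 2)
    kept = errors[k:n_watches - k] if k else errors
    return statistics.mean(kept)

# Standard error shrinks like sigma/sqrt(n): with a 60 s spread per watch,
# a few thousand watches gets the average to about a second of true time.
for n in (100, 3600, 1_000_000):
    print(n, round(average_watch_error(n), 3))
```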
I think this comment is referencing the government's recent announcement[0] to shut down the National Center for Atmospheric Research in Boulder. They do climate research at the Mesa Laboratory there.
It's open to the public for visits. They have a small science museum, offices, a library, etc. I highly recommend anyone with interest and opportunity to visit the mesa lab soon. It may not be open much longer. The view alone is worth the trip, and the building is cool too.
Residents and some businesses of Boulder have been without power since Tuesday. There was a fire a few years ago which caused 1,000 homes to burn down and the power company was found liable. They changed their approach. Then during the next high wind event, the power company preemptively cut power and businesses sued them for loss of revenue. Now the power company is playing it safe and turning off power to residents while keeping downtown businesses powered.
Maybe their generator failing was DOGE related, but wouldn’t have happened if state level shenanigans were better handled
We had some fun requesting a key for accessing NIST time servers. The process is (quoted from the website):
NIST currently offers this service free of charge. We require written requests to arrive by U.S. mail or fax containing:
Your organization’s name, physical address, fax number (if desired as a reply method).
One or more point-of-contact personnel or system operators authorized to receive key data and other correspondence: names, phone numbers, email addresses.
Up to four static IPv4 network addresses under the user’s control which will be allowed to use the unique key. By special arrangement, additional addresses or address ranges may be requested.
Desired hash function (“key type”). NIST currently supports MD5, SHA1, SHA256, and HMAC-SHA256. Please list any limitations your client software places on key values, if known: maximum length, characters used, or whether hexadecimal key representations are required. If you prefer, please share details about your client software or NTP appliance so we can anticipate key format issues.
Desired method for NIST’s reply: U.S. mail, fax, or a secure download service operated by Department of Commerce.
NIST will not use email for sending key data.
P.S. There actually seems to be an improvement over what they had a year ago: they added the "secure download service". Previously there was a note that nobody was assigned to actively monitor the mailbox, so if you didn't get a key, you were asked to email them so they could check.
There are two other sites for the time.nist.gov service, so it'll be okay.
Probably more interesting is how you get a tier 0 site back in sync - NIST rents out these cyberpunk looking units you can use to get your local frequency standards up to scratch for ~$700/month https://www.nist.gov/programs-projects/frequency-measurement...
Most high-availability networks use pool.ntp.org or vendor-specific pools (e.g., time.cloudflare.com, time.google.com, time.windows.com). These systems would automatically switch to a surviving peer in the pool.
Many data centers and telecom hubs use local GPS/GNSS-disciplined oscillators or atomic clocks and wouldn’t be affected.
Most laptops, smartphones, tablets, etc. would be accurate enough for days before drift affected things for the most part.
Kerberos typically requires clocks to be within 5 minutes to prevent replay attacks, so they’d probably be ok.
Sysadmins would need to update hardcoded NTP configurations to point to secondary servers.
If timestamps were REALLY off, TLS certificates might fail, but that’s highly unlikely.
Databases could be corrupted due to failure of transaction ordering.
Financial exchanges are often legally required to use time traceable to a national standard like UTC(NIST). A total failure of the NIST distribution layer could potentially trigger a suspension of electronic trading to maintain audit trail integrity.
Modern power grids use Synchrophasors that require microsecond-level precision for frequency monitoring. Losing the NIST reference would degrade the grid's ability to respond to load fluctuations, increasing the risk of cascading outages.
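For anyone curious how cheap it is to just ask another server, the protocol itself is tiny. A minimal SNTP (simplified NTP) query, assuming pool.ntp.org is reachable and skipping all the sanity checks a real client does:

```python
import socket, struct, time

NTP_EPOCH_OFFSET = 2_208_988_800  # seconds between the NTP epoch (1900) and Unix epoch (1970)

def ntp_to_unix(ntp_secs):
    """Convert an NTP timestamp seconds field to Unix time."""
    return ntp_secs - NTP_EPOCH_OFFSET

def sntp_time(server="pool.ntp.org", timeout=2.0):
    """Send one SNTP request and return the server's transmit time as Unix seconds."""
    packet = b"\x23" + 47 * b"\x00"  # LI=0, VN=4, Mode=3 (client); 48-byte request
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(packet, (server, 123))
        data, _ = s.recvfrom(48)
    ntp_secs = struct.unpack("!I", data[40:44])[0]  # transmit timestamp, seconds field
    return ntp_to_unix(ntp_secs)

if __name__ == "__main__":
    try:
        print(time.ctime(sntp_time()))
    except OSError as e:
        print("no network:", e)
```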
Great list! Just double-checked the CAT timekeeping requirements [1] and the requirement is NIST sync. So a subset of all UTC.
You don’t need to actually sync to NIST. I think most people PTP/PPS to a GPS-connected Grandmaster with high quality crystals.
But one must report deviations from NIST time, so CAT Reporters must track it.
I think you are right — if there is no NIST time signal then there is no properly auditable trading and thus no trading. MiFID has similar stuff but I am unfamiliar.
One of my favorite nerd possessions is my hand-signed letter from Judah Levine with my NIST Authenticated NTP key.
There are lots of Stratum 0 servers out there; basically anything with an atomic clock will do. They all count seconds independently from one another, all slowly diverging over time, with offset intervals being measured by mutual synchronization using a number of means (how this is done is interesting all by itself). Some atomic clocks are more accurate than others, and an ensemble of these is typically regarded as 'the' master clock.
Beyond this, as other commenters have said, anyone who is really dependent on having exact time (such as telcos, broadcasters, and those running global synchronized databases) should have their own atomic clock fleets. There are thousands and thousands of atomic clocks in these fleets worldwide. Moreover, GPS time, used by many to act as their time reference, is distributed by yet other means.
Nothing bad will happen, except to those who have deliberately made these specific Stratum 0 clocks their only reference time. Anyone who has either left their computer at its factory settings or has set up their NTP configuration in accordance with recommended settings will be unaffected by this.
It’s fine. The public pays for sci-fi clocks used by NIST and the Navy and we get shit latency over NTP and a WWVB signal that barely reaches a huge chunk of the country. CLOCKS WE PAID FOR. Jane Street gets lightning access to clocks and our pension managers get their NTP trades front run. NTP is a disgrace and an insult when it is working.
The failure of a single such server is far from a disaster.
But the stratum 1 time servers can shrug and route around the damage.
And for 99% of that history, Noon was when the sun was half-way through its daily arc at whatever point on Earth one happened to inhabit. The ownership class are the ones who invented things like time zones to stop their trains from running in to each other, and NTP is just the latest and most-pervasive-and-invasive evolution of that same inhuman mindset.
From a privacy point of view, constant NTP requests are right up there alongside weather apps and software telemetry for “things which announce everyone's computers to the global spy apparatus”, feeding the Palantirs of the world to be able to directly locate you as an individual if need be.
Time services are available from other locations. That's the disaster plan. I'm sure there will be some negative consequences from this downtime, especially if all the Boulder reference time sources lose power, but disaster plans mitigate negative consequences, they can't eliminate them.
Utility power fails, automatic transfer switches fail, backup generators fail, building fires happen, etc. Sometimes the system has to be shut down.
There's no present need for important hard-to-replace sciencey-dudes to go into the shop (which is probably both cold and dark, and may have other problems that make it unsafe: it's deliberately closed) to futz around with the time machines.
We still have other NTP clocks. Spooky-accurate clocks that the public can get to, even, like just up the road at NIST in Fort Collins (where WWVB lives, and which is currently up), and in Maryland.
This is just one set.
And beyond that, we've also got clocks in GPS satellites orbiting, and a whole world of low-stratum NTP servers that distribute that time on the network. (I have one such GPS-backed NTP server on the shelf behind me; there's not much to it.)
And the orbital GPS clocks are controlled by the US Navy, not NIST.
So there's redundancy in distribution, and also control, and some of the clocks aren't even on the Earth.
Some people may be bit by this if their systems rely on only one NTP server, or only on the subset of them that are down.
And if we're following section 3.2 of RFC 8633 and using multiple diverse NTP sources for our important stuff, then this event (while certainly interesting!) is not presently an issue at all.
Fun facts about The clock:
You can't put anything in the room or take anything out. That's how sensitive the clock is.
The room is just filled with asbestos.
The actual port for the actual clock, the little metal thingy that is going buzz, buzz, buzz with voltage every second on the dot? Yeah, that little port isn't actually hooked up to anything, as again, it's so sensitive (impedance matching). So they use the other ports on the card for actual data transfer to the rest of the world. They do the adjustments so it's all fine in the end. But you have to define something as the second, and that little unused port is it.
You can take a few pictures in the cramped little room, but you can't linger, as again, just your extra mass and gravity affects things fairly quickly.
If there are more questions about time and timekeeping in general, go ahead and ask, though I'll probably get back to them a bit later today.
Unfortunately, the HP cesium clock that backed the utcnist systems failed a few weeks ago, so they're offline. I believe the plan is to decommission those servers anyway - NIST doesn't even list them on the NTP status page anymore, and Judah Levine has retired (though he still comes in frequently). Judah told me in the past that the typical plan in this situation is that you reference a spare HP clock with the clock at NIST, then drive it over to JILA backed by some sort of battery and put it in the rack, then send in the broken one for refurb (~$20k-$40k; new box is closer to $75k). The same is true for the WWVB station, should its clocks fail.
There is fiber that connects NIST to CU (it's part of the BRAN - Boulder Research and Administration Network). Typically that's used when comparing some of the new clocks at JILA (like Jun Ye's strontium clock) to NIST's reference. Fun fact: Some years back the group was noticing loss due to the fiber couplers in various closets between JILA & NIST... so they went to the closets and directly spliced the fibers to each other. It's now one single strand of fiber between JILA & NIST Boulder.
That fiber wasn't connected to the clock that backed utcnist though. utcnist's clock was a commercial cesium clock box from HP that was also fed by GPS. This setup was not particularly sensitive to people being in the room or anything.
Another fun fact: utcnist3 was an FPGA developed in-house to respond to NTP traffic. Super cool project, though I didn't have anything to do with it, haha.
Now if the (otherwise very kind) guy in charge of the Bureau international des poids et mesures at Sèvres who did not let me have a look at the references for the kilogram and meter could change his mind, I would appreciate it. For a physicist this is kinda like a cathedral.
Can you restate this part in full technical jargon along with more detail? I'm having a hard time following it
but yes, I also want the juicy details!
so this is the clock
https://en.wikipedia.org/wiki/NIST-F1
or this
https://en.wikipedia.org/wiki/NIST-F2
or there's already F4 too, but it doesn't have a Wikipedia article yet
https://www.nist.gov/news-events/news/2025/04/new-atomic-fou...
but maybe they are talking about the new non-microwave clocks that use Ytterbium-based optical combs ...
or about the Aluminum ion clock
https://www.nist.gov/news-events/news/2025/07/nist-ion-clock...
mind blown
https://www.nist.gov/pml/time-and-frequency-division/time-re...
and you can see a photo of the actual installation here:
https://www.denver7.com/news/front-range/boulder/new-atomic-...
As you can see, the room is clearly not filled with asbestos. Furthermore, the claim is absurd on its face. Asbestos was banned in the U.S. in March 2024 [1] and the clock was commissioned in May 2025.
The rest of the claims are equally questionable. For example:
> The actual port for the actual clock ... isn't actually hooked up to anything ... they use the other ports on the card for actual data transfer
It's hard to make heads or tails of this, but if you read the technical description of the clock you will see that by the time you get to anything in the system that could reasonably be described as a "card" with "ports" you are so far from the business end of the clock that nothing you do could plausibly have an impact on its operation.
> You can't put anything in the room or take anything out. That's how sensitive the clock is.
This claim is also easily debunked using the formula for gravitational time dilation [2]. The accuracy of the clock is ~10^-16. Calculating the mass of an object 1m away from the clock that would produce this effect is left as an exercise, but it's a lot more than the mass of a human. To get a rough idea, the relativistic time dilation on the surface of the earth is <100 μs/day [3]. That is huge by atomic clock standards, but that is the result of 10^24kg of mass. A human is 20 orders of magnitude lighter.
---
[1] https://www.mesotheliomahope.com/legal/legislation/asbestos-...
[2] https://en.wikipedia.org/wiki/Gravitational_time_dilation
[3] https://tf.nist.gov/general/pdf/3278.pdf
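Plugging numbers into the weak-field formula from [2], Δf/f ≈ GM/(rc²): the mass one meter away that would shift a 10^-16 clock is on the order of 10^11 kg, a small mountain rather than a visitor:

```python
# Mass at distance r whose gravitational time dilation equals the clock's
# fractional accuracy: delta_f/f ~= G*M/(r*c^2)  =>  M = frac * r * c^2 / G
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s
frac = 1e-16    # fractional accuracy of the clock
r = 1.0         # distance from the clock, meters

M = frac * r * c**2 / G
print(f"{M:.2e} kg")  # ~1.3e11 kg, about nine orders of magnitude heavier than a person
```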
I thought it was US Space Force / Air Force. Was the Navy previously or currently involved?
In this context, they feed timing updates to the GPS operators https://www.cnmoc.usff.navy.mil/Our-Commands/United-States-N...
Either go with one clock in your NTPd/Chrony configuration, or ≥4.
Yes, if you have 3 they can triangulate, but if one goes offline now you have 2 with no tie-breaker. If you have (at least) 4 servers, then one can go away and triangulation / sanity-checking can still occur with the 3 remaining.
* https://www.meinbergglobal.com/english/products/
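As a sketch, a chrony.conf following the ≥4 rule with operator diversity (hostnames are examples, not endorsements; and note you shouldn't mix leap-smearing sources like time.google.com with non-smearing ones):

```
# /etc/chrony/chrony.conf -- at least four sources, so if one fails,
# three remain and the selection algorithm can still out-vote a falseticker
server time.nist.gov        iburst
server time.cloudflare.com  iburst
pool   2.pool.ntp.org       iburst maxsources 3
```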
On the fridge itself: You may find that the contents are insured against power outages.
As an anecdote, my (completely not-special) homeowner's insurance didn't protest at all about writing a check for the contents of my fridge and freezer when I asked about that, after my house was without power for a couple of weeks following the 2008 derecho. This rather small claim didn't affect my rate in any way that I could perceive.
And to digress a bit: I have a chest freezer. These days I fill up the extra space in the freezer with water -- with "single-use" plastic containers (water bottles, milk jugs) that would normally be landfilled or recycled.
This does a couple of things: On normal days, it increases the thermal mass of the freezer, and that improves the cycle times for the compressor in ways that tend to make it happier over time. In the abnormal event of a long power outage, it also provides a source of ice chilled to 0°F/−18°C that I can relocate into the fridge (or into a cooler, perhaps for transport), to keep cold stuff cold.
It's not a long-term solution, but it'll help ensure that I've got a fairly normal supply of fresh food to eat for a couple of days if the power dips. And it's pretty low-effort on my part. I've probably spent nearly as much effort writing about this system here just now as I have on implementing it.
tl;dr - the fire destroyed over 1,000 homes, two deaths. The local electrical utility, Xcel, was found as a contributing cause from sparking power lines during a strong wind storm. As a result, electrical utilities now cut power to affected areas during strong winds.
[1] https://www.eecis.udel.edu/~mills/ntp.html
[2] https://youtu.be/08jBmCvxkv4?si=WXJCV_v0qlZQK3m4&t=2092
[3] https://youtu.be/08jBmCvxkv4?si=K80ThtYZWcOAxUga&t=3386
That's why the most stable ones are insulated and ovenized[1].
So an AC failure which would lead to higher room temperatures would lead to stronger or more frequent correction by the NTP client, as the local oscillator would drift more.
Not sure about the fire case though. I mean the same applies there but I'm not imaginative enough to think of a realistic scenario where NTP would be useful for averting a fire.
[1]: https://blog.bliley.com/anatomy-of-an-ocxo-oven-controlled-c...
Having a parallel low bandwidth, low power, low waste heat network infrastructure for this suddenly seems useful.
WWV still seems to be up, including voice phone access.
NIST Boulder has a recorded phone number for site status, and it says that as of December 20, the site is closed with no access.
NIST's main web site says they put status info on various social media accounts, but there's no announcement about this.
[1] https://www.nist.gov/campus-status
Being unfamiliar with it, it's hard to tell if this is a minor blip that happens all the time, or if it's potentially a major issue that could cause cascading errors equal to the hype of Y2K.
What a defeatist attitude, I plan to live forever or die trying! /s
Asking for a friend.
And most enterprises, including banks, use databases.
So by bad luck, you may get a couple of transactions reversed in order of time, such as a $20 debit incorrectly happening before a $10 credit, when your bank balance was only $10 prior to both those transactions. So your balance temporarily goes negative.
Now imagine if all those amounts were ten thousand times higher ...
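As a toy illustration (hypothetical amounts and clock skews; real ledgers have safeguards for exactly this):

```python
# Two machines stamp transactions with their own clocks; machine B's clock
# runs 2.5 s slow, so its debit *appears* to precede machine A's credit.
true_order = [("credit", 10, 100.0, 0.0),    # (kind, amount, real time, clock skew)
              ("debit",  20, 101.0, -2.5)]

# Replay the ledger in (skewed) timestamp order.
stamped = sorted((t + skew, kind, amt) for kind, amt, t, skew in true_order)

balance, went_negative = 10, False   # opening balance: $10
for _, kind, amt in stamped:
    balance += amt if kind == "credit" else -amt
    went_negative |= balance < 0

print(balance, went_negative)  # final balance is right, but it dipped to -$10
```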
That practice equates to over $12 billion in fees for 2024
https://finhealthnetwork.org/research/overdraft-nsf-fees-big...
The main problem will be services that assume at least one of the NIST time servers is up. Somewhere, there's going to be something that won't work right when all the NIST NTP servers are down. But what?
If you have information on what they actually are using internally, please share.
Example: https://www.microchip.com/en-us/products/clock-and-timing/co...
If you get a rubidium clock for your garage, you can sync it up with GPS to get an accurate-enough clock for your hobby NTP project, but large research institutions and their expensive contraptions are more elaborate to set up.
Example: https://www.accubeat.com/ntp-ptp-time-servers
It's a huge huge huge misconception that you can just plunk down an "atomic clock", discipline an NTP server with it and get perfect wallclock time out of it forever. That is just not how it works. Two hydrogen masers sitting next to each other will drift. Two globally distributed networks of hydrogen masers will drift. They cannot NOT drift. The universe just be that way.
UTC is by definition a consensus; there is no clock in the entire world that one could say is exactly tracking it.
Google probably has the gear and the global distribution to keep pretty close over 30-60 days, but they are assuredly not trying to keep their own independent time standard. Their goal is to keep events correlated on their own network, and for that they just need good internal distribution and consensus, and they are at the point where doing that internally makes sense. But this is the same problem on any size network.
Honestly, for just NTP, I've never really seen evidence that anything better than a good GPS-disciplined TCXO even matters. The reason they offer these oscillators in such devices is that they usually do additional duties, like running PTP or distributing a local 10 MHz reference, where their specific performance characteristics are more useful. Rubidium, for instance, is very stable at short timescales but has awful long-term stability.
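The divergence is just arithmetic: any two free-running oscillators differ slightly in frequency, and the time offset between them grows linearly with that difference. Illustrative numbers (assumed, not measured):

```python
# Time offset accumulated between two free-running clocks whose fractional
# frequency errors differ by df (dimensionless, e.g. 1e-13 for good masers).
def offset_after(df, seconds):
    return df * seconds

day = 86_400
# Two hydrogen masers differing by 1e-13 in frequency: ~8.6 ns/day apart.
print(offset_after(1e-13, day))
# A 25 ppm wristwatch crystal vs. true time: ~2.2 s/day.
print(offset_after(25e-6, day))
```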
Funny you should say that... https://developers.google.com/time/smear
the NIST hydrogen clock is very expensive and sophisticated.
https://static.googleusercontent.com/media/research.google.c...
https://static.googleusercontent.com/media/research.google.c...
For this though you need to go beyond NTP into PTP which is still usually based on GPS time and atomic clocks
[0] https://www.septentrio.com/en/learn-more/insights/how-gps-br...
It might be difficult to generate enough resolution in measurable events that we can predict accurately enough? Like, I'm guessing the start of a transit or alignment event? Maybe something like predicting the time at which a laser pulse will be returnable from a lunar reflector -- if we can do the prediction accurately enough then we can re-establish time back to the current fixed scale.
I think I'm addressing an event that won't ever happen (all precise and accurate time sources are lost/perturbed), and if it does it won't be important to re-sync in this way. But you know...
The ultimate goal is usually to have a bunch of computers all around the world run synchronised to one clock, within some very small error bound. This enables fancy things like [0].
Usually, this is achieved by having some master clock(s) for each datacenter, which distribute time to other servers using something like NTP or PTP. These clocks, like any other clock, need two things to be useful: an oscillator, to provide ticks, and something by which to set the clock.
In standard off-the-shelf hardware, like the Intel E810 network card, you'll have an OCXO, like [1], with a GPS module. The OCXO provides the ticks, the GPS module provides a timestamp to set the clock with and a pulse for when to set it.
As long as you have GPS reception, even this hardware is extremely accurate. The GPS module provides a new timestamp, potentially accurate to within single-digit nanoseconds ([2] datasheet), every second. These timestamps can be used to adjust the oscillator and/or how its ticks are interpreted, such that you maintain accuracy between the timestamps from GPS.
The problem comes when you lose GPS. Once this happens, you become dependent on the accuracy of the oscillator. An OCXO like [1] can hold to within 1µs accuracy over 4 hours without any corrections, but if you need better than that (either more time below 1µs, or more accurate than 1µs over the same time), you need a better oscillator.
The best oscillators are atomic oscillators. [3] for example can maintain better than 200ns accuracy over 24h.
So for a datacenter application, I think the main reason for an atomic clock is simply retaining extreme accuracy in the event of an outage. For quite reasonable accuracy, a more affordable OCXO works perfectly well.
[0]: https://docs.cloud.google.com/spanner/docs/true-time-externa...
[1]: https://www.microchip.com/en-us/product/OX-221
[2]: https://www.u-blox.com/en/product/zed-f9t-module
[3]: https://www.microchip.com/en-us/products/clock-and-timing/co...
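Working backwards from those holdover specs gives the average fractional frequency error the oscillator has to hold (ignoring aging and temperature, which dominate in practice):

```python
# Required average fractional frequency accuracy for a given holdover spec:
# time_error = frac_error * elapsed  =>  frac_error = time_error / elapsed
def required_frac_error(time_error_s, holdover_s):
    return time_error_s / holdover_s

# 1 microsecond over 4 hours (the OCXO spec discussed above):
print(required_frac_error(1e-6, 4 * 3600))     # ~6.9e-11
# 200 ns over 24 hours (the atomic option):
print(required_frac_error(200e-9, 24 * 3600))  # ~2.3e-12
```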
As you say, the goal is to keep the system clocks on the server fleet tightly aligned, to enable things like TrueTime. But also to have sufficient redundancy and long enough holdover in the absence of GNSS (usually due to hardware or firmware failure on the GNSS receivers) that the likelihood of violating the SLA on global time uncertainty is vanishingly small.
The "global" part is what pushes towards having higher end frequency standards, they want to be able to freewheel for O(days) while maintaining low global uncertainty. Drifting a little from external timescales in that scenario is fine, as long as all their machines drift together as an ensemble.
The deployment I know of was originally rubidium frequency standards disciplined by GNSS, but later that got upgraded to cesium standards to increase accuracy and holdover performance. Likely using an "industrial grade" cesium standard that's fairly readily available, very good but not in the same league as the stuff NIST operates.
I mean, fleets come in all sizes; but if you put one atomic reference in each AZ of each datacenter, there's a fleet. Maybe the references aren't great at distributing time, so you add a few NTP distributors per datacenter too and your fleet is a little bigger. Google's got 42 regions in GCP, so they've got a case for hundreds of machines for time (plus they've invested in spanner which has some pretty strict needs); other clouds are likely similar.
Everyone else is already connecting to load balanced services that rotate through many servers, or have set up their own load balancing / fallbacks. The mistakenly hardcoded configurations should probably be shaken loose anyways.
What atomic clocks are disciplined by NTP anyway? Local GPS disciplining is the standard. If you're using NTP you don't need precision or accuracy in your timekeeping.
Note I didn’t say they are more important than the Internet. That’s a value judgement in any case. I said that NIST level 0 NTP servers are more important to these use cases than they are to the Internet.
Losing NTP for a day is going to affect fuck-all.
I’m sure synchronising all the world’s detectors over direct fiber links would… work, but they aren’t.
Unless you are trying to argue internal synchronisation in which case, obviously, but that has absolutely zero to do with losing NTP for a day, the topic of conversation.
[0] https://shop.nist.gov/ccrz__ProductDetails?sku=78200C
Yes, an individual fiber distribution system can be much more accurate than GNSS time, but availability is what actually matters. Five nines at USNO would get somebody fired.
- GPS
- industrial complexes that synchronize operations (we could include trains)
- telecoms in general (so a level higher than the internet)
(A random search result from Space Force https://www.ssc.spaceforce.mil/Newsroom/Article/4039094/50-y... claims that cell phone tower-to-tower handoff uses GPS-mediated timing, though only at the microsecond level.)
Says it's still mostly up.
Perhaps, "We don't know." will become popular?
The answer is no. Anyone claiming this will have an impact on infrastructure has no evidence backing it up. Table top exercises at best.
The RC oscillator is poor enough that early-days USB communication would fail if running on the RC clock.
Some of it is physical infrastructure (transformers, wire, poles), but a lot of it is labor.
Labor is expensive in the US. It’s a lot of labor to do, plus they’ll likely need regulatory approval, buying out land, and working through easements.
At the same time you have people screaming about how expensive energy is.
Furthermore they have higher priorities, replacing ancient aging infrastructure that’s crumbling and being put on higher load every day.
https://practical.engineering/blog/2021/9/16/repairing-under...
THE TIME RIFT OF 2100: How We lost the Future --- and Gained the Past.
https://tech.slashdot.org/comments.pl?sid=7132077&cid=493082...
https://jila.colorado.edu/news-events/articles/spare-time
Discussed here: https://news.ycombinator.com/item?id=46042946
But yes, good point.
Given two time changes per year I guess something like 1 min per year is acceptable
[0]: https://x.com/russvought/status/2001099488774033692
Maybe their generator failing was DOGE-related, but it wouldn't have happened if state-level shenanigans were better handled.
Don't forget Solar Roof.
Some relevant DOGE effects:
- time and frequency division director quit
- NIST emergency management staff at least 50% vacant
- NIST director of safety retired, and NIST safety was already understaffed compared to DOE labs
- NOAA emergency manager on the same Boulder campus laid off
etc
https://tf.nist.gov/tf-cgi/servers.cgi
NIST currently offers this service free of charge. We require written requests to arrive by U.S. mail or fax containing:
- Your organization’s name, physical address, and fax number (if desired as a reply method).
- One or more point-of-contact personnel or system operators authorized to receive key data and other correspondence: names, phone numbers, email addresses.
- Up to four static IPv4 network addresses under the user’s control which will be allowed to use the unique key. By special arrangement, additional addresses or address ranges may be requested.
- Desired hash function (“key type”). NIST currently supports MD5, SHA1, SHA256, and HMAC-SHA256. Please list any limitations your client software places on key values, if known: maximum length, characters used, or whether hexadecimal key representations are required. If you prefer, please share details about your client software or NTP appliance so we can anticipate key format issues.
- Desired method for NIST’s reply: U.S. mail, fax, or a secure download service operated by the Department of Commerce.
NIST will not use email for sending key data.
P.S. There actually seems to be an improvement over what they had a year ago: they added the secure download service. Previously there was a note that nobody was assigned to actively monitor the mailbox, so if you didn't get your key, you were asked to email them so they could check.
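For anyone curious what the client side looks like once NIST sends the key back, this is a sketch of the classic ntpd symmetric-key setup; the key ID and hex value here are placeholders, not a real NIST key:

```
# /etc/ntp.keys -- key ID, type, and value below are placeholders
20 SHA1 0123456789abcdef0123456789abcdef01234567

# /etc/ntp.conf
keys /etc/ntp.keys
trustedkey 20
server time.nist.gov iburst key 20
```

The `key 20` on the server line tells ntpd to authenticate packets from that server with trusted key ID 20 from the keys file.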
It’s just a good idea, though, not a greedy one… so it won’t happen.
This is some level of eldritch magic that I am aware of but not familiar with, and am interested in learning.
Probably more interesting is how you get a tier 0 site back in sync. NIST rents out these cyberpunk-looking units you can use to get your local frequency standards up to scratch for ~$700/month: https://www.nist.gov/programs-projects/frequency-measurement...
Also thank you for that link, this is exactly the kind of esoteric knowledge that I enjoy learning about
Many data centers and telecom hubs use local GPS/GNSS-disciplined oscillators or atomic clocks and wouldn’t be affected.
Most laptops, smartphones, tablets, etc. would remain accurate enough for days before drift started to affect things.
Kerberos typically requires clocks to be within 5 minutes to prevent replay attacks, so they'd probably be OK.
Sysadmins would need to update hardcoded NTP configurations to point to secondary servers.
If timestamps were REALLY off, TLS certificate validation might fail, but that's highly unlikely.
Distributed databases could be corrupted if transaction ordering fails.
Financial exchanges are often legally required to use time traceable to a national standard like UTC(NIST). A total failure of the NIST distribution layer could potentially trigger a suspension of electronic trading to maintain audit trail integrity.
Modern power grids use synchrophasors that require microsecond-level precision for frequency monitoring. Losing the NIST reference would degrade the grid's ability to respond to load fluctuations, increasing the risk of cascading outages.
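The Kerberos tolerance mentioned above is just a window check on the message timestamp; a minimal sketch (the 300-second default matches Kerberos' conventional clock-skew setting, and `accept_timestamp` is a made-up name, not a Kerberos API):

```python
import time

MAX_SKEW = 300.0  # Kerberos' conventional 5-minute clock-skew window

def accept_timestamp(msg_time, now=None, max_skew=MAX_SKEW):
    """Accept a message only if its timestamp is within max_skew
    seconds of local time -- the window Kerberos uses to bound
    replay attacks."""
    if now is None:
        now = time.time()
    return abs(now - msg_time) <= max_skew
```

A clock drifting at even a second per day would take months to fall outside that window, which is why a short NTP outage is survivable for Kerberos.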
You don’t need to actually sync to NIST. I think most people PTP/PPS to a GPS-connected Grandmaster with high quality crystals.
But one must report deviations from NIST time, so CAT Reporters must track it.
I think you are right: if there is no NIST time signal then there is no properly auditable trading, and thus no trading. MiFID II has similar requirements, but I am unfamiliar with the details.
One of my favorite nerd possessions is my hand-signed letter from Judah Levine with my NIST Authenticated NTP key.
[1] https://www.finra.org/rules-guidance/rulebooks/finra-rules/6...
To quote the ITU: "UTC is based on about 450 atomic clocks, which are maintained in 85 national time laboratories around the world." https://www.itu.int/hub/2023/07/coordinated-universal-time-a...
Beyond this, as other commenters have said, anyone who is really dependent on having exact time (such as telcos, broadcasters, and those running global synchronized databases) should have their own atomic clock fleets. There are thousands and thousands of atomic clocks in these fleets worldwide. Moreover, GPS time, used by many to act as their time reference, is distributed by yet other means.
Nothing bad will happen, except to those who have deliberately made these specific Stratum 0 clocks their only time reference. Anyone who has either left their computer at its factory settings or set up their NTP configuration in accordance with recommended settings will be unaffected by this.
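For reference, "recommended settings" usually means something like the distribution defaults, which spread trust across many volunteer operators rather than pinning a single national lab; a chrony sketch (the public pool hostname here, not NIST's servers):

```
# /etc/chrony.conf -- draw four sources from the public NTP pool, so no
# single stratum-1 operator (NIST included) is a single point of failure
pool pool.ntp.org iburst maxsources 4
```

Losing any one upstream source just drops one of the four candidates; chrony keeps syncing from the rest.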