If astronomers announced that a large asteroid might strike Earth in twenty years, and that we currently had no way to deflect it, nobody would respond by saying, “Come back when you already have the rocket.” We would immediately build better telescopes to track it precisely, refine its trajectory models, and begin developing propulsion systems capable of interception. You do not wait for the cure before improving the measurement. You improve the measurement so that a cure becomes possible, targeted, and effective.
Medicine is no different. Refusing to improve early, probabilistic diagnosis because today’s treatments are modest confuses sequence with outcome. Breakthroughs do not emerge from vague labels and mixed populations. They emerge from precise, quantitative stratification that allows real effects to be seen. The danger is not that we measure too early. It is that we continue making irreversible clinical and research decisions using imprecise, binary classifications while biological insight and therapeutic tools are advancing rapidly. Building the probabilistic layer now is not premature. It is how we make future intervention feasible.
This analogy has a rather fatal flaw, which is that we already know people who've gotten Alzheimer's, and we also know for a fact that people will continue to fall victim to it, at a pretty predictable rate. I.e. the detection has already happened! Anyone who was waiting for a potential victim to appear before researching the cure already has all the reasons they need to research it. Detecting who exactly the next victim is going to be isn't really going to change anything as far as researching a treatment or cure goes. (Unless the person is super important or popular or rich, I guess?)
This is absolutely nothing like the asteroid example, where knowing that anybody at all is going to fall victim to it would itself be news of astronomical proportions. Previously there was a high chance the event wouldn't happen, and now it seems likely it will, so that entirely changes the calculus of your priorities.
This just completely destroys the analogy. (There are other reasons it doesn't fit too, but one is enough.)
The reason the test, and actually knowing who is likely to develop the disease, is useful is that we don't know enough about the early, pre-symptomatic stages of Alzheimer's. A lot of research has focused on purging the plaques which form in the late stages of the disease, and has failed because these seem to be symptomatic rather than causative. The false positives are also very interesting from a research point of view, because if someone is testing positive for the disease but it's not progressing, this may give us a clue about how to control it.
The other slightly sad fact is that it's also quite likely that any curative treatment will need to be started before you start to show symptoms, because the brain has already lost a lot of its resilience by then.
Accurate detection in individuals is still important for testing any potential cure. Otherwise you can only do normal population studies over a very long time and pray that you didn't miss any confounding variables. With this level of accuracy in diagnosing, you can do targeted testing.
While that is true, it doesn't change the sentiment behind “We have no cure. I don’t want to know.” if knowing the diagnosis doesn't help you personally. Sure you might have a sense of responsibility for mankind but you still know you can't do anything to save yourself.
With that said, lifestyle changes can slow down the onset of Alzheimer's, so knowing the diagnosis isn't totally useless.
A lot of people have enough of a sense of responsibility to donate blood, or donate their organs.
I've long had the suspicion that much of what is called Alzheimer's or dementia is some form of prion disease. This study doesn't show that, exactly, but it shows that abnormal proteins may be directly correlated.
So - and I'm not saying this is the case - but suppose that the abnormal proteins identified in this study could be transmitted by blood transfusions or organ transplants. Wouldn't that itself be enough for your diagnosis to help you personally not transmit those proteins to someone else?
If your attitude is that no one else in the world matters once you get a bad diagnosis, then nothing really mattered to you before. Other people are working day and night trying to cure you, so there's no cause for that level of nihilism. You may as well try to help from the vantage point you have.
> Detecting who exactly the next victim is going to be isn't really going to change anything as far as researching a treatment or cure goes.
Your reasoning relies heavily on this statement, which is only true if occurrence is entirely random, which in most cases it is not. A condition can easily mask its own cause, and then you have confounders that you have no way of controlling. If you can build multiple strata with high risk ratios, you can find baseline similarities and differences in those groups. Early detection is highly important for knowing these confounders in the first place and then controlling for them; and, as GP mentions, it allows for more targeted research into treatment. Without this we could easily spend all the research effort on the effect (symptom) of a condition without even approaching treatment of the cause, i.e. prevention.
A very similar thing has happened with the infamous atherosclerotic plaques. AFAIK (correct me if you are aware of any evidence) there is currently no mechanistic model of how these atherosclerotic plaques form. Yet we spend so much effort in lowering the symptomatic side of increased cholesterol/LDL (which has well-known positives) even if there are known metabolic pathways for LDL increase, based entirely on correlational studies, when LDL is not even close to being the best predictor of cardiovascular conditions. LDL just happens to be easy to measure in a blood test and easy to control with oral medication.
And even if occurrence were random, there might be effects that can only be measured early on. By identifying patients before the onset of serious symptoms we can get a much more comprehensive medical history than by only looking once the symptoms are bad enough to make Alzheimer's obvious, or by monitoring large strata of the population in hopes of including enough future Alzheimer's patients in the sample.
It may not change much as far as researching a treatment or cure goes, but it may help with other things, like being better prepared for the future, like arranging for a health care worker or getting family members to help out or look out for you, things you might not be able to do at a later stage of Alzheimer's.
While there aren't any cures yet, certain treatments and lifestyle strategies may slow its progression and maintain quality of life for as long as possible. (And the sooner you start with that, the better.)
More information is not always better for the patient. If you could detect the disease 5 years before symptoms began, there are certain psychological harms that come with that knowledge. These must be balanced against the things you mention about "slowing" the disease (unclear if any treatments do much for a given individual) and planning your future. You talk about quality of life, but quality of life declines the MOMENT you learn that you have a progressive, incurable disease that will slowly rob you of your mind. It's not clear at all that knowing about your disease earlier is actually better for anyone.
I understand the concern about anxiety about their impending condition, but medical providers must not paternalistically decide to withhold a diagnosis from patients, at least not in all cases.
If I got an early diagnosis, it would motivate me to get my affairs in order to lessen the burden on my family and check off some bucket list items before it's too late. Don't rob me of that opportunity.
Before ordering the test, ask patients "If you were going to get Alzheimer's, would you want to know?"
Not really. Alzheimer's is about 60% of dementia, and is frequently misdiagnosed without a full workup. As primary care shifts to more poorly trained providers and doc-in-the-box delivery models, you'll get fewer workups and more misdiagnoses.
A more objective blood test will make for more accurate diagnoses and better treatment.
If you know they’re going to have it twenty years early, then you can try out preventative treatments. You can look at causes. You can get them to put power of attorney in place and prepare for their future.
Why are you so furious about the idea of people knowing?
No one is furious, but it is well established in epidemiology that more knowledge is not necessarily better for the patient. There are psychological harms that occur from the moment you learn that you have a disease that will slowly rob you of your mind. In general, you want fewer harms in your life.
> If astronomers announced that a large asteroid might strike Earth in twenty years, and that we currently had no way to deflect it, nobody would respond by saying, “Come back when you already have the rocket.”
I don’t think the analogy fits, for a couple reasons.
1. People not wanting to know whether they have Alzheimer’s is because of the fear of a fate worse than death — living with Alzheimer’s.
2. People not wanting to know whether they have Alzheimer’s is not the same as not wanting a way to detect it. As you said, being able to measure it may help lead to a cure/treatment. I doubt people are against improving detection — they may just not want the detection to be applied personally.
Cure is the wrong word. Alzheimer’s can be best described as a failure of a system and "debris" accumulates faster than it can be "cleared". There are many moving parts and everyone is unique about the cause of their system failure.
Wrote up my current systems understanding here https://metamagic.substack.com/p/the-alzheimers-equation, but it makes clear why treatments that target only one variable are mathematically doomed to fail to work on everyone and why there will never be a single "cure". It explains without needing to read 10,000 papers why we keep getting research talking about treatment X helps in some, but not all cases or symptom Y is associated in some, but not all, etc.
This is some personal opinion that I would bet the vast majority of Alzheimer's researchers would not actually agree with. The current consensus is that Alzheimer's is a particular disease, or a cluster of similar diseases.
I'm not saying you're wrong, just that the level of confidence in your assertions is not warranted.
After spending years tracking through the genetics, conditions, lab work, and research papers, and seeing individuals years into the condition, this model is the best I have and explains everything I currently know: why the cluster of conditions results in the same outcome, why some treatments help some folks but not others.
But that is sort of the point of science, you take all the evidence you have and create a hypothesis and iterate as you get more evidence. If I find evidence that suggests something else then I will be happy to tweak or abandon this. My level of confidence comes from the existing evidence and lack of evidence otherwise.
Exactly - there are things that I would change now to make sure I make things easier for myself and, more importantly, easier for the people around me.
Like what? You should already have a will, life insurance, etc. even without the disease. All you're doing by knowing earlier is causing psychological harms to yourself and the people you tell, adding more years of anxiety, grief, and sadness for no gain. Think about the bigger picture.
I think there are many people (myself included) whose plans would change dramatically upon discovery of Alzheimer's, dementia, or some other degenerative disease. I might consider moving to somewhere with more liberal assisted suicide laws for example.
My genetics are such that I'm more likely to drop dead of a heart attack too young.
If I were likely to develop Alzheimer's, I'd make more extensive (and more expensive) arrangements for power of attorney and trusts to shield assets while I was still competent to do so.
Notarizing any wishes against some medical procedures in case a sudden accident ruins your ability to dissent prevents doctors from being forced to keep your body alive as long as possible.
That doesn't apply to Alzheimer's disease directly though. If you don't want to live when your conscious life is limited to short flashes of awareness among a deeply terrifying melange of visions of the past and hallucinations, DNR laws don't in any way force or even allow doctors to euthanize you. You can persist in this state for many years without ever triggering a DNR check.
Absolutely, the belief in scientific circles is that the way forward to develop cures (or at least treatments that slow down the progression) is to treat it early. By the time you start showing clear symptoms, your brain is already mush. If you have a potential treatment that attacks the root cause, you would have to catch the very early, pre-clinical stages of the disease, but without good diagnostics there is no way to do that (short of giving the treatment to a wide swath of the population, like a vaccine... but that gets expensive very quickly, and side effects become a bigger worry).
These are 50-year-olds, not elderly retirees. What if knowing caused your employer to deny you a promotion? I'm in the military. This sort of diagnosis on one's file could have real impact on future prospects. People already fear ADHD tests for the same reason. I know a guy who is leaving the military after 20+ years flying transports. He is applying to airlines. If you were an airline, would you hire an experienced pilot with a positive Alzheimer's diagnosis in their medical data?
> We would immediately build better telescopes to track it precisely, refine its trajectory models, and begin developing propulsion systems capable of interception
That's not what would happen. We wouldn't mobilize. We'd fragment. Within days, the prediction would be declared partisan. One bloc would call it settled science; another would call it statistical hysteria. Billionaires would quietly commission private shelters while publicly funding studies questioning whether the asteroid even qualified as "large." News panels would debate whether the projected impact zone was being unfairly politicized. Conspiracy channels would insist the asteroid was fabricated to justify global governance. Others would insist the real asteroid was being hidden. Amateur analysts would flood the internet with homemade trajectory charts proving the professionals wrong. Death threats would arrive in astronomers' inboxes faster than research grants.
“We have no cure. I don’t want to know.” isn’t the same as “We have no cure. We as a society don’t want to know.”
People can be fine with being tested so that epidemiologists can work on growing our knowledge and, at the same time, not wanting to know their own diagnosis.
Maybe if you can keep the results a secret from health insurance companies you’d have a point. However, not everyone has coverage under a large organization’s umbrella, and these people might be denied coverage.
Alzheimer's isn't new. You should compare it with a situation like this:
Imagine you're born and you eventually learn that there's an asteroid on a collision course with earth, from way before you were born. It's going to take many years to get here and you may die before it hits and so far no scientists have been able to come up with a way to deflect it. Do you care?
Adding newness to the situation makes it wildly different.
We have no cure now, but that may change and may depend on early detection, just as taking meds now can slow the onset. "I don't want to know" makes no sense to me. You would plan your whole life differently, and it would actually be quite liberating once you'd come to terms with it.
PSA to those with family affected by dementia/Alzheimer's at a relatively early age (say <70yo): Get them tested for STDs, specifically Syphilis.
Left untreated for a very long time (decade+), it spreads to the brain and causes dementia among other things. Older generations with stigmas, taboos, or from lower educational backgrounds seem (to me) less likely to get tested, so it seems plausible.
Source: Have recently discovered this myself with a family member from their neurologist.
> Source: Have recently discovered this myself with a family member from their neurologist.
The reason this was detected is that such testing is a standard practice with new dementia patients—among many other tests that identify etiologies of dementia.
> Definitely not the case, not in all regions, not even within the same country
Perhaps—but it's also possible that whoever was in the room with the patient declined STI testing (which I have seen, and which sometimes reflects lack of knowledge around extramarital affairs).
I'm just trying to make it clear that there are dozens of reversible/non-degenerative causes of dementia and there is no way that a fully-trained neurologist doesn't have these memorized.
It's like not knowing what a type system is as a programmer with a reputable degree—impossible.
edit: in fairness, many doctors have unease around discussing sex/infidelity—but the PSA maybe should be to encourage your doctor to put aside concerns around niceties in your parent's care.
He was shot in 1918, leaving bullets in his body and leaving him weakened; then in 1922 he suffered at least three debilitating strokes. He lost his ability to speak and had to learn to speak again. He even proposed assisted suicide because he was suffering so much. He then had a final, fatal stroke in 1924. I don't think "he looks crazy so he must have had some STD" is medically accurate or remotely appropriate on what's supposed to be an academic forum. He most likely had brain damage towards the end, and his illness was of course reflected in how he looked. Lenin's eyes reflected his post-stroke damage.
He was quite sane in his life and his work expanding on Marx is of course extremely important.
A detail, but an important one: the blood test brought the initial diagnosis closer to the final one. The result reflects the agreement between both diagnoses. The usefulness of the blood test depends on the quality of the final diagnosis, which can still be wrong.
For a disease which (to my knowledge) can’t be slowed down or reversed, I think it’s a fair question why we would want to detect Alzheimer’s. Maybe there are other reasons, but my suspicion is that eventually we will be able to treat it, and an easy detection method significantly widens the pool of subjects to study later on.
If it turns out that driving a Prius on Tuesdays slows down Alzheimer’s, a larger pool of subjects would allow us to figure that out.
I would personally want to know as early as possible, so I could get my affairs in order and register my wishes around end of life care and euthanasia while I am still recognised as having full mental capacity.
It's also better for people around the Alzheimer's patient, as it will let them understand why someone's personality and behaviours may be changing, and possibly let them be bit more forgiving of such changes. It will also give family more time to plan and understand the health and community services and support are offered wherever they live.
I know two people who have been taking the new monoclonal antibody treatment for it. One who was a bit further along when she started, and did not show any significant improvement. The one who started while she was still in the early stages has completely arrested her descent. She hasn't recovered much of what she already lost, but she's still able to live independently and enjoy life, and her mental acuity scores are (slightly) better than they were last year. That's a hell of a thing.
I also know someone who's significantly better now than they were a few years ago thanks to alzheimer's medication. And Trontinemab, which is currently in phase III trials I believe, seems even better than what is publicly available as it crosses the blood-brain barrier more readily. We're entering a brighter future for alzheimer's patients.
Completely arrested? I don't know. But it appears to be arrested in ways that matter for mental acuity, for now. I've taken care of a parent with Alzheimer's, and helped several other caregivers over the years with their own families' journeys, and one thing I can tell you is that I have never, ever seen an actual halting of the progression for this long. The descent is usually a stairstep pattern, but the steps are on the order of weeks to a month or two. My friend has been stable for a year.
This is all new. There is research hinting at Alzheimer's subtypes, some of which are more likely to respond than others. Even halting the decline is a huge potential breakthrough.
The way I’ve watched Alzheimer’s work in a family member is that it’s a step down function rather than gradual. And once something is lost, it doesn’t come back. So anything that can delay the next step even just for months is a win right now.
That's 4–6 months in the 18 months the trials lasted for, i.e. about a 30% slowdown of progression. The open-label extensions suggest this relative slowdown seems to continue at least to the 4-year mark (at which point it would have bought you over a year of time): https://www.alzforum.org/news/conference-coverage/signs-last...
Time will tell if the 30% slowdown continues beyond four years, and/or if earlier treatment with more effective amyloid clearance from newer drugs has greater effects. The science suggests it should.
It's very useful to understand what you're struggling with even if it's not curable. It explains your symptoms and your experience, and helps you understand what you're going through. Understanding that you're suffering from something incurable is also helpful in not looking for other ineffective methods to cure a mysterious illness.
> For a disease which (to my knowledge) can’t be slowed down or reversed, I think it’s a fair question why we would want to detect Alzheimer’s. Maybe there are other reasons, but my suspicion is that eventually we will be able to treat it, and an easy detection method significantly widens the pool of subjects to study later on.
Your point at the end is essentially correct. There's a couple of reasons that come to my mind:
Early detection lets us test cures more quickly. You can see if the treatment is working without waiting 30 years for symptoms to develop or not. If prevention is all that works, we can verify lifestyle changes, again without having to wait 30 years for symptoms to develop.
Early detection means there's more of a chance of any future treatment succeeding and the patient returning to a normal life. Think of early detection of cancer or heart disease meaning you can be treated with less risky medication and procedures and minimise the damage being done.
Most people get a dementia (or related) diagnosis after they are deep enough in it that they can't do much about it or get their affairs in order.
My grandfather had a "fall" at work; he then left that job, and held down 2 more engineering jobs before he was diagnosed with a condition causing strokes, and subsequent dementia. I got the distinct impression he thought he had more time, but rapidly declined.
If he knew he was short of time before his rapid decline he probably would have done things differently. Like not buying a house he would later have to sell to pay for aged care.
If he knew he was at risk of a workplace accident he probably wouldn't have worked as an after hours safety engineer at a major treatment plant, where if the worst had happened he could have endangered others.
At a personal level, I've been through this with my grandfather.
I want to know. My family wants to know. I want to prepare because there are things I want to do today that I know I won't be able to do in the future.
In many ways, it's just like many terminal cancer diagnoses. You're going to lose that person, but you have some time.
But it is a wildly variable, almost meaningless diagnosis. Three of my four grandparents got an Alzheimer's diagnosis, as did my mom and mother-in-law. The variation in progression and symptoms is so wide that it really seems like a catch-all. One grandmother was fine until about 72, then within 2 years forgot who people were, within 4 years had lost all executive function, and passed away. The other was diagnosed in her early 80s and lived to be 96 with no major progression, just slightly more repeating, but never forgetting people or losing the ability to talk. Similar dichotomy between my mother and mother-in-law, but with considerably different presentations of symptoms.
It's a weird disease, and IMO not even really a disease: it's a bunch of different causes of cognitive impairment under one umbrella that should be separated out further to find actual causes and treatments.
The accuracy of this test is nowhere nearly good enough to do population-wide screening. The clinical setting for this test is memory clinics in which Alzheimers is already relatively highly likely differentially, and even there you're going to get a surprising number of false positives.
(There's enough info in the supplemental link on this page to have an LLM do the Bayes math for you.)
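The Bayes math is simple enough to sketch directly. The figures below are purely illustrative (a 90%-sensitive, 90%-specific test, with prevalence varied), not the paper's actual numbers:

```python
def ppv(sensitivity, specificity, prevalence):
    """P(disease | positive test), by Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical population-wide screen: only 1% actually have the disease.
print(round(ppv(0.90, 0.90, 0.01), 3))  # -> 0.083

# Same test in a memory clinic where, say, 50% of patients have it.
print(round(ppv(0.90, 0.90, 0.50), 3))  # -> 0.9
```

Under those assumptions, fewer than one in ten positives from a general-population screen would be real, while the identical test in a memory clinic is right nine times out of ten. That prevalence dependence is the whole argument for restricting the test to symptomatic populations.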
> doctors correctly diagnosed Alzheimer's in 75.5% of cases, but when incorporating blood test results, diagnostic accuracy increased to 94.5%
These patients are already seeing doctors. Would you rather your doctor hide the diagnosis just because your disease isn't curable (for now)? It's not like we're testing the whole population en masse.
the other big reason is clinical trials. if you can identify people who are pre-symptomatic but on a trajectory toward Alzheimer's, you can actually test whether early interventions work. that's been one of the big blockers for drug development -- by the time people show symptoms, they're often too far along for treatments to show effect. a reliable blood test changes the trial design fundamentally.
Being able to know someone's risk factor would be important for how we treat elderly people. I know someone who is 85 and super sharp (previously worked as a corporate accountant and banker); they still have a better memory than a lot of 40-50 year olds, and yet they are constantly harassed by eldercare "agents" for the state, because whenever they make an investment decision that is even slightly questionable they get reported to the state by the bank. Sometimes the bank refuses to authorize transactions. If they could conclusively prove they aren't at risk, I think they would be left alone much more often.
If the patient still has periods of lucidity but the disease is suspected to be advancing, knowing they have it could prompt them to get their legal affairs in order.
I assume this is hugely beneficial for research on intervention methods, not for treatment. I think everyone is focusing on "I'd rather know" but imagine if you could get larger populations with a diagnosis earlier on, how impactful that would be for testing an intervention?
Having struggled with hard to diagnose health issues before, I can’t emphasize enough how much of a relief it is to put a name on the disease that is causing you so much harm.
It is frankly shocking to think disease diagnosis would be a useless thing
Not saying anything about the article at hand, but assuming we were able to detect it with such certainty, I think it would greatly increase the funding, rigor, and breadth of research devoted to finding a cure or treatment that actually worked.
For 20-ish% of Alzheimer's patients, the Shingles vaccine may be a treatment. This has been suspected for a few years now but has received recent confirmation studies.
While the study was about the shingles vaccine, I wonder if having passed normally through shingles influences positively or negatively the chances of later developing Alzheimer's.
I wonder if after services like 23andMe became popular and millions of people found out they have the Alzheimer genes, did donations towards brain research rise?
Nobody is ever going to do that with this test, because the overwhelming majority of positive test results in a population-wide sample will be false, and the proposed diagnosis is devastating. This is a test for people who already have symptomatic dementia that helps confirm the diagnosis.
Well this test isn't for whether you will get Alzheimer's, so that disqualifies it before we even consider the accuracy.
But apparently your odds go above 30% if you live long enough, so if you could test for being in that cohort I think that result would be too common to actually be devastating.
> Tell 50 million people they’re likely to have Alzheimer’s then tell them where to donate towards a cure, or treatments to slow it by a decade.
Pharmaceutical companies have spent something like $50 billion on developing Alzheimer's drugs with, well, the most furtive of straw-grasping to show for it. It's probably the most expensive single disease target (especially as things like cancer are families of diseases)... the failure to have good results isn't for lack of money, and merely throwing more money at it is unlikely to actually make progress towards meaningful treatments.
It just seems really obvious to me that it's not one disease. One problem with the research is that there is SO much money. It's corrupting. There's a whole thing about the plaque cartel and if you aren't testing around a possibly flawed concept the availability of funds is much lower.
I just feel the thinking is off, it's like we are trying to treat cuts by removing scabs and scar tissue. We really need deep investigation on the sources, which I feel in many cases are industrial chemicals and how some people's body / immune system respond to them.
One of the most compelling studies I saw was how distance from a Golf Course predicted neurodegenerative diseases, based on their use of certain pesticides.
Trontinemab is in trials right now and has 92% of patients achieving low amyloid levels. And more people should be able to take it as it causes less brain swelling (ARIA-E). I'm unaffiliated, I just follow medical research in my free time. But I'm quite hopeful about this medication
I understand the "detect deadly progression but no cure" problem; this was the same rationale people used when Huntington disease could be verified in diagnostics. Many people don't want to know, but some want to know, in particular as you can manage some things here or there - diet affects many things, for instance, even aside from metabolic genetic defects. And for any (molecular) therapy at a later time you need to understand the molecular basis to some extent. Some things can be found out via trial and error (vaccination and before) but for some diseases that can not work. Alzheimer's is quite complex.
If a loved one is suffering from this, this diagnostic would allow for interventions such as guardianship to assume financial and logistical responsibility for them, with less subjective decision-making based on observations alone.
Even though it cannot be reversed or eradicated (yet, let's hope) detection can allow individuals to adopt interventions that help either adjust their lives to better cope with its progression or help mitigate some of the detrimental behavioral consequences. In addition, if you have family to care for it may be impetus to get certain things in order for them before later stages of the disease, etc. It's horrible and bleak, but I could certainly see why one might want to know.
In the lucky case, it can also relieve anxiety. Even though false negatives may still be possible, receiving a negative detection might give people who have anxiety about certain symptoms relief, since they can rule out (rightly or wrongly) a pretty severe disease.
That doesn't bother me, but what is actually suspicious is persistently mentioning only "accuracy" and never sensitivity / specificity / precision, etc.
Basically, almost everybody doesn't have Alzheimer's. Sampling from the general population you can get better than 94.5% accuracy just by returning negative on every test. You have to know sensitivity and specificity separately to make any informed judgement ... which they try extremely hard not to tell you.
I.e. it takes the original 75% accuracy or so and boosts it by another ~20 percentage points.
The problem is that the assessment itself is slow, expensive and requires skill.
What we really want from a test is high specificity (a positive test means you really have it) and high sensitivity (a negative test means you really don't).
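The base-rate point above can be made concrete with a tiny sketch. The numbers here are hypothetical (nothing is taken from the paper); they just show why "accuracy" alone says almost nothing at low prevalence:

```python
# Hypothetical numbers: why raw "accuracy" misleads at low prevalence.
# A degenerate "test" that answers negative for everyone is highly accurate
# whenever the disease is rare, yet it has 0% sensitivity and finds nobody.

def always_negative_accuracy(prevalence):
    """Accuracy of a test that returns 'negative' for every single patient."""
    return 1.0 - prevalence

def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# At 5.5% prevalence, "say no to everyone" already scores 94.5% accuracy...
print(f"{always_negative_accuracy(0.055):.1%}")   # 94.5%

# ...while its sensitivity is zero: out of 55 sick and 945 healthy people
# it produces 0 TP, 55 FN, 945 TN, 0 FP.
sens, spec = sensitivity_specificity(0, 55, 945, 0)
print(sens, spec)                                 # 0.0 1.0
```

This is exactly why a reported accuracy figure, without sensitivity and specificity, can't be interpreted on its own.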
> Compared with the final diagnosis, the pre-biomarker diagnosis was maintained in 151/200 cases (75.5%) (Kappa = 0.576), while the post-biomarker diagnosis was maintained in 189/200 cases (94.5%) (Kappa = 0.906).
Even without this method, doctors have been able to give a diagnosis with 75.5% accuracy (according to the paper's claim).
Maybe I've misunderstood something, but how can they know the accuracy of the test? It is the best test out there, so if it misses a diagnosis, how do they reliably catch the false negative?
Besides being a bad joke, this is in terrible taste on a thread read by people with Alzheimer's patients in their lives, and it violates HN's rules that discussion should be valuable and inspire curiosity.
One interesting check in this study might be to see when (if at all) any of the participants had taken this vaccine, and what the impact might be on an Alzheimer's diagnosis.
Purely anecdotal, but I witnessed a person starting to experience a severe cognitive decline right after the two doses of the Shingles Vaccine. It can surely be a coincidence, but I was very surprised when I read about this study.
This needs to include life-changing false positive rates. Imagine being given a diagnosis like this - people around you who know, and any corporations who can sniff it out by snooping on your communications, can lead to much rejection early in life. What happens when the diagnosis comes back positive when it shouldn't have?
https://doi.org/10.1038/s41591-025-03622-w this is the paper they're basing the research on. In primary care, the accuracy rates are in the 80s, so that's something like a 17% false positive rate. That's still about 5-to-1 odds of getting a correct result, though. It's much better than nothing.
I got bad news about the specificity for most things this serious. Think the only one we absolutely nail is infectious disease detection.
Spoilers: it's anywhere between 1-15% and 5-30% for false positives, and 1-15% / 5-40% for false negatives. That's imaging, biomarkers, cancer screenings, etc.
Like, where do you think the concept of "second opinions" came from? Whimsy? Let's go ask a second doctor if I actually have cancer, it'll be fun!
This statement is quite broad and misses several important factors.
First of all, a test's sensitivity and specificity. The math in your example assumes a balanced test, but on what basis? The math comes out quite different for high-sensitivity or high-specificity tests. (Unfortunately, I could not find the numbers for the test in the linked article.)
Secondly, whom are we testing? The prevalence rate in your example (1%) is unrealistically low even for the general population. But would we screen the general population? No, we'd screen high-risk groups: the elderly, those with certain APOE genotypes, etc. Predictive values of a test depend hugely on the prevalence rate.
Lastly, it depends on how the results are used. If it's a high-sensitivity test used to decide whom to send to the next tier in a multi-tier diagnostic system, it could actually be quite effective at that (very rarely missing the disease while greatly reducing the need for more expensive or more invasive testing).
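A quick sketch of that prevalence dependence. The 95%/95% sensitivity/specificity figures are assumed for illustration (the article doesn't report the test's actual values):

```python
# Illustrative only: how predictive values shift with prevalence
# for a hypothetical test with 95% sensitivity and 95% specificity.

def predictive_values(sens, spec, prev):
    """Return (PPV, NPV) from sensitivity, specificity, and prevalence."""
    tp = sens * prev               # true positives (per unit population)
    fp = (1 - spec) * (1 - prev)   # false positives
    fn = (1 - sens) * prev         # false negatives
    tn = spec * (1 - prev)         # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# General population vs. increasingly high-risk clinic populations:
for prev in (0.01, 0.10, 0.30):
    ppv, npv = predictive_values(0.95, 0.95, prev)
    print(f"prevalence {prev:.0%}: PPV {ppv:.1%}, NPV {npv:.1%}")
```

With these assumed figures, PPV climbs from roughly 16% at 1% prevalence to about 89% at 30%, while NPV stays above 97% throughout - which is why the same test can be poor as a population screen yet useful as the first tier in a high-risk clinic.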
This just completely destroys the analogy. (There are other reasons it doesn't fit too, but one is enough.)
The other slightly sad fact is that it is also quite likely that any curative treatment will need to be started before you start to show symptoms, because the brain has already lost a lot of its resilience by then.
With that said, lifestyle changes can slow down the onset of Alzheimer's, so knowing the diagnosis isn't totally useless.
I've long had the suspicion that much of what is called Alzheimer's or dementia is some form of prion disease. This study doesn't show that, exactly, but it shows that abnormal proteins may be directly correlated.
So - and I'm not saying this is the case - but suppose that the abnormal proteins identified in this study could be transmitted by blood transfusions or organ transplants. Wouldn't that itself be enough for your diagnosis to help you personally not transmit those proteins to someone else?
If your attitude is that no one else in the world matters once you get a bad diagnosis, then nothing really mattered to you before. Other people are working day and night trying to cure you, so there's no cause for that level of nihilism. You may as well try to help from the vantage point you have.
Your reasoning relies heavily on this statement, which is only true if occurrence is entirely random, which in most cases is not true. A condition can easily mask its own cause, and then you have confounders that you have no way of controlling. If you can build multiple strata with high risk ratios, you can find baseline similarities and differences in those groups. Early detection is highly important in identifying these confounders in the first place and then controlling for them; and, as GP mentions, it allows for more targeted research into treatment. Without this we could easily spend all the research effort on the effect (symptom) of a condition without even approaching treatment of the cause, i.e. prevention.
A very similar thing has happened with the infamous atherosclerotic plaques. AFAIK (correct me if you are aware of any evidence) there is currently no mechanistic model of how these atherosclerotic plaques form. Yet we spend so much effort lowering the symptomatic side of increased cholesterol/LDL (which has well-known positives), based entirely on correlational studies, even though there are known metabolic pathways for LDL increase, and even though LDL is not even close to being the best predictor of cardiovascular conditions. LDL just happens to be easy to measure in a blood test and easy to control with oral medication.
While there aren't any cures yet, certain treatments and lifestyle strategies may slow its progression and maintain quality of life for as long as possible. (And the sooner you start with that, the better.)
If I got an early diagnosis, it would motivate me to get my affairs in order to lessen the burden on my family and check off some bucket list items before it's too late. Don't rob me of that opportunity.
Before ordering the test, ask patients "If you were going to get Alzheimer's, would you want to know?"
A more objective blood test will make for more accurate diagnoses and better treatment.
Why are you so furious about the idea of people knowing?
> If astronomers announced that a large asteroid might strike Earth in twenty years, and that we currently had no way to deflect it, nobody would respond by saying, “Come back when you already have the rocket.”
I don’t think the analogy fits, for a couple reasons.
1. People not wanting to know whether they have Alzheimer’s is because of the fear of a fate worse than death — living with Alzheimer’s.
2. People not wanting to know whether they have Alzheimer's is not the same as not wanting a way to detect it. As you said, being able to measure it may help lead to a cure/treatment. I doubt people are against improving detection — they may just not want the detection to be applied personally.
Wrote up my current systems understanding here https://metamagic.substack.com/p/the-alzheimers-equation, but it makes clear why treatments that target only one variable are mathematically doomed to fail to work for everyone, and why there will never be a single "cure". It explains, without needing to read 10,000 papers, why we keep seeing research reporting that treatment X helps in some but not all cases, or that symptom Y is associated in some but not all, etc.
I'm not saying you're wrong, just that the level of confidence in your assertions is not warranted.
But that is sort of the point of science, you take all the evidence you have and create a hypothesis and iterate as you get more evidence. If I find evidence that suggests something else then I will be happy to tweak or abandon this. My level of confidence comes from the existing evidence and lack of evidence otherwise.
It is a tale as old as time. See the story behind the term. ultracrepidarian: https://en.wiktionary.org/wiki/ultracrepidarian#English
I am absolutely not going to plan on a care facility right now. That sounds absolutely bogus.
If I were likely to develop Alzheimer's, I'd make increasingly extensive arrangements for power of attorney and trusts to shield assets while I was still competent to do so.
https://www.imdb.com/title/tt11286314/
That's not what would happen. We wouldn't mobilize. We'd fragment. Within days, the prediction would be declared partisan. One bloc would call it settled science; another would call it statistical hysteria. Billionaires would quietly commission private shelters while publicly funding studies questioning whether the asteroid even qualified as "large." News panels would debate whether the projected impact zone was being unfairly politicized. Conspiracy channels would insist the asteroid was fabricated to justify global governance. Others would insist the real asteroid was being hidden. Amateur analysts would flood the internet with homemade trajectory charts proving the professionals wrong. Death threats would arrive in astronomers' inboxes faster than research grants.
People can be fine with being tested so that epidemiologists can work on growing our knowledge and, at the same time, not wanting to know their own diagnosis.
I do want to know.
If it is positive, that is still helping you accurately deal with whatever is happening to you.
Imagine you're born and you eventually learn that there's an asteroid on a collision course with earth, from way before you were born. It's going to take many years to get here and you may die before it hits and so far no scientists have been able to come up with a way to deflect it. Do you care?
Adding newness to the situation makes it wildly different.
Left untreated for a very long time (decade+), it spreads to the brain and causes dementia among other things. Older generations with stigmas, taboos, or from lower educational backgrounds seem (to me) less likely to get tested, so it seems plausible.
Source: Have recently discovered this myself with a family member from their neurologist.
The reason this was detected is that such testing is a standard practice with new dementia patients—among many other tests that identify etiologies of dementia.
No need for a 'PSA'.
We only found out for my family member after the 3rd neurologist's opinion after ~2 years of this.
Not everyone does their professional due diligence - cue endless anecdata about the healthcare industry. It's good to just be aware.
Perhaps—but it's also possible that whoever was in the room with the patient declined STI testing (which I have seen, and which sometimes reflects lack of knowledge around extramarital affairs).
I'm just trying to make it clear that there are dozens of reversible/non-degenerative causes of dementia and there is no way that a fully-trained neurologist doesn't have these memorized.
It's like not knowing what a type system is as a programmer with a reputable degree—impossible.
edit: in fairness, many doctors have unease around discussing sex/infidelity—but the PSA maybe should be to encourage your doctor to put aside concerns around niceties in your parent's care.
And "The effect of shingles vaccination at different stages of dementia" https://news.ycombinator.com/item?id=46164646 (yes, also the Herpes family).
He was quite sane in his life and his work expanding on Marx is of course extremely important.
If it turns out that driving a Prius on Tuesdays slows down Alzheimer’s, a larger pool of subjects would allow us to figure that out.
It's also better for people around the Alzheimer's patient, as it will let them understand why someone's personality and behaviours may be changing, and possibly let them be a bit more forgiving of such changes. It will also give family more time to plan and to understand the health and community services and support offered wherever they live.
The best these types of drugs can do is give you a few months' extra window (say 4-6 months). They're not a cure, sadly.
This is all new. There is research hinting at Alzheimer's subtypes, some of which are more likely to respond than others. Even halting the decline is a huge potential breakthrough.
Time will tell if the 30% slowdown continues beyond four years, and/or if earlier treatment with more effective amyloid clearance from newer drugs has greater effects. The science suggests it should.
> her mental acuity scores are (slightly) better than they were last year
Your point at the end is essentially correct. There are a couple of reasons that come to mind:
Early detection lets us test cures more quickly. You can see if the treatment is working without waiting 30 years for symptoms to develop or not. If prevention is all that works, we can verify lifestyle changes, again without having to wait 30 years for symptoms to develop.
Early detection means there's more of a chance of any future treatment succeeding and the patient returning to a normal life. Think of early detection of cancer or heart disease meaning you can be treated with less risky medication and procedures and minimise the damage being done.
My grandfather had a "fall" at work, he then left that job, and held down 2 more engineering jobs before he was diagnosed with a condition causing strokes and subsequent dementia. I got the distinct impression he thought he had more time, but he rapidly declined.
If he knew he was short of time before his rapid decline he probably would have done things differently. Like not buying a house he would later have to sell to pay for aged care.
If he knew he was at risk of a workplace accident he probably wouldn't have worked as an after hours safety engineer at a major treatment plant, where if the worst had happened he could have endangered others.
At a personal level, I've been through this with my grandfather.
I want to know. My family wants to know. I want to prepare because there are things I want to do today that I know I won't be able to do in the future.
In many ways, it's just like many terminal cancer diagnoses. You're going to lose that person, but you have some time.
It's a weird disease, and IMO not even really one disease: it's a bunch of different causes of cognitive impairment under one umbrella that should be separated out further to find actual causes and treatments.
(There's enough info in the supplemental link on this page to have an LLM do the Bayes math for you.)
Looks like my prior was not too bad :)
These patients are already seeing doctors. Would you rather your doctor hide the diagnosis just because your disease isn't curable (for now)? It's not like we're testing the whole population en masse.
Getting an accurate diagnosis is always important. Cognitive decline could be caused by other problems, some of which are more treatable than others.
If this test came back negative it would suggest extra testing to rule out other conditions like a brain tumor or hydrocephalus.
It is frankly shocking to think disease diagnosis would be a useless thing.
https://www.alzheimers.org.uk/news/2025-11-18/promising-rese...
The test is optional. Feel free to skip it.
Tell 50 million people they’re likely to have Alzheimer’s then tell them where to donate towards a cure, or treatments to slow it by a decade.
But apparently your odds go above 30% if you live long enough, so if you could test for being in that cohort I think that result would be too common to actually be devastating.
Pharmaceutical companies have spent something like $50 billion on developing Alzheimer's drugs with, well, the most furtive of straw-grasping to show for it. It's probably the most expensive single disease target (especially as things like cancer are families of diseases)... the failure to have good results isn't for lack of money, and merely throwing more money at it is unlikely to actually make progress towards meaningful treatments.
I just feel the thinking is off, it's like we are trying to treat cuts by removing scabs and scar tissue. We really need deep investigation on the sources, which I feel in many cases are industrial chemicals and how some people's body / immune system respond to them.
Someone always says “merely throwing money at the problem…”
What time period was the money spent? The last 25 years?
The United States spends $1 trillion a year in debt interest. $50 billion is nothing
There's Lecanemab and Donanemab. The effects are modest, however.
This is how we can offer screening.
No it's not, that's a reported mean, presumably with the right number of significant digits.
If you want to criticize the variance/stddev, do so, but you picked the wrong metric if that's what you wanted to complain about.
It's used to refine clinical diagnosis after patients present with severe cognitive decline.
By the time someone gets this test, they have severe problems. The purpose of this test is to assist with the right diagnosis.
If you have a prevalence of 10 in 1000, how do the numbers shake out?
Well, you test all 1,000. Assume 95% accuracy for both false positives and false negatives (i.e., 95% sensitivity and 95% specificity).
Of the 990 you test who don't have the disease, the test will falsely state that about 50 do have it. Yikes!
And of the 10 who do have the disease? You'll miss about 1 of them.
It's not terrible. This is a relatively good number. Diagnostics is just terribly difficult.
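That arithmetic, spelled out (95% sensitivity and specificity are the assumption in the example above, not measured figures for this test):

```python
# Worked example: 10-in-1,000 prevalence, assumed 95% sensitivity/specificity.
sick, healthy = 10, 990
sens = spec = 0.95

false_positives = (1 - spec) * healthy   # 49.5 -> "about 50"
missed = (1 - sens) * sick               # 0.5  -> "about 1" at worst
true_positives = sens * sick             # 9.5

# Of all positive results, the fraction that are real:
ppv = true_positives / (true_positives + false_positives)
print(f"false positives ~{false_positives:.0f}, missed ~{missed:.1f}")
print(f"chance a positive result is real: {ppv:.0%}")  # ~16%
```

Under these assumptions a positive result is real only about one time in six, which is why such a test works as a triage step feeding a more definitive workup rather than as a standalone diagnosis.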
"A narrative review on the effects of a ketogenic diet on patients with Alzheimer's disease"
https://www.sciencedirect.com/science/article/pii/S127977072...
"Effects of ketogenic diet on cognitive function of patients with Alzheimer's disease: a systematic review and meta-analysis"
And anecdotes from the field:
https://www.youtube.com/watch?v=s86CFw0qhVc
Revolutionizing Assisted Living: Hal Cranmer's Ketogenic & Carnivore Approach to Senior Wellness / Metabolic Mind