
Cognitive speed exercises tied to ~25% lower dementia diagnoses after two decades
Key long-term finding — Researchers linked a specific form of computerized processing-speed practice to a meaningful drop in later-life dementia diagnoses: participants who completed the training showed about a 25% lower chance of receiving a dementia diagnosis over roughly 20 years of follow-up in health records. The analysis matched trial participants to federal medical claims to trace outcomes across decades. This is not a short-term improvement; the signal persisted long after the initial training ended.
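To make that linkage-and-comparison idea concrete, here is a minimal, hypothetical Python (pandas) sketch: a trial roster is joined to a claims extract by participant ID, dementia-related diagnosis codes are flagged, and crude diagnosis rates are compared by arm. The column names, codes, and toy numbers are assumptions for illustration only; the actual study relied on formal survival analysis of the linked federal claims, not this simplified calculation.

```python
# Hypothetical sketch of linking a trial roster to claims records and
# comparing dementia-diagnosis rates by randomized arm. Toy data only.
import pandas as pd

# Toy trial roster: participant ID and randomized arm.
roster = pd.DataFrame({
    "participant_id": [1, 2, 3, 4, 5, 6],
    "arm": ["speed_training", "control", "speed_training",
            "control", "speed_training", "control"],
})

# Toy claims extract: one row per diagnosis event over ~20 years of follow-up.
claims = pd.DataFrame({
    "participant_id": [2, 4, 5, 6],
    "dx_code": ["G30.9", "F03.90", "G30.9", "I10"],  # ICD-10-style codes
})

DEMENTIA_CODES = {"G30.9", "F03.90"}  # illustrative subset, not the study's code list

# Flag participants with at least one dementia-related claim.
dementia_ids = set(
    claims.loc[claims["dx_code"].isin(DEMENTIA_CODES), "participant_id"]
)
roster["dementia_dx"] = roster["participant_id"].isin(dementia_ids)

# Crude diagnosis rate per arm; a published analysis would use survival
# models with censoring, not a simple proportion.
rates = roster.groupby("arm")["dementia_dx"].mean()
relative_reduction = 1 - rates["speed_training"] / rates["control"]
print(rates)
print(f"Relative reduction: {relative_reduction:.0%}")
```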
Intervention details — The original randomized trial enrolled 2,802 older adults and compared several cognitive exercises; only the module focused on accelerating visual processing was linked to the downstream reduction in diagnoses. The training dose associated with the effect was modest: roughly 8–10 hour-long sessions plus at least one follow-up booster. The exercise adapts speed and adds distractions as users improve, reinforcing rapid, automatic responses.
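As a rough illustration of that adaptive mechanic, the sketch below implements a simple staircase loop in Python: display time shrinks after correct responses, grows after errors, and distractors are added once responses get fast. Every parameter and the simulated responses are hypothetical; this is not the actual algorithm used in the trial or in any commercial product.

```python
# Minimal staircase sketch of an adaptive speed-of-processing drill.
# All thresholds, step sizes, and the response model are illustrative.
import random

def run_adaptive_block(trials: int = 20) -> None:
    display_ms = 500          # starting stimulus duration
    min_ms, max_ms = 17, 500  # plausible bounds: one screen refresh to half a second
    distractors = 0

    for t in range(trials):
        # Stand-in for presenting a stimulus and scoring the response;
        # correct answers become less likely as the task speeds up.
        p_correct = 0.55 + 0.45 * (display_ms - min_ms) / (max_ms - min_ms)
        correct = random.random() < p_correct

        if correct:
            display_ms = max(min_ms, int(display_ms * 0.8))   # speed up
            if display_ms < 100:
                distractors = min(distractors + 1, 8)         # add visual clutter
        else:
            display_ms = min(max_ms, int(display_ms * 1.25))  # ease off

        print(f"trial {t+1:2d}: {display_ms:3d} ms, "
              f"{distractors} distractors, {'hit' if correct else 'miss'}")

run_adaptive_block()
```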
Mechanism and real-world use — Investigators suggest the benefit may come from strengthening implicit, automatic processing networks that remain durable over time, similar to motor skills. Commercial platforms have implemented similar drills, and some long-term users report sustained cognitive performance. Still, the finding here rests on linkage between trial enrollment and administrative claims rather than continuous clinical re-evaluations.
Next-phase testing — A separate, larger prevention study is underway to test a higher training dose and repeated sessions; that trial has enrolled about 7,500 people and asks for approximately 45 sessions spread over multiple years. Results from that experiment are expected to clarify whether greater cumulative practice yields proportionally larger reductions in dementia risk.
Practical perspective — Experts emphasize that a relatively small time investment produced measurable, long-horizon gains in this cohort — suggesting low-burden preventive options could scale. At the same time, researchers caution that replication and mechanistic work are needed before reshaping public-health recommendations.