Zeron open-sources two frameworks to compute human-aware cyber risk
Recommended for you

U.S. Treasury to publish AI cyber-risk guidance for financial firms
The U.S. Treasury will roll out a set of six practical resources this February, created by a public-private oversight group to help financial firms manage cyber and AI risk. The materials aim to set baseline practices across governance, data stewardship, transparency and fraud controls to support safer AI adoption in banking and related services.
White House cyber office moves to embed security into U.S. AI stacks
The Office of the National Cyber Director is developing an AI security policy framework to bake defensive controls into AI development and deployment chains, coordinating with OSTP and informed by recent automated threat activity. The effort intersects with broader debates about AI infrastructure — including calls for shared public compute, interoperability standards, and certification regimes — that could shape how security requirements are funded, enforced and scaled.
UK: Concentric AI presses for context-first controls to tame GenAI data risk
Concentric AI says rapid GenAI use is widening enterprise data risk as employees share sensitive material with external models, and urges context-aware discovery, application-layer enforcement and model governance to close the gap. The vendor frames these measures as practical complements to broader industry moves toward provenance, zero-trust and runtime observability to make AI adoption auditable and defensible.
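To make the "application-layer enforcement" idea concrete, here is a minimal, hypothetical Python sketch of a context-aware check on outbound prompts; the pattern names and labels are illustrative placeholders, not Concentric AI's product or API.

```python
import re

# Hypothetical sensitivity patterns; a real deployment would use richer,
# context-aware classification rather than two regexes.
SENSITIVE_PATTERNS = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def classify(prompt: str) -> set:
    """Return labels for sensitive data detected in an outbound prompt."""
    return {label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)}

def enforce(prompt: str) -> str:
    """Block or allow the prompt at the application layer, before it reaches an external model."""
    labels = classify(prompt)
    if labels:
        return "blocked: prompt contains " + ", ".join(sorted(labels))
    return "allowed: forwarded to external model"

if __name__ == "__main__":
    print(enforce("Summarize this press release for me."))
    print(enforce("Why does key-AbCdEf1234567890XyZ fail on login?"))
```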
SOC Workflows Are Becoming Code: How Bounded Autonomy Is Rewriting Detection and Response
Security operations centers are shifting routine triage and enrichment into supervised AI agents to manage extreme alert volumes, while human analysts retain control over high-risk containment. This architectural change shortens investigation timelines and reduces repetitive workload but creates new governance and validation requirements to avoid costly mistakes and canceled projects.
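As a rough sketch of the bounded-autonomy split described above, the Python below auto-runs low-risk enrichment while routing high-risk containment to a human approval queue; all names (ActionRequest, route, RISK_THRESHOLD) are hypothetical and not drawn from any specific SOC platform.

```python
from dataclasses import dataclass

@dataclass
class ActionRequest:
    name: str          # e.g. "enrich_ip", "isolate_host"
    risk_score: float  # 0.0 (benign) .. 1.0 (destructive)
    target: str

# Actions scoring above this threshold always require a human decision.
RISK_THRESHOLD = 0.5

def route(action: ActionRequest) -> str:
    """Auto-run low-risk enrichment; queue high-risk containment for analyst approval."""
    if action.risk_score <= RISK_THRESHOLD:
        return f"auto-executed {action.name} on {action.target}"
    return f"queued {action.name} on {action.target} for analyst approval"

if __name__ == "__main__":
    print(route(ActionRequest("enrich_ip", 0.1, "203.0.113.7")))
    print(route(ActionRequest("isolate_host", 0.9, "workstation-42")))
```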
ION Group Founder Warns Investors Misjudge AI Risk as Software Stocks Lose $2 Trillion
Andrea Pignataro of ION Group says investors are fixating on feature‑level automation while underestimating systemic risk from embedding models into institutional workflows; equity markets have pared roughly $2 trillion from software valuations amid that reassessment. The more consequential exposures, he argues, are governance, contractual liability and integration costs once models are handed the language of operations.
Patch Rush, Penalties and Power Plays: This Week’s Cybersecurity Events
A fast-exploited Fortinet flaw and an agentic-AI vulnerability in ServiceNow forced urgent remediation, while telecoms, a university, and a logistics provider faced data and security crises that drew enforcement and public scrutiny. National agencies issued OT and zero-trust guidance and investors poured $136M into defense-focused software, highlighting shifting incentives toward resilience and regulatory accountability.
Zero Trust in 2026: Identity, AI and the long, pragmatic climb from theory to practice
Zero trust has moved from slogan to operational pressure, with identity control now the linchpin and AI both amplifying attacks and offering detection gains. Recent work on agent identity fabrics — pairing human-readable discovery with cryptographic attestations and policy-as-code — shows how identity-first designs can harden autonomous workflows and materially reduce blast radius.
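A minimal sketch of the identity-first pattern mentioned above, assuming a toy HMAC attestation in place of real PKI and a small policy-as-code allowlist; none of these names come from an actual agent identity product.

```python
import hashlib
import hmac

# Toy shared key; a real identity fabric would bind agents to keys via PKI and attestation services.
SHARED_KEY = b"demo-key"

# Policy-as-code: which agent identities may perform which actions.
POLICY = {
    "triage-agent": {"read_alerts", "enrich_ip"},
    "response-agent": {"read_alerts", "isolate_host"},
}

def attest(agent_id: str) -> str:
    """Issue a demo attestation tag binding the agent identity to the shared key."""
    return hmac.new(SHARED_KEY, agent_id.encode(), hashlib.sha256).hexdigest()

def authorize(agent_id: str, tag: str, action: str) -> bool:
    """Verify the attestation, then apply the policy before allowing the action."""
    if not hmac.compare_digest(attest(agent_id), tag):
        return False  # identity could not be attested
    return action in POLICY.get(agent_id, set())

if __name__ == "__main__":
    tag = attest("triage-agent")
    print(authorize("triage-agent", tag, "enrich_ip"))     # True: attested and in policy
    print(authorize("triage-agent", tag, "isolate_host"))  # False: outside this agent's policy
```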

OpenAI Frames ChatGPT as a Tool to Speed Scientific Discovery, Backed by Usage Data
OpenAI says conversational AI is becoming a practical research assistant and released anonymized usage figures showing sharp growth in technical-topic interactions through 2025. Industry demos and competing vendor announcements, including agentic developer tools and strong commercial uptake, underscore a broader shift toward models that can act, observe outcomes, and accelerate knowledge work, but validation and governance remain pressing challenges.