ISSUE 06 | What We Assume Is Safe
Security Conscience: Enterprise Cyber Weekly
Issue #06 • Week of Feb 2, 2026
Security Tip of the Week

🔐 NTLM Reality Check
Enable enhanced NTLM auditing in Windows 11 24H2 or Server 2025 and let it run for 48 hours. If you see authentication attempts from systems you don’t recognize, or to services you thought were Kerberos-only, your authentication fallback is broader than your architecture diagrams suggest.

I had a conversation last week with our infrastructure lead about why we still see NTLM traffic in certain segments. The answer was what it always is: “It just works, and we’re not sure what would break if we turned it off.” That sentence has been quietly expanding our attack surface for years, and I’m not the only one who’s heard it in architecture reviews or incident debriefs.

This week’s stories point to the same uncomfortable truth: attackers have stopped trying to break through our defenses and started living inside the infrastructure we’ve implicitly trusted. Update mechanisms for open-source tools. Authentication protocols that persist as invisible fallbacks. ERP systems we can’t patch on an adversary’s timeline. Development pipelines that ship code before security teams even know it exists. These aren’t edge cases. They’re the operational reality in every enterprise I know, including mine.

The hard part isn’t admitting we have exposure. It’s that we’ve been deferring the work required to see where trust is placed, who owns it when it breaks, and whether we can respond faster than attackers can exploit it. That gap between “we assume this is safe” and “we can prove this is defensible” is where breaches are starting now. And unlike perimeter compromises, these don’t announce themselves with alerts. They look like normal operations until the money’s gone or the production line has been offline for six weeks.
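Once the 48-hour audit window has produced data, the first-pass triage is simple to script. Here is a minimal Python sketch, assuming you have exported the audit events to CSV; the field names, sample rows, and known-host list are all hypothetical placeholders, so adapt them to whatever your SIEM or `wevtutil` export actually emits:

```python
import csv
import io

# Hypothetical CSV export of NTLM audit events. Field names are assumptions,
# not a real Windows export schema; map them to your environment's columns.
SAMPLE = """source_host,target_service,auth_protocol
APP01,FILESRV,NTLM
UNKNOWN-PC,SQL01,NTLM
APP01,SQL01,Kerberos
"""

# Hosts your architecture diagrams say should exist. Anything else using
# NTLM is exactly the surprise the 48-hour audit is meant to surface.
KNOWN_HOSTS = {"APP01", "APP02"}

def unexpected_ntlm(csv_text, known_hosts):
    """Return (source, target) pairs where NTLM came from an unrecognized host."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [
        (row["source_host"], row["target_service"])
        for row in rows
        if row["auth_protocol"] == "NTLM" and row["source_host"] not in known_hosts
    ]

print(unexpected_ntlm(SAMPLE, KNOWN_HOSTS))  # [('UNKNOWN-PC', 'SQL01')]
```

The same filter inverted (known hosts still emitting NTLM to services you believed were Kerberos-only) catches the second failure mode the tip describes.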
Top Story

ERP Systems Are Now Board-Level Liabilities
Source: https://cyberscoop.com/boardroom-erp-cybersecurity-sap-ransomware-resilience-op-ed/

TLDR
The September 2025 Jaguar Land Rover ransomware attack halted production for six weeks and cost over $1.2 billion, exposing how enterprise resource planning systems like SAP have become single points of catastrophic failure. Attackers now exploit SAP vulnerabilities within 72 hours of patch releases while enterprise patch cycles take weeks, and regulatory frameworks increasingly hold board members personally liable for ERP security failures.

Why it Matters to Enterprises
This isn’t just about SAP vulnerabilities. It’s about the fundamental assumption that your operational core is defensible under traditional security models. When 90% of the Fortune 500 run on SAP and threat actors exploit patches faster than enterprises can deploy them, you’re operating in a permanent state of exposure. The real issue is architectural: ERP systems were never designed as hardened attack surfaces, yet they now hold financial data, PII, supply chain logic, and payroll execution in a single blast radius. Boards can no longer defer to IT when regimes like the SEC’s disclosure rules, NIS2, and DORA explicitly assign personal liability for inadequate oversight. The gap between “we have firewalls” and “we can survive a targeted ERP takedown” is where companies like Stoli Group file for bankruptcy.

What to do this Week
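The timing math in the TLDR is worth making explicit: a 72-hour weaponization window against a multi-week patch cycle leaves a long stretch in which a public exploit targets an unpatched estate. A back-of-envelope sketch, where the durations are illustrative assumptions rather than measured values:

```python
from datetime import timedelta

def exposure_window(weaponization: timedelta, patch_sla: timedelta) -> timedelta:
    """Time an exploited-in-the-wild flaw stays unpatched in your estate.

    If attackers weaponize faster than you deploy, the difference is days
    of exposure with a working exploit in circulation.
    """
    return max(patch_sla - weaponization, timedelta(0))

# Illustrative numbers: 72-hour weaponization vs. a four-week patch cycle.
weaponized = timedelta(hours=72)
monthly_cycle = timedelta(weeks=4)

print(exposure_window(weaponized, monthly_cycle).days)  # 25
```

Shrinking the SLA is the only variable on the defender’s side of that subtraction, which is why the article frames ERP patching as a timing problem rather than a compliance checkbox.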
Big Stories

APT28 Weaponizes Office Patch in 48 Hours

What Happened
APT28 began exploiting CVE-2026-21509 within days of Microsoft’s emergency patch on January 26, delivering malicious documents to Ukrainian government agencies and European targets. Malicious RTF files bypassed Office OLE protections and deployed dual-track malware: MiniDoor for email exfiltration and PixyNetLoader leading to Covenant framework implants. CERT-UA discovered documents created just one day after patch disclosure, suggesting rapid reverse engineering of Microsoft’s fix rather than existing zero-day tooling.
Source: “Russian hackers exploit recently patched Microsoft Office bug in attacks”

Why It Matters
The 48-hour weaponization window demonstrates that patch disclosure itself becomes an intelligence asset for sophisticated actors, shrinking defensive response time to near zero. If your enterprise patch cadence runs weekly or monthly, you’re operating inside attacker decision cycles for critical Office flaws. APT28’s use of COM hijacking, scheduled tasks restarting explorer.exe, and cloud C2 through Filen.io shows privilege escalation and persistence happening below traditional endpoint visibility, particularly in environments that trust signed Office processes by default. This matters because it reframes patching as a competitive timing problem, not a compliance exercise.

Microsoft Ends NTLM’s 33-Year Run as Default Auth

What Happened
Microsoft announced it will disable NTLM authentication by default in the next major Windows Server and client releases, ending reliance on a protocol introduced in 1993 that remains vulnerable to relay, replay, and pass-the-hash attacks due to weak cryptography.
The three-phase transition begins with enhanced auditing tools now available in Windows 11 24H2 and Server 2025, continues with Kerberos expansion features like IAKerb and a Local Key Distribution Center in late 2026, and concludes with NTLM disabled by default while remaining available for explicit policy-based re-enablement.

Why It Matters
This isn’t just about deprecating another legacy protocol. It’s about forcing enterprises to confront authentication assumptions that have been silently failing for years. NTLM relay attacks let threat actors force compromised devices to authenticate against attacker-controlled servers, escalate privileges, and take complete domain control, yet the protocol persists as an automatic fallback in environments where Kerberos connectivity assumptions don’t hold. The real exposure is that most enterprises have no visibility into where NTLM is actually invoked until it breaks. Features shipping in H2 2026, like the Local KDC, specifically address scenarios where domain controller connectivity previously forced NTLM fallback, which means those “temporary” workarounds and edge cases have been silently expanding attack surface. Microsoft leaving NTLM available for explicit re-enablement is the tell: they expect resistance, which means enterprises need to map their actual authentication flows now, before they’re forced into reactive firefighting when defaults flip.

Quick Hits

Notepad++ Breach Shows Hosting Is Now the Soft Target
Chinese state-sponsored attackers compromised Notepad++’s hosting provider for six months, selectively redirecting updates to telecom and government targets while retaining access through stolen credentials even after the initial breach was patched. Update channels for trusted open source tools are now espionage infrastructure because enterprises assume they bypass scrutiny.
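One partial mitigation when the update channel itself can’t be trusted, as in the Notepad++ case, is pinning the expected digest of an update artifact obtained out of band (vendor signing page, internal allowlist) rather than from the same host that served the file. A minimal Python sketch; the payload bytes and pin here are fabricated for illustration, and a real deployment would also verify code signatures:

```python
import hashlib

def verify_update(payload: bytes, expected_sha256: str) -> bool:
    """Compare a downloaded artifact against a digest pinned out of band.

    The whole point is that the pin does NOT come from the download host:
    a compromised hosting provider can swap both the file and any checksum
    it serves alongside it.
    """
    return hashlib.sha256(payload).hexdigest() == expected_sha256

# Fabricated artifact and pin for the sketch.
payload = b"installer bytes"
pin = hashlib.sha256(b"installer bytes").hexdigest()

print(verify_update(payload, pin))      # True
print(verify_update(b"tampered", pin))  # False
```

Selective redirection like the attackers used here is exactly what digest pinning catches: the targeted victims receive different bytes than everyone else, so their hashes stop matching.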
https://thehackernews.com/2026/02/notepad-hosting-breach-attributed-to.html

Supply Chain Risk Now Starts With “Move Fast”
Code now moves from development to production in hours while enterprises still assume software suppliers are secure by default, creating vulnerable applications that nation-states exploit through known flaws and third-party integrations. Speed without verification scales risk faster than security teams can map it.
https://cyberscoop.com/move-fast-break-things-cybersecurity-supply-chain-security-op-ed/

AI Agents Are Identity Sprawl at Machine Speed
Enterprises that finally audit discover hundreds of AI agents, with custom GPTs and MCP servers persisting outside traditional IAM platforms that weren’t designed for autonomous, decentralized identities. Agent identities bypass lifecycle processes and accumulate privilege without review, in environments where most security teams lack the visibility to correlate their access.

Wrapping it all up

The pattern this week isn’t about new attack techniques. It’s about the widening gap between how fast enterprises need to move and how much visibility they actually have into what’s running, who owns it, and whether it can be defended when it matters. We’ve built architectures on assumptions that made sense when patch cycles were measured in quarters and authentication was binary. But attackers now weaponize patches in 48 hours, legacy protocols persist as invisible fallbacks, and AI agents proliferate faster than identity teams can inventory them. The operational tempo has changed, and our governance models haven’t kept up. What makes this uncomfortable is that these aren’t problems you solve by buying another platform or running another audit. The exposure lives in the space between what security teams know exists and what business units are already running in production.
It shows up in architecture reviews where no one can answer “where is NTLM still firing?” or “how many AI agents are accessing customer data?” It surfaces in board meetings when someone asks about ERP resilience and the answer is “we’re working on a plan.” The common thread is deferred visibility, and the cost is compounding.

The enterprises that navigate this aren’t the ones with perfect inventories or zero legacy debt. They’re the ones that stopped treating visibility as a prerequisite for action and started treating it as the action itself. Map what’s actually running, even if it’s messy. Assign ownership, even if it’s uncomfortable. Test whether you can survive without the systems you assume are untouchable. The alternative is discovering your assumptions were wrong during an incident, when the only people with visibility are the attackers.
