
VPN Strategy in 2026: When You Should Keep It, When You Should Kill It

I am going to start with something that makes vendors uncomfortable: VPNs are not dead.

If your remote access story in 2026 still begins and ends with “everyone logs into the VPN,” that is a problem. But pretending VPN has no place in a modern architecture is just as wrong. Real environments sit in the uncomfortable middle. You have a ZTNA or “secure access” platform, a growing pile of SaaS, more workloads in cloud than anyone wants to admit, and somewhere in the mix a VPN deployment that has survived at least three CIOs.

The useful question is not “VPN or Zero Trust.” The useful question is “Which access pattern actually fits this workload, this data, and this set of constraints?” Sometimes the answer is VPN. Sometimes it is ZTNA. Most of the time you are living with both.

This is a guide for making those choices on purpose instead of inheriting them by default.


How We Got Here

In 2020 a lot of organizations scaled VPN in a panic. Concurrent sessions went from hundreds to thousands. Appliances were stacked, licenses were rushed through, and the success metric was simple: could people log in and work from home?

That emergency buildout quietly solidified into “the way we do remote access.” At the same time, several other things shifted:

  • Identity and device posture replaced network location as the main trust decision.
  • Internal applications moved to SaaS or public cloud, shrinking the “inside” footprint.
  • Outages at DNS providers and access vendors reminded everyone what a single choke point looks like.

And don’t forget the access path that bypasses most of this discussion: automation. Service principals and workload identities can move through your environment without VPN or ZTNA user controls, which is why Workload Identities Are the Real Perimeter.

So now you are left with an architecture that grew out of an emergency and a world that no longer looks like the one it was built for. The job in 2026 is to untangle that, without breaking everything that currently works.


Where VPN Still Earns Its Keep

For all the talk about “VPN is over,” there are real use cases where turning it off would be reckless.

One of the big ones is regulated workloads that still assume network controls. Healthcare, finance, and government environments often have control sets and auditors who expect to see specific network segments, encrypted tunnels, and approved paths between systems. You can absolutely implement the spirit of those controls with a modern Zero Trust approach, but convincing an auditor who has lived their whole career in firewall diagrams takes time and energy.

A classic example is a hospital PACS environment or other medical devices that sit on a dedicated subnet. Those systems are expensive, tied into clinical workflows, and not easily replaced. A VPN landing zone that terminates into that segment, with simple rules and clear logging, may be the thing that keeps both operations and compliance teams comfortable while the long term replacement plan crawls along.

Then there are the ugly legacy applications that simply cannot speak modern identity. Thick clients that talk to servers over random ports. Industrial control systems that expect fixed IP ranges and built-in logins. ERP modules written in an era when SSL offload was the fancy new thing. You can rebuild or replace them, but the price tag is measured in years and millions. Keeping a small, heavily segmented VPN path for a few dozen users is sometimes the only practical bridge from here to there.

Site to site connectivity is another category people tend to conflate with user VPN. A branch office that needs persistent connectivity to a datacenter so it can reach domain controllers, file shares, or internal services is not a candidate for per app ZTNA on every device. Under the covers, most SD-WAN and SASE solutions still rely on tunnels. They just manage and steer them better. That is still VPN technology, just in a more opinionated wrapper.

Short lived contractor access is more mundane but shows up everywhere. A vendor needs access for ninety days to work on an internal portal. They do not handle regulated data. They are one small team. You can fully onboard them into your identity provider, deploy agents, and wire them into your ZTNA fabric. Or you can give them a constrained VPN profile that only reaches a narrow segment, watch it, and revoke it when they are done. If you are honest about the risk profile, the simpler option is often fine.
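One way to express that constrained profile, assuming the contractor connects over WireGuard (an assumption for illustration; any VPN product with per-profile routing supports the same idea), is a client config whose AllowedIPs routes only the portal segment through the tunnel. All names and addresses below are placeholders:

```ini
# Hypothetical 90-day contractor profile. Only 10.20.30.0/24 (an example
# range standing in for the portal segment) is routed through the tunnel.
[Interface]
PrivateKey = <contractor-private-key>
Address = 10.99.0.15/32

[Peer]
PublicKey = <concentrator-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 10.20.30.0/24   # nothing outside the portal segment is routed
PersistentKeepalive = 25
```

Client-side AllowedIPs only shapes routing; real enforcement belongs in server-side firewall rules on the concentrator, paired with a calendar entry to revoke the keys on day ninety.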

Finally, there are highly restricted or non-internet connected environments. Some defense and OT networks are designed around physical and logical isolation. They may use “Zero Trust like” decisioning internally, but when they talk to another enclave it is over rigidly controlled tunnels. In that world VPN is simply a transport. The interesting questions are how you authenticate the endpoints, how you segment once traffic lands, and how you monitor what crosses that boundary.

The common thread in all of these: the use case is narrow, the user population is well understood, and the blast radius is tightly contained. In that context VPN is still a tool worth keeping.


Where VPN Has Become Technical Debt With A Login Prompt

On the other side are scenarios where VPN is not just unnecessary, it is actively getting in the way.

The easiest to spot is SaaS behind VPN. If people have to connect to your VPN to reach Office 365, Salesforce, Workday, or similar services, you are carrying a very expensive habit. You add latency and create a single point of failure in front of platforms that already have strong identity and access controls. The VPN does not meaningfully improve security for those apps. It just means that if your concentrator has a bad day, nobody can reach their email.

The same pattern shows up when VPN is the backbone for a permanent remote workforce. These deployments were built for occasional remote access, not for a company that basically lives on Zoom and Teams. You see it in dropped calls, endless split tunnel arguments, and tickets that oscillate between “my home internet is bad” and “the VPN is slow” without anyone really being sure which it is. Per app access models handle this better because they accept that the internet is the transport and focus on gating each application cleanly.

Security is another weak point. VPN appliances sit on the edge of your network, accept connections from the entire internet, and have a long history of severe vulnerabilities. Even if you are diligent, you are constantly racing patch cycles and hoping you catch each critical issue before someone else does. Add stolen VPN credentials on top of that and you have a perimeter door that attackers actively target and know how to use.

VPNs are also noisy from an operational standpoint. They generate a steady stream of tickets about client installs, OS upgrades, hotel networks that block protocols, and strange routing behavior when two internal ranges overlap. None of this is exotic work. It is simply expensive in human time. If you never quantify it, leadership assumes VPN is “already paid for” and cheap to run.

Underneath all of that sits the core architectural limitation: VPN makes decisions at the network level. Once a user is “inside,” your controls are mostly about where packets can route. That is a coarse way to express intent. It does not tell you which application someone used, what they did inside it, or whether any of it made sense in context. You can layer better logging and microsegmentation on top, but you are always fighting the original assumption that being on an internal IP is a meaningful signal of trust.

In 2026 that assumption is more liability than asset.


Living With Both Without Losing The Plot

Most organizations are not going to flip a switch and walk away from VPN. You are going to live in a hybrid model for a while. The difference between a healthy hybrid and a mess is whether you have a story you can explain in one or two pages.

A healthy pattern usually starts with a simple principle: user facing applications should be reached per app, not per subnet. That means SaaS is accessed directly with strong identity and conditional access. Modern internal web apps sit behind an access proxy or ZTNA platform. Users never think about “connecting” before they work. They just sign in and the right things are available.

VPN is then reserved for the cases that genuinely need network presence. That might be infrastructure management, certain legacy systems, or a very small number of highly specialized workflows. These paths are tightly segmented, documented, and reviewed on a regular cadence. They are not a convenient catch-all for everything that does not fit neatly elsewhere.

The migration to that state happens one application family at a time. First you remove SaaS from VPN. Then you work through internal web apps that already have usable authentication. Then you look at the client-server workloads and decide which can be proxied, which can move to remote app publishing, and which are stuck on VPN until they are replaced. The regulated and OT use cases are explicitly parked in the “VPN for now” column with clear reasoning.

Different regions and business units will move at different speeds. That is fine. Some offices have better connectivity, more modern infrastructure, and stronger local teams. Others are still hanging on to old stacks that nobody wants to touch. A sensible target architecture allows for those differences without losing sight of where you want to land.


A Simple Way To Decide For Each Use Case

When someone says “we need remote access to this thing,” treat it as a small design exercise instead of a rubber stamp.

First, look at the application itself. Does it have a browser front end and any support for SAML, OIDC, or at least a basic external auth pattern? If the answer is yes, it probably belongs behind your access proxy. If the answer is “no, it is a thick client that dials straight into a server on a high port and handles its own logins,” then VPN may be the only realistic choice until the system is replaced.

Second, ask what kind of data lives behind it and what compliance regimes apply. If auditors or regulators explicitly care about network segmentation and tunnel paths, that will shape how bold you want to be. You might still be able to satisfy them with a Zero Trust design, but you need the appetite for that explanation. For lower sensitivity systems there is usually more freedom to optimize for user experience and supportability.

Third, look at the audience. A thousand users hitting an app all day long are a strong argument for getting off VPN. The user experience dividends and support savings will pay back the migration effort. A group of twenty specialists using it a few times a week is a different equation. It might be cheaper and safer to keep those on a narrow VPN path for another refresh cycle.

Finally, be honest about your team and tools. If your identity, endpoint, and app security capabilities are mature, per app access will fit naturally. If your strength is still mostly in network engineering and firewalling, you may need to build new skills before you lean fully into ZTNA. That does not mean you wait forever. It means you factor learning curve into your plan.
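The four checks above can be condensed into a toy triage helper. The fields, cutoffs, and return labels are illustrative assumptions, not a vendor formula; the point is to make the criteria explicit enough to argue about.

```python
# Toy triage helper for the four questions: app compatibility, compliance
# constraints, audience size, and (implicitly) team readiness.
from dataclasses import dataclass


@dataclass
class AccessRequest:
    has_web_frontend: bool            # browser front end available?
    supports_modern_auth: bool        # SAML/OIDC or proxy-friendly auth?
    regulated_network_controls: bool  # auditors expect segments and tunnels?
    daily_users: int                  # rough audience size


def recommend(req: AccessRequest) -> str:
    # Thick clients with no external auth story stay on VPN for now.
    if not (req.has_web_frontend and req.supports_modern_auth):
        return "vpn"
    # Regulated workloads can move, but only with auditor buy-in first.
    if req.regulated_network_controls:
        return "ztna-with-compliance-review"
    # Large, frequent audiences justify paying the migration cost first.
    return "ztna" if req.daily_users >= 100 else "ztna-later"
```

Running a thousand-user SaaS-adjacent web app through this yields "ztna", while a twenty-person thick-client workload lands on "vpn", which matches the reasoning above.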

You will not get a perfectly objective answer from this kind of framework, but you will force the right conversations and avoid “because we have always done it that way” as a justification.


Avoiding The Messy Failure Modes

There are a few migration patterns that repeatedly cause pain.

The first is the big bang cutover. Turning off VPN on a Friday and moving everyone to a new access platform on Monday is a reliable way to discover every undocumented dependency in your environment in about six hours. A phased, boring rollout with overlapping access and clear rollback plans will never get you applause on stage, but it will save your weekend.

The second is underestimating hidden uses of VPN. It is not just humans clicking icons. Background jobs, scripting, integration endpoints, and one-off vendor connections often rely on the same tunnels. If you only interview app owners and never look at traffic, you will miss them. Even if you do both, expect to discover a few after the fact. Build time and patience for that into your plan.

The third is letting the new model feel worse from the user’s point of view. If ZTNA is slower to connect, breaks more often, or forces extra clicks, users will cling to VPN as long as they can. You will end up running two models indefinitely because nobody trusts the new one. Put real effort into performance and sign-in experience. Pilot with people who are willing to give blunt feedback and iterate until they prefer the new path.

The fourth is ignoring your exit strategy from the new platform. The ZTNA market is evolving fast. Tools consolidate, vendors change direction, and your own environment will not look the same in three years. Favor designs that use standard identity protocols and clear, portable policy expressions. Ask up front how you would unwind things if you had to. If you cannot get a straight answer, that is a risk all by itself.


Making The Case In Business Terms

None of this moves if the conversation stays purely technical. You need to be able to explain why changing your remote access model is worth real money and attention.

On the cost side, you can quantify what VPN actually costs you: licenses, appliances or cloud gateways, bandwidth where you hairpin traffic, and the labor to maintain and support it. Help desk ticket data is especially useful here. If you can show how many hours each quarter are consumed by VPN issues, you can attach a real number to “operational drag.”

Then you estimate the future state: ZTNA licensing, the effort to onboard applications, the time your team will spend running it day to day, any new monitoring you need. Migration has its own spike in cost: project work, testing, perhaps vendor or consulting support.
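The comparison above fits in a few lines of arithmetic. Every number below is a placeholder to show the shape of the model, not real pricing; plug in your own licensing, infrastructure, and help desk ticket data.

```python
# Minimal sketch of the VPN-vs-ZTNA cost comparison. All figures are
# illustrative placeholders, not real vendor pricing.
def annual_cost(licenses, infra, tickets_per_quarter, hours_per_ticket, hourly_rate):
    # Support labor is where hidden "operational drag" usually lives.
    support = tickets_per_quarter * 4 * hours_per_ticket * hourly_rate
    return licenses + infra + support


vpn = annual_cost(licenses=120_000, infra=60_000,
                  tickets_per_quarter=450, hours_per_ticket=0.75,
                  hourly_rate=55)
ztna = annual_cost(licenses=180_000, infra=20_000,
                   tickets_per_quarter=150, hours_per_ticket=0.5,
                   hourly_rate=55)

migration_one_time = 250_000  # project work, testing, consulting
payback_years = migration_one_time / max(vpn - ztna, 1)
```

With these made-up inputs the annual saving is modest and payback takes years, which is exactly the kind of result that argues for "chip away" rather than "move aggressively". Different inputs will flip that conclusion; the value is in writing the inputs down.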

On the benefit side, you can talk about fewer support tickets, less downtime tied to VPN issues, and better user experience for the majority of staff. You can also be explicit about security risk: fewer exposed perimeter devices, less reliance on static credentials, and better visibility into who did what. For highly regulated environments, improving the story you tell auditors about access control and logging can also be a real, if indirect, benefit.

Sometimes the math will say “move aggressively.” Sometimes it will say “chip away over the next few years.” Either answer is better than drifting along on infrastructure that was never meant to carry this much weight.


Pulling It Together

VPN is not a villain and ZTNA is not a magic wand. They are just different tools with different assumptions baked in. The mistake is not using VPN at all. The mistake is using it as the default long after your environment stopped matching those assumptions.

A sensible 2026 strategy starts from identity and device posture, uses per app access wherever it reasonably can, and keeps VPN as a constrained option for the cases where network presence is still the only practical choice. It treats every VPN exception as something to be revisited, not something to forget about. That exception management discipline — clear ownership, expiry dates, and regular review — is the same operating model described in Cybersecurity Governance in Practice.

If you can look at your own environment and plainly answer three questions, you are ahead of most:

  • Where does VPN still exist here, exactly?
  • For each of those places, what is the specific reason it has not moved?
  • When are we going to re-check whether that reason is still true?

Once you can say that out loud without hand waving, your VPN is no longer a relic you are stuck with. It is a conscious part of your architecture, kept where it still helps and retired where it does not.
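Those three questions are, in effect, an exception register. A minimal sketch of what tracking them might look like (field names, dates, and entries are all illustrative assumptions):

```python
# Toy VPN exception register: each entry records where VPN remains,
# why, who owns it, and when to re-check whether the reason still holds.
from dataclasses import dataclass
from datetime import date


@dataclass
class VpnException:
    scope: str       # where VPN still exists, exactly
    reason: str      # the specific reason it has not moved
    owner: str       # who answers for it
    review_by: date  # when to re-check the reason


def overdue(exceptions, today):
    return [e for e in exceptions if e.review_by <= today]


register = [
    VpnException("pacs-subnet", "medical devices, no modern auth",
                 "clinical-it", date(2026, 6, 30)),
    VpnException("erp-thick-client", "replacement project in flight",
                 "apps-team", date(2026, 3, 31)),
]
```

Even a spreadsheet with these four columns works; the discipline is the periodic `overdue` check, not the tooling.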

Check out Remote Access in 2026: Stop Arguing VPN vs Zero Trust and Build a Portfolio next.
