Azure VNet Integration for App Service and Functions is not a Private Endpoint
A practical note on Azure App Service and Functions VNet Integration, focused on outbound reachability, DNS, routing, NSGs, UDRs, NAT, and the design mistakes that appear when teams assume the app itself becomes privately exposed.
VNet Integration is one of those Azure features that gets explained too quickly. Many designs start with a correct need, such as reaching a private SQL server, a Key Vault behind a private endpoint, or an internal API reachable only through peering or ExpressRoute. The first implementation step is often correct too. The app is integrated with a subnet. The confusion starts right after that.
VNet Integration does not make your App Service or Function App privately reachable from the network. It gives the application outbound reachability into or through a virtual network. Inbound exposure is a different subject, handled by features such as Private Endpoint, access restrictions, or an App Service Environment depending on the design.
This article keeps the scope narrow on purpose. The goal is to design VNet Integration cleanly for App Service and Azure Functions, understand what changes in routing and DNS, and avoid the common mistake of treating it like a generic “put the app inside the VNet” switch.
What this article assumes
The examples below assume a multi-tenant App Service or Function App that needs outbound access to private resources. The target services can be in the same VNet, a peered VNet, on premises through ExpressRoute or VPN, or behind private endpoints.
The reference design is simple.
- One dedicated integration subnet for App Service or Functions
- No other resources deployed into that subnet
- Private endpoints kept on separate subnets
- DNS resolution planned before the app is integrated
- Route and security policy validated before all outbound traffic is forced into the VNet
A minimal variable set keeps the commands readable.
export RG_APP=rg-app-demo
export RG_NET=rg-network-demo
export LOCATION=westeurope
export APP_NAME=naxaya-web-demo
export PLAN_NAME=asp-naxaya-demo
export FUNC_NAME=naxaya-func-demo
export VNET_NAME=vnet-app-demo
export INTEGRATION_SUBNET=snet-appsvc-integration
export PE_SUBNET=snet-private-endpoints
Start with the design rule that saves the most time later
Treat VNet Integration as outbound application connectivity.
Treat Private Endpoint as inbound private access to the platform service.
If the design goal is “the app must call a private database”, VNet Integration is part of the answer.
If the design goal is “users or internal clients must reach the app over a private IP”, VNet Integration is not the answer on its own.
That distinction sounds simple, but it is where many broken designs first pass review.
Build the VNet and keep the integration subnet dedicated
The integration subnet should not become a catch-all subnet for random workloads. Keep it dedicated to the integration feature so route tables, NSGs, and troubleshooting stay readable.
az group create -n ${RG_NET} -l ${LOCATION}
az network vnet create -g ${RG_NET} -n ${VNET_NAME} -l ${LOCATION} --address-prefixes 10.42.0.0/16 --subnet-name ${INTEGRATION_SUBNET} --subnet-prefixes 10.42.10.0/27
az network vnet subnet create -g ${RG_NET} --vnet-name ${VNET_NAME} -n ${PE_SUBNET} --address-prefixes 10.42.20.0/27
Validate the subnet before integrating anything.
az network vnet subnet show -g ${RG_NET} --vnet-name ${VNET_NAME} -n ${INTEGRATION_SUBNET} --query '{name:name,addressPrefix:addressPrefix,delegations:delegations}'
az network vnet subnet show -g ${RG_NET} --vnet-name ${VNET_NAME} -n ${PE_SUBNET} --query '{name:name,addressPrefix:addressPrefix,privateEndpointNetworkPolicies:privateEndpointNetworkPolicies}'
Do not mix the integration subnet with private endpoints, jump hosts, or appliance-style resources just because the address space looks available. Clean subnet purpose is part of clean troubleshooting.
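The integration feature requires the subnet to be delegated to App Service. The `az webapp vnet-integration add` command can apply the delegation implicitly, but making it explicit documents the subnet's purpose in code. A minimal sketch:

```shell
# Explicitly delegate the integration subnet to App Service plans.
# 'az webapp vnet-integration add' can also do this implicitly; the explicit
# form makes the subnet's single purpose visible in infrastructure code.
az network vnet subnet update \
  -g ${RG_NET} \
  --vnet-name ${VNET_NAME} \
  -n ${INTEGRATION_SUBNET} \
  --delegations Microsoft.Web/serverFarms

# Confirm the delegation before integrating the app.
az network vnet subnet show \
  -g ${RG_NET} --vnet-name ${VNET_NAME} -n ${INTEGRATION_SUBNET} \
  --query 'delegations[].serviceName'
```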
Enable VNet Integration on the app you actually want to route out of
The most direct App Service flow is shown first. The same design logic applies to Function Apps on supported plans.
az group create -n ${RG_APP} -l ${LOCATION}
az appservice plan create -g ${RG_APP} -n ${PLAN_NAME} --sku P1v3 --is-linux
az webapp create -g ${RG_APP} -p ${PLAN_NAME} -n ${APP_NAME} --runtime "NODE|22-lts"
Integrate the app with the dedicated subnet.
az webapp vnet-integration add -g ${RG_APP} -n ${APP_NAME} --vnet ${VNET_NAME} --subnet ${INTEGRATION_SUBNET}
Check the resulting configuration instead of assuming the portal blade tells the whole story.
az webapp vnet-integration list -g ${RG_APP} -n ${APP_NAME}
az resource show -g ${RG_APP} -n ${APP_NAME} --resource-type Microsoft.Web/sites --query 'properties.virtualNetworkSubnetId'
If the app is a Function App, the integration is still about outbound connectivity. The details of plan support and surrounding networking options differ, but the design question is the same: what private destinations must the code reach, and what routing and DNS chain will make that predictable?
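The Function App flow mirrors the App Service one. A sketch assuming an Elastic Premium plan; the plan name and storage account name below are illustrative and must be unique in your environment:

```shell
# Sketch: VNet Integration for a Function App on an Elastic Premium plan.
# The plan name and storage account name are illustrative assumptions.
az functionapp plan create -g ${RG_APP} -n asp-naxaya-func-demo --sku EP1 --is-linux

# Functions require a storage account; the name must be globally unique.
az storage account create -g ${RG_APP} -n stnaxayafuncdemo -l ${LOCATION} --sku Standard_LRS

az functionapp create -g ${RG_APP} -p asp-naxaya-func-demo -n ${FUNC_NAME} \
  --storage-account stnaxayafuncdemo --runtime node --functions-version 4

# Same design step as for App Service: outbound integration into the dedicated subnet.
az functionapp vnet-integration add -g ${RG_APP} -n ${FUNC_NAME} \
  --vnet ${VNET_NAME} --subnet ${INTEGRATION_SUBNET}
```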
If you need private inbound access to the app, say it explicitly
This is the decision point that gets blurred in many diagrams.
A VNet-integrated app can reach private resources. That does not mean a client inside the network can now reach the app over a private IP.
If the inbound requirement is private access to the application itself, plan a Private Endpoint for the app or a different hosting model such as an App Service Environment when the platform boundary requires it.
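The inbound side, when it is actually required, looks roughly like this. A sketch; the endpoint, connection, and link names are illustrative assumptions:

```shell
# Sketch: private inbound access to the app itself via Private Endpoint.
# Endpoint, connection, and DNS link names are illustrative assumptions.
APP_ID=$(az webapp show -g ${RG_APP} -n ${APP_NAME} --query id -o tsv)

az network private-endpoint create \
  -g ${RG_NET} -n pe-${APP_NAME} \
  --vnet-name ${VNET_NAME} --subnet ${PE_SUBNET} \
  --private-connection-resource-id ${APP_ID} \
  --group-id sites \
  --connection-name pec-${APP_NAME}

# App Service private endpoints resolve through privatelink.azurewebsites.net,
# so that zone must exist and be linked to the VNets that will call the app.
az network private-dns zone create -g ${RG_NET} -n privatelink.azurewebsites.net
az network private-dns link vnet create -g ${RG_NET} -n link-appsvc \
  -z privatelink.azurewebsites.net -v ${VNET_NAME} -e false
```

Note that this is a separate feature with its own DNS zone; it does not replace the outbound integration configured earlier.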
A clean review sentence is this one.
“VNet Integration solves outbound private reachability from the app. It does not by itself make the app privately reachable inbound.”
DNS is usually where a correct network design still fails
The most common failure mode is not the integration itself. It is name resolution after integration.
Example: the app must call a Storage account, SQL server, or Key Vault over a private endpoint. The network path can exist and still fail if the app resolves the public FQDN to the public address or receives a broken private DNS response.
For resources behind private endpoints, validate the private DNS zone links first.
az network private-dns zone create -g ${RG_NET} -n privatelink.database.windows.net
az network private-dns link vnet create -g ${RG_NET} -n link-vnet-app-demo -z privatelink.database.windows.net -v ${VNET_NAME} -e false
If the destination is on premises, validate where the app resolves names after integration. If the destination is in Azure behind private DNS, validate zone links, record sets, and any forwarder chain before blaming routing.
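When you also manage the target's private endpoint, a DNS zone group keeps the A record in the zone in sync automatically instead of relying on manually created record sets. A sketch assuming an existing SQL private endpoint named pe-sql-demo:

```shell
# Sketch: attach the private DNS zone to the target's private endpoint so the
# A record is created and maintained by the platform.
# 'pe-sql-demo' is an assumed, pre-existing private endpoint name.
az network private-endpoint dns-zone-group create \
  -g ${RG_NET} \
  --endpoint-name pe-sql-demo \
  -n default \
  --private-dns-zone privatelink.database.windows.net \
  --zone-name sql
```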
A very practical check is to expose a small diagnostic endpoint in the app that resolves and tests the target host, then compare that result to a VM in the same VNet. If the VM resolves the private address but the app resolves a public one or fails entirely, the problem is often in DNS design rather than IP routing.
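The resolution half of that comparison can be scripted so the exact same check runs on a VM shell and in the app's SSH console. A minimal sketch; the target FQDN is an illustrative assumption:

```shell
# Print the first IPv4 address a host resolves to, or nothing on failure.
# Run this unchanged from a VM in the VNet and from the app runtime
# (for example via 'az webapp ssh'), then compare the two answers.
resolve_first() {
  getent ahostsv4 "$1" 2>/dev/null | awk '{print $1; exit}'
}

# 'sqlsrv-demo.database.windows.net' is an illustrative target FQDN.
resolve_first sqlsrv-demo.database.windows.net
```

If the VM prints a private address and the app prints a public one, the integration path is fine and the DNS chain seen by the app is not.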
Decide whether only application traffic or all outbound traffic must use the VNet
This is one of the most important design choices.
Default VNet Integration behavior is not the same as saying every outbound dependency is now forced into the VNet path. Azure lets you configure broader outbound routing, and that changes much more than application calls. It can affect container image pulls, content share access, backup traffic, and managed identity token acquisition depending on the configuration.
Enable all-traffic routing only when you actually mean it.
az webapp config set -g ${RG_APP} -n ${APP_NAME} --generic-configurations '{"vnetRouteAllEnabled": true}'
Then verify the setting.
az webapp config show -g ${RG_APP} -n ${APP_NAME} --query '{vnetRouteAllEnabled:vnetRouteAllEnabled}'
This is where some teams create a new outage after believing they improved security. The app reaches the database now, but startup, image pull, content mount, or some control-plane related dependency is suddenly subject to NSGs and UDRs that were never tested for those flows.
NSGs and UDRs on the integration subnet are real controls, not decorative controls
Once outbound traffic is routed through the VNet, the integration subnet becomes part of the operational path. That means NSGs and route tables need to reflect real intent.
A simple route table example makes the point.
az network route-table create -g ${RG_NET} -n rt-appsvc-egress
az network route-table route create -g ${RG_NET} --route-table-name rt-appsvc-egress -n default-to-firewall --address-prefix 0.0.0.0/0 --next-hop-type VirtualAppliance --next-hop-ip-address 10.42.30.4
az network vnet subnet update -g ${RG_NET} --vnet-name ${VNET_NAME} -n ${INTEGRATION_SUBNET} --route-table rt-appsvc-egress
If you do this, test name resolution, target reachability, outbound internet dependencies, and managed identity flows right after the change. Do not wait for the next deployment to discover that “secure egress” also blocked part of the platform behavior your app relied on.
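An NSG on the integration subnet should express the same deliberate intent as the route table. A minimal sketch; rule names, priorities, and the destination prefix are illustrative assumptions:

```shell
# Sketch: an explicit egress rule for the integration subnet.
# Rule name, priority, and the destination prefix (here, the private endpoint
# subnet) are illustrative assumptions; adapt them to your real flows.
az network nsg create -g ${RG_NET} -n nsg-appsvc-egress

az network nsg rule create -g ${RG_NET} --nsg-name nsg-appsvc-egress \
  -n allow-sql-private --priority 100 --direction Outbound --access Allow \
  --protocol Tcp --destination-address-prefixes 10.42.20.0/27 \
  --destination-port-ranges 1433

az network vnet subnet update -g ${RG_NET} --vnet-name ${VNET_NAME} \
  -n ${INTEGRATION_SUBNET} --network-security-group nsg-appsvc-egress
```

Start permissive enough to keep platform flows working, verify, then tighten; a deny-first NSG written before testing is how the route-all outages described above usually begin.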
NAT Gateway solves a different problem than VNet Integration
Teams often want predictable outbound public IPs for third-party allow lists. That is not the same thing as private reachability to internal resources.
VNet Integration gives the app a path into or through the VNet. NAT Gateway gives predictable public egress for traffic that exits through the internet path from the integrated subnet. These features complement each other, but they do not replace each other.
A basic NAT attachment on the integration subnet looks like this.
az network public-ip create -g ${RG_NET} -n pip-appsvc-nat --sku Standard
az network nat gateway create -g ${RG_NET} -n nat-appsvc --public-ip-addresses pip-appsvc-nat
az network vnet subnet update -g ${RG_NET} --vnet-name ${VNET_NAME} -n ${INTEGRATION_SUBNET} --nat-gateway nat-appsvc
Use this when the problem is egress identity. Do not use it as a mental substitute for private access to Azure PaaS resources in the same region or for inbound privacy of the app.
A realistic end-to-end pattern combines VNet Integration with Private Endpoints
A common and valid pattern looks like this.
- App Service or Function App uses VNet Integration on a dedicated subnet
- Target resources such as SQL, Storage, Key Vault, or internal APIs are private by design
- Private endpoints live on separate subnets
- Private DNS zones are linked to the application VNet
- Optional NSGs, UDRs, and NAT are applied deliberately after validation
That combination is often what people meant when they first said “put the app in the VNet”, but the actual mechanics are split across more than one feature.
Validation after the first successful deployment
A successful blade status is not enough. Validate the behavior that matters.
az webapp vnet-integration list -g ${RG_APP} -n ${APP_NAME}
az webapp config show -g ${RG_APP} -n ${APP_NAME} --query '{vnetRouteAllEnabled:vnetRouteAllEnabled}'
az network vnet subnet show -g ${RG_NET} --vnet-name ${VNET_NAME} -n ${INTEGRATION_SUBNET}
az network private-dns link vnet list -g ${RG_NET} -z privatelink.database.windows.net
az network route-table show -g ${RG_NET} -n rt-appsvc-egress
az network nat gateway show -g ${RG_NET} -n nat-appsvc
Then test from the application runtime itself.
- Resolve the target private FQDN from inside the app path
- Open a real connection to the private dependency
- Confirm whether outbound public calls still work if they are still required
- Validate managed identity or secret retrieval if routing was broadened
- Compare behavior before and after applying NSGs or UDRs
If you only test from a VM in the VNet, you can still miss an application-specific DNS or routing problem.
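The first two runtime checks can be done directly from the app's SSH console without installing any client tools. A sketch; the target FQDN and port are illustrative assumptions:

```shell
# Run inside the app container, for example after:
#   az webapp ssh -g ${RG_APP} -n ${APP_NAME}
# The target FQDN and port 1433 are illustrative assumptions.
TARGET=sqlsrv-demo.database.windows.net

# 1. Resolve the target and check whether the answer is a private address.
getent ahostsv4 "$TARGET" | awk '{print $1; exit}'

# 2. Open a real TCP connection to the private dependency.
# bash's /dev/tcp works even when no client tools are installed in the image.
timeout 5 bash -c "exec 3<>/dev/tcp/$TARGET/1433" && echo "tcp ok" || echo "tcp failed"
```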
Failure modes that appear after the first green status
The most common ones are predictable.
The team assumes the app is now privately exposed, but inbound traffic is still public because only outbound integration was configured.
The app is integrated successfully, but private resources remain unreachable because the private DNS chain was never completed.
Route-all is enabled to satisfy one security requirement, then undocumented platform dependencies fail because NSGs or UDRs were written too aggressively.
The integration subnet is reused for unrelated resources, which makes route analysis and policy changes harder than they need to be.
A NAT Gateway is added for fixed egress IP, then treated as if it solved private Azure PaaS access.
The architecture review says “it is in the VNet” without distinguishing outbound integration from inbound private access.
The architecture decisions worth writing down
Before standardizing this pattern, make these choices explicit.
Does the app need private outbound access only, or private inbound access too?
Will only application traffic use the VNet path, or will all outbound traffic be forced into it?
Which DNS path resolves private Azure services, and which path resolves on-premises names?
Which NSGs and UDRs are part of the design, and which are optional controls to add later?
Do you need predictable public egress via NAT Gateway, or is the primary problem private reachability?
Is App Service or Functions on a multi-tenant plan still the right hosting model, or does the boundary require an App Service Environment?
If those questions are not documented, teams tend to retrofit the answers later by trial and error.
References
- Microsoft Learn, Integrate your app with an Azure virtual network
- Microsoft Learn, Enable virtual network integration in Azure App Service
- Microsoft Learn, Manage Azure App Service virtual network integration routing
- Microsoft Learn, Azure Functions networking options
- Microsoft Learn, Use Private Endpoints for Apps
- Microsoft Learn, Integrate Azure services with virtual networks for network isolation