Azure DNS Private Resolver and hybrid forwarders without fragile name resolution

A practical guide to Azure DNS Private Resolver, inbound and outbound endpoints, rulesets, and hybrid forwarding patterns for private DNS and on premises name resolution.

20 Apr 2026 · Azure · DNS · Networking · Hybrid · Private Endpoint · Private Resolver

DNS usually starts to drift long before anyone calls it a design problem. A hybrid environment still resolves names, a few test queries still work, and a first Private Endpoint looks healthy in the portal. Then a second zone arrives, conditional forwarding becomes inconsistent, one team adds custom DNS rules in a spoke, and the path between on premises, Azure virtual networks, and private zones starts behaving differently depending on the source of the query.

Azure DNS Private Resolver is useful precisely because it removes a class of DNS virtual machines that used to exist only to relay, recurse, and forward queries between environments. That does not make the design automatic. The real work is deciding which side should ask which resolver, where rules belong, how private zones are linked, and how to validate name resolution after routing, peering, and Private Endpoint changes.

This article focuses on the pattern that matters most in mixed environments. Azure workloads must resolve selected on premises zones, on premises workloads must resolve Azure private zones, and the whole chain must stay predictable after adding new services such as Storage, SQL, Key Vault, or internal application names.

What Azure DNS Private Resolver actually changes

Azure DNS Private Resolver is a managed resolver service deployed inside a virtual network. It gives you two building blocks.

An inbound endpoint exposes an IP address inside Azure that other DNS servers can target. This is the piece typically used by on premises DNS forwarders when they need answers for Azure private DNS zones. Microsoft describes inbound endpoints as the destination for DNS queries sent into the resolver from other networks.

An outbound endpoint is used when the resolver needs to forward queries out of Azure according to a DNS forwarding ruleset. This is the piece that lets Azure workloads resolve selected on premises namespaces without deploying custom resolver VMs. Unlike inbound endpoints, outbound endpoints do not expose an IP address of their own, and they must still live in a dedicated subnet.

A DNS forwarding ruleset is where the forwarding logic lives. You define the suffixes that should be forwarded and the destination DNS servers that should receive those queries. Rulesets are then linked to virtual networks so that workloads in those VNets use the forwarding behavior for matching namespaces.
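Rule selection inside a ruleset is suffix based: when more than one rule could match a queried name, the most specific (longest) matching suffix applies, so a narrow rule for corp.example.local can coexist with a broader rule for example.local. A minimal sketch of that selection behavior, with illustrative suffixes (match_rule is a local helper, not an Azure command):

```bash
# Illustrative sketch: among rules whose domain suffix matches the
# queried name, the longest (most specific) suffix wins.
match_rule() {
  local name="$1"; shift
  local best=""
  for suffix in "$@"; do
    local s="${suffix%.}"           # rules are written with a trailing dot
    case "$name" in
      *".$s"|"$s")
        if [ "${#s}" -gt "${#best}" ]; then best="$s"; fi ;;
    esac
  done
  echo "$best"
}

match_rule dc01.corp.example.local 'example.local.' 'corp.example.local.'
# prints corp.example.local
```

Azure performs this selection inside the ruleset itself; the helper only makes the precedence visible when reasoning about overlapping rules.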

The service is valuable because it replaces VM based forwarding patterns that required patching, monitoring, availability design, and ad hoc firewall exceptions. Microsoft positions it specifically as the managed option for hybrid resolution between Azure VNets and on premises environments.

The reference pattern worth using first

For most environments, the cleanest starting point is a hub virtual network that hosts these components.

A DNS Private Resolver.

One dedicated subnet for the inbound endpoint.

One dedicated subnet for the outbound endpoint.

Private DNS zones linked where needed, or centrally linked in a hub depending on your operating model.

A ruleset on the outbound side for on premises zones such as corp.example.local or mgmt.example.local.

On premises DNS servers configured with conditional forwarders for Azure private zones and Private Endpoint zones toward the inbound endpoint IP.

This pattern maps well to Microsoft guidance for centralized resolution in hub and spoke environments, where the resolver can sit in the hub and serve multiple linked VNets through forwarding rulesets and private zone design.

A simple naming goal before any deployment

Before creating anything, define which namespaces must be resolved from which side.

A minimal matrix often looks like this.

| Source | Must resolve | Example |
| --- | --- | --- |
| Azure workloads | On premises zones | dc01.corp.example.local |
| On premises workloads | Azure private zones | app01.priv.contoso.internal |
| On premises workloads | Azure Private Endpoint names | myvault.vault.azure.net via private mapping |
| Azure workloads | Private Endpoint names | mystorageaccount.blob.core.windows.net |

If this matrix is not explicit, DNS design starts drifting into guesswork. It becomes easy to create rulesets that solve only one direction while leaving the reverse path implicit and fragile.

Network prerequisites that should be explicit

The resolver is not difficult to deploy, but it is easy to deploy into the wrong shape.

You need a virtual network with dedicated subnets for inbound and outbound endpoints. Microsoft requires the subnets for these endpoints to be delegated to Microsoft.Network/dnsResolvers, and no other resources should live in them. Outbound endpoints do not expose an IP address and still require their own dedicated subnet.

A practical address plan can look like this.

hub-vnet:             10.40.0.0/16
snet-dnspr-inbound:   10.40.10.0/28
snet-dnspr-outbound:  10.40.10.16/28
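As a quick sanity check on the sizing above: Azure reserves five addresses in every subnet, and the documented minimum size for resolver endpoint subnets is a /28, so a /28 per endpoint is the smallest workable choice.

```bash
# Subnet math for the endpoint subnets: a /28 holds 16 addresses,
# of which Azure reserves 5, leaving 11 usable.
prefix=28
total=$(( 1 << (32 - prefix) ))
usable=$(( total - 5 ))
echo "/$prefix: $total addresses, $usable usable"
```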

If the resolver will serve hybrid traffic, the hub must also have a network path to on premises DNS servers through VPN or ExpressRoute.

Deploying the subnets with Azure CLI

The exact deployment model can be ARM, Bicep, Terraform, or the portal. For a runbook style article, Azure CLI is a good reference because it makes the objects visible.

bash
RG="rg-net-dns-prod"
LOCATION="westeurope"
VNET="vnet-hub-net-prod"
INBOUND_SUBNET="snet-dnspr-inbound"
OUTBOUND_SUBNET="snet-dnspr-outbound"

az group create -n $RG -l $LOCATION

az network vnet create -g $RG -n $VNET -l $LOCATION --address-prefixes 10.40.0.0/16 --subnet-name $INBOUND_SUBNET --subnet-prefixes 10.40.10.0/28

az network vnet subnet create -g $RG --vnet-name $VNET -n $OUTBOUND_SUBNET --address-prefixes 10.40.10.16/28

Then delegate the subnets.

bash
az network vnet subnet update -g $RG --vnet-name $VNET -n $INBOUND_SUBNET --delegations Microsoft.Network/dnsResolvers

az network vnet subnet update -g $RG --vnet-name $VNET -n $OUTBOUND_SUBNET --delegations Microsoft.Network/dnsResolvers

The dedicated subnet requirement is not decorative. If someone later wants to place a VM, appliance, or another network function in the same subnet, the design is already off track.

Creating the resolver and endpoints

Create the resolver first, then the inbound and outbound endpoints.

bash
RESOLVER="dnspr-hub-prod"
INBOUND_EP="inbound-hub-prod"
OUTBOUND_EP="outbound-hub-prod"

az network dns-resolver create -g $RG -n $RESOLVER -l $LOCATION --virtual-network /subscriptions/<subscription-id>/resourceGroups/$RG/providers/Microsoft.Network/virtualNetworks/$VNET

az network dns-resolver inbound-endpoint create -g $RG --dns-resolver-name $RESOLVER -n $INBOUND_EP -l $LOCATION --ip-configurations '[{"privateIpAllocationMethod":"Dynamic","subnet":{"id":"/subscriptions/<subscription-id>/resourceGroups/'$RG'/providers/Microsoft.Network/virtualNetworks/'$VNET'/subnets/'$INBOUND_SUBNET'"}}]'

az network dns-resolver outbound-endpoint create -g $RG --dns-resolver-name $RESOLVER -n $OUTBOUND_EP -l $LOCATION --subnet /subscriptions/<subscription-id>/resourceGroups/$RG/providers/Microsoft.Network/virtualNetworks/$VNET/subnets/$OUTBOUND_SUBNET

Check the inbound endpoint address after deployment.

bash
az network dns-resolver inbound-endpoint show -g $RG --dns-resolver-name $RESOLVER -n $INBOUND_EP --query 'ipConfigurations[0].privateIpAddress' -o tsv

That IP becomes the target for conditional forwarding from on premises DNS.

Linking private zones still matters

A resolver does not replace private DNS zones. It helps queries reach the right place, but the zones must still exist and be linked correctly.

For a Private Endpoint backed Key Vault, Storage account, or SQL server, you still need the correct privatelink zone design and the right records inside it. Microsoft’s Private Endpoint DNS guidance remains relevant because the resolver only transports the query path. The record ownership and zone model still decide whether the final answer is correct.

An example for Storage looks like this.

bash
ZONE="privatelink.blob.core.windows.net"
LINK_NAME="link-hub-vnet-blob"

az network private-dns zone create -g $RG -n $ZONE

az network private-dns link vnet create -g $RG -z $ZONE -n $LINK_NAME -v /subscriptions/<subscription-id>/resourceGroups/$RG/providers/Microsoft.Network/virtualNetworks/$VNET --registration-enabled false

If you are using centralized DNS, decide early whether the private zones live only in the hub model or whether some zones remain application scoped. A mixed model often works, but only if the ownership is explicit.

Forwarding Azure to on premises with rulesets

Now configure Azure side forwarding for on premises namespaces.

Suppose Azure workloads must resolve corp.example.local and mgmt.example.local through on premises DNS servers 192.168.10.10 and 192.168.10.11.

Create the ruleset.

bash
RULESET="dnsfw-hub-prod"

az network dns-resolver forwarding-ruleset create -g $RG -n $RULESET -l $LOCATION --outbound-endpoints /subscriptions/<subscription-id>/resourceGroups/$RG/providers/Microsoft.Network/dnsResolvers/$RESOLVER/outboundEndpoints/$OUTBOUND_EP

Create the forwarding rules.

bash
az network dns-resolver forwarding-rule create -g $RG --ruleset-name $RULESET -n corp-example-local --domain-name 'corp.example.local.' --target-dns-servers '[{"ipAddress":"192.168.10.10","port":53},{"ipAddress":"192.168.10.11","port":53}]'

az network dns-resolver forwarding-rule create -g $RG --ruleset-name $RULESET -n mgmt-example-local --domain-name 'mgmt.example.local.' --target-dns-servers '[{"ipAddress":"192.168.10.10","port":53},{"ipAddress":"192.168.10.11","port":53}]'

Then link the ruleset to the VNets whose workloads need these forwarders.

bash
SPOKE_VNET_ID="/subscriptions/<subscription-id>/resourceGroups/rg-app-prod/providers/Microsoft.Network/virtualNetworks/vnet-app-prod"

az network dns-resolver forwarding-ruleset vnet-link create -g $RG --ruleset-name $RULESET -n link-vnet-app-prod --virtual-network $SPOKE_VNET_ID

This is the point many teams miss. Creating the ruleset in the hub is not enough. The consuming VNets still need to be linked to the ruleset so their workloads use those forwarding rules. Microsoft documents rulesets and VNet links as separate objects for exactly this reason.

Forwarding on premises to Azure private zones

The reverse direction is different.

On premises DNS servers do not consume Azure rulesets directly. Instead, they send queries to the inbound endpoint IP of the Azure resolver through a standard conditional forwarder.

A Windows DNS example looks like this.

powershell
Add-DnsServerConditionalForwarderZone -Name 'privatelink.blob.core.windows.net' -MasterServers 10.40.10.4 -ReplicationScope 'Forest'

Add-DnsServerConditionalForwarderZone -Name 'privatelink.vaultcore.azure.net' -MasterServers 10.40.10.4 -ReplicationScope 'Forest'

The target IP above is the inbound endpoint private IP. Replace it with the address returned by Azure in your deployment.

This direction is what makes on premises clients able to resolve names inside Azure private zones without a custom VM based forwarding layer. Microsoft’s hybrid DNS guidance explicitly uses this model to resolve Azure private DNS zones from on premises environments.

Validation from Azure workloads

A design is not real until it has query paths you can validate.

From a VM in a VNet linked to the ruleset, test on premises names.

bash
nslookup dc01.corp.example.local
nslookup repo01.mgmt.example.local

dig dc01.corp.example.local

dig @168.63.129.16 dc01.corp.example.local

If the forwarding ruleset is applied correctly, the Azure workload should receive an answer through the Azure platform resolver path without having to point directly to on premises DNS.

Also verify the application path, not only a test VM path. A common mistake is validating from a single diagnostic VM while the actual App Service, AKS node pool, or private workload uses a different DNS chain.

Validation from on premises workloads

From on premises, validate that Azure private names resolve through the inbound endpoint.

powershell
Resolve-DnsName mystorageaccount.blob.core.windows.net
Resolve-DnsName myvault.vault.azure.net
Resolve-DnsName myinternalapp.priv.contoso.internal

For blob.core.windows.net, look carefully at the CNAME chain and the returned private address. Microsoft’s Private Endpoint DNS model relies on public names eventually resolving through the correct privatelink path when private DNS is configured correctly.
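From a Linux validation host, the same inspection can be scripted: fail when the answer chain never passes through a privatelink name, or when the final address is not private. check_private_chain is a local helper, and the sample answer below is illustrative rather than captured from a real deployment.

```bash
# Expects dig +short style output: CNAME chain lines first, final
# address last. Flags answers that skip privatelink or end public.
check_private_chain() {
  local answer="$1"
  echo "$answer" | grep -q 'privatelink\.' || { echo "no privatelink in chain"; return 1; }
  local ip
  ip=$(echo "$answer" | tail -n1)
  case "$ip" in
    10.*|192.168.*|172.1[6-9].*|172.2[0-9].*|172.3[0-1].*)
      echo "private: $ip" ;;
    *)
      echo "PUBLIC: $ip"; return 1 ;;
  esac
}

check_private_chain 'mystorageaccount.privatelink.blob.core.windows.net.
10.40.20.5'
# prints private: 10.40.20.5
```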

The design mistake that appears most often

The most common failure is assuming that all Private Endpoint related names should be forwarded to Azure the same way, without considering the exact zone and service behavior.

Storage is a good example. Blob, file, queue, and table use different privatelink zones. A design that forwards only one zone can appear correct until another storage subresource is introduced. Microsoft publishes the private DNS zone values per service specifically because these mappings are not interchangeable.
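One way to avoid forwarding only a single storage zone is to generate the on premises conditional forwarder commands for the whole subresource set in one place. The list below covers the common storage subresources, but the authoritative set, including services beyond storage, is Microsoft's per-service private DNS zone table; 10.40.10.4 stands in for your inbound endpoint IP.

```bash
# Generate conditional forwarder commands for each storage
# subresource zone, so no privatelink zone is forgotten later.
INBOUND_IP="10.40.10.4"   # placeholder: use your inbound endpoint IP
for sub in blob file queue table dfs; do
  echo "Add-DnsServerConditionalForwarderZone -Name 'privatelink.$sub.core.windows.net' -MasterServers $INBOUND_IP -ReplicationScope 'Forest'"
done
```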

Another recurring mistake is placing custom DNS servers in Azure and then forgetting that some workloads still use the Azure platform resolver path while others depend on the custom DNS chain. That split can survive for months before someone notices the same name resolves differently from different subnets or services.

Centralized versus distributed resolver placement

A hub deployed resolver is usually the right default, but not every environment should centralize everything.

A centralized resolver makes sense when DNS governance is centralized, hybrid connectivity is already hub based, and multiple application VNets need the same on premises forwarding behavior.

A more distributed approach can make sense when business units own separate environments, peering is constrained, or the blast radius of DNS changes must stay smaller.

Microsoft provides architectural guidance for both centralized and distributed models, which is a good reminder that the service does not force a single topology. The topology still has to match the operating model.

Checks worth keeping in a runbook

These checks catch most bad changes early.

bash
az network dns-resolver show -g $RG -n $RESOLVER -o table
az network dns-resolver inbound-endpoint list -g $RG --dns-resolver-name $RESOLVER -o table
az network dns-resolver outbound-endpoint list -g $RG --dns-resolver-name $RESOLVER -o table
az network dns-resolver forwarding-ruleset list -g $RG -o table
az network dns-resolver forwarding-rule list -g $RG --ruleset-name $RULESET -o table

And from validation hosts.

bash
nslookup mystorageaccount.blob.core.windows.net
nslookup myvault.vault.azure.net
nslookup dc01.corp.example.local

resolvectl status || true

Keep the expected answer source documented next to each name. The question is never only whether a name resolves. The question is whether it resolves from the right source through the right path.
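That documentation can itself be executable. A sketch, with resolve() stubbed using illustrative names and addresses; in a real runbook it would shell out to something like dig +short "$1" | tail -n1.

```bash
# Keep the expected address prefix next to each name and compare.
# resolve() is a stub here; replace it with a real lookup.
resolve() {
  case "$1" in
    mystorageaccount.blob.core.windows.net) echo 10.40.20.5 ;;    # stub answer
    dc01.corp.example.local)                echo 192.168.10.20 ;; # stub answer
  esac
}

check() {
  local name="$1" expected_prefix="$2"
  local ip; ip=$(resolve "$name")
  case "$ip" in
    "$expected_prefix"*) echo "OK   $name -> $ip" ;;
    *)                   echo "FAIL $name -> $ip (expected $expected_prefix...)" ;;
  esac
}

check mystorageaccount.blob.core.windows.net 10.40.
check dc01.corp.example.local 192.168.10.
```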

Errors that look like network problems but are actually DNS design problems

A Private Endpoint exists but on premises still resolves the public IP because the right privatelink zone was never forwarded.

Azure workloads can resolve one on premises namespace but not another because only one suffix was added to the ruleset.

The resolver exists in a hub, but the consuming spoke VNet was never linked to the forwarding ruleset.

The inbound endpoint was deployed, but on premises DNS was pointed at the wrong address or the route back to Azure was never validated.

A team expects the resolver to create private DNS records automatically for arbitrary internal zones. It does not. It forwards queries. Zone ownership and record lifecycle still need to be designed.

Where this service is a strong fit and where it is not

Azure DNS Private Resolver is a strong fit when you want managed hybrid name resolution without maintaining forwarding VMs, and when your challenge is primarily controlled forwarding between on premises namespaces and Azure private DNS.

It is a poor substitute for general DNS governance. It will not clean up fragmented private zone ownership, inconsistent naming, or applications that were already built against the wrong assumptions. It removes infrastructure burden. It does not remove architectural ambiguity.

References

  • Microsoft Learn, Azure DNS Private Resolver overview
  • Microsoft Learn, Resolve Azure and on premises domains
  • Microsoft Learn, Azure DNS Private Resolver endpoints and rulesets
  • Microsoft Learn, Azure DNS Private Resolver architecture guidance
  • Microsoft Learn, Azure Private Endpoint DNS integration scenarios
  • Microsoft Learn, Azure Private Endpoint private DNS zone values