Deploying an Azure AI agent without leaving the private network
Building an Azure AI Foundry agent that reads internal data, triggers controlled actions, and remains operable inside a private network architecture.
An Azure AI agent becomes useful when it goes beyond a simple chatbot. The practical use case is not to ask a model to answer general questions, but to give it a clear scope: read internal data, reason over a request, call an approved tool, produce an operational trace, and remain inside a controlled network architecture. In an organization that limits Internet exposure, this distinction matters.
The scenario here is concrete. An infrastructure team wants an agent that can help diagnose simple incidents on internal services. The agent must query a documentation base indexed in Azure AI Search, read a small amount of technical metadata, then trigger a controlled action through a Logic App or an internal API. It must not have broad network access, must not freely call the Internet, and must not hold generic rights on the Azure subscription.
The goal is therefore not to create an autonomous agent that can do everything. The goal is to build a bounded agent, connected to precise tools, with separated identities, logs, and an explainable network path.
Define the use case before the tools
The first trap is to start with tools. An agent can call Azure AI Search, storage, a Logic App, an API, a connector, or a search engine. But if the use case is not bounded, every additional tool increases risk. For a first production agent, it is better to choose one simple action and one limited knowledge base.
In this scenario, the agent receives a question such as "Why is an internal application no longer reachable from an Azure spoke?" It must not modify the firewall or restart services. It must first produce a guided diagnosis from internal procedures, then propose a collection or verification action.
Initial use case
Help diagnose simple infrastructure incidents
Answer from an internal documentation base
Trigger only collection or verification actions
Produce a readable operations trace
Out of scope at the beginning
Firewall rule changes
Resource deletion or restart
Network configuration changes
Free-form command execution
Uncontrolled Internet access

This framing changes the design. The agent is not an administrator. It is an operations assistant that reads, reasons, asks for confirmation when needed, and calls limited tools.
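That framing can be enforced mechanically before any tool call, not just stated in the system prompt. A minimal sketch, assuming a hypothetical upstream step has already tagged the requested action with a category; the category names below are illustrative, not part of any Azure API:

```python
# Illustrative scope guard: deny by default, and refuse the action
# categories that are explicitly out of scope for the first agent.
ALLOWED_CATEGORIES = {"collect", "verify", "diagnose"}
BLOCKED_CATEGORIES = {
    "firewall_change",
    "resource_delete",
    "resource_restart",
    "network_config_change",
    "free_form_command",
    "internet_access",
}

def check_scope(action_category: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a requested action category."""
    if action_category in BLOCKED_CATEGORIES:
        return False, f"'{action_category}' is explicitly out of scope"
    if action_category not in ALLOWED_CATEGORIES:
        return False, f"'{action_category}' is unknown; refusing by default"
    return True, "in scope"
```

The important design choice is the deny-by-default branch: an action the team never thought about is refused, not executed.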
Separate knowledge and action
A useful agent often combines two tool families. Knowledge tools provide context. Action tools trigger something. They must be separated from the beginning because they do not carry the same risk.
Knowledge can come from Azure AI Search, populated with internal procedures, runbooks, architecture diagrams, operating rules, or technical articles. Action can go through a Logic App, Function, or internal API that exposes only a few approved operations.
Azure AI Foundry agent
Reasoning model
Strict system instructions
Knowledge tool: Azure AI Search
Action tool: Logic App or internal API
Azure AI Search
Runbook and procedure index
No secrets in documents
Source and date metadata
Logic App or internal API
Limited actions
Validated inputs
Detailed logs
Structured response back to the agent

This separation simplifies troubleshooting. If an answer is wrong, inspect the index and grounding. If an action is risky, inspect the action tool and its permissions. Mixing both in an overly open design makes the agent hard to audit.
Build the documentation index
The first operable component is the Azure AI Search index. Sending a random pile of documents is not enough. Content must be chunked, named, and enriched with useful metadata. A network runbook, DNS procedure, and Active Directory guide should remain distinguishable. Date, scope, and trust level should be visible.
{
"id": "runbook-private-dns-001",
"title": "Azure Private DNS Zone diagnosis",
"category": "networking",
"environment": "production",
"lastReviewed": "2026-04-20",
"source": "internal-runbook",
"content": "Private Azure DNS diagnostic procedure..."
}

The agent should be instructed to reference the internal source it used in the operational trace, at least in logs or conversation metadata. This does not mean displaying public citations in the final interface. It means an operator must be able to understand why the agent proposed a given step.
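One practical way to prepare such documents is to chunk each runbook before indexing and copy the metadata onto every chunk, so each indexed fragment stays attributable to its source. A minimal sketch; the field names mirror the example above, and the chunk size is an arbitrary assumption:

```python
def chunk_runbook(doc: dict, max_chars: int = 1000) -> list[dict]:
    """Split a runbook into fixed-size chunks, copying the metadata
    onto each chunk so grounding remains traceable to its source."""
    text = doc["content"]
    chunks = []
    for i, start in enumerate(range(0, len(text), max_chars)):
        chunks.append({
            "id": f"{doc['id']}-chunk-{i:03d}",
            "title": doc["title"],
            "category": doc["category"],
            "environment": doc["environment"],
            "lastReviewed": doc["lastReviewed"],
            "source": doc["source"],
            "content": text[start:start + max_chars],
        })
    return chunks

# Example input, matching the document shape shown above.
example = {
    "id": "runbook-private-dns-001",
    "title": "Azure Private DNS Zone diagnosis",
    "category": "networking",
    "environment": "production",
    "lastReviewed": "2026-04-20",
    "source": "internal-runbook",
    "content": "Private Azure DNS diagnostic procedure... " * 100,
}
chunks = chunk_runbook(example)
```

A real pipeline would split on headings or sentences rather than raw character counts, but the invariant is the same: no chunk without its source and review date.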
An unmanaged documentation base produces a confused agent. If runbooks are obsolete, contradictory, or undated, the agent amplifies that disorder. Agentic architecture therefore starts with minimal documentation hygiene.
Bound actions with a narrow API
A common mistake is to give the agent broad access to an administration API. For a first use case, the action tool should be narrow. A Logic App or Function can expose an operation such as checkDnsResolution, collectVmNetworkState, or getServiceHealth. The agent does not choose a free command. It calls a predefined action.
{
"action": "checkDnsResolution",
"input": {
"hostname": "app01.internal.example",
"sourceNetwork": "spoke-prod-01"
},
"allowedEnvironments": ["preproduction", "production"],
"requiresApproval": false
}

The input contract must be validated on the tool side. The agent can misunderstand a request, receive ambiguous instructions, or produce an incomplete call. The Logic App or API must not blindly execute whatever the model asks. It must validate format, scope, environment, and authorization.
Sensitive actions require human approval. The agent can prepare the action, explain impact, and request confirmation. The tool should refuse execution if approval is missing.
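Tool-side validation of that contract can be sketched as follows. The action names, environments, and approval flag mirror the example payload above; the hostname pattern and the choice of which action is "sensitive" are assumptions for illustration:

```python
import re

# Actions this tool agrees to run, regardless of what the model asks for.
ALLOWED_ACTIONS = {"checkDnsResolution", "collectVmNetworkState", "getServiceHealth"}
ALLOWED_ENVIRONMENTS = {"preproduction", "production"}
SENSITIVE_ACTIONS = {"collectVmNetworkState"}  # illustrative choice
HOSTNAME_RE = re.compile(r"^[a-z0-9]([a-z0-9.-]{0,253}[a-z0-9])?$")

def validate_call(payload: dict, environment: str, approved: bool) -> tuple[bool, str]:
    """Validate an agent tool call before execution; never trust the model."""
    if payload.get("action") not in ALLOWED_ACTIONS:
        return False, "unknown action"
    if (environment not in payload.get("allowedEnvironments", [])
            or environment not in ALLOWED_ENVIRONMENTS):
        return False, "environment not allowed"
    if not HOSTNAME_RE.match(payload.get("input", {}).get("hostname", "")):
        return False, "invalid hostname"
    if payload["action"] in SENSITIVE_ACTIONS and not approved:
        return False, "human approval required"
    return True, "ok"

# The payload from the example above passes; a free-form command does not.
example_call = {
    "action": "checkDnsResolution",
    "input": {"hostname": "app01.internal.example", "sourceNetwork": "spoke-prod-01"},
    "allowedEnvironments": ["preproduction", "production"],
}
ok, reason = validate_call(example_call, "production", approved=False)
```

Note that the approval check is refused inside the tool, which is exactly the behavior the text asks for: the agent can prepare a sensitive action, but the tool will not execute it without confirmation.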
Use managed identity as a boundary
Identity is a central guardrail. The agent and its tools should not rely on static secrets stored in code. Managed identities make it possible to grant precise rights to Azure components, then audit them through Entra ID and Azure RBAC.
A minimal model separates the identity that reads knowledge from the identity that triggers action. Access to Azure AI Search does not carry the same risk as access to a Logic App collecting VM metadata. If one identity is compromised or misconfigured, its scope should remain limited.
Identity agent-knowledge-reader
Reads the Azure AI Search index
No action rights on Azure resources
Identity agent-action-runner
Calls a specific Logic App or Function
No generic Contributor rights
Permissions limited to the useful resource group
Human identity
Approves sensitive actions
Reads logs and audit traces

Generic Contributor rights should be avoided. An agent with overly broad permissions becomes an automation surface that is hard to control. Permissions should follow the actions actually exposed, not future ambitions.
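The same least-privilege split can be written down as data and checked in code before any role assignment is wired up. The identity names come from the list above; the permission strings are illustrative labels, not Azure RBAC role names:

```python
# Illustrative map of each identity to the only operations it needs.
# Anything not listed here is, by construction, not granted.
IDENTITY_PERMISSIONS: dict[str, set[str]] = {
    "agent-knowledge-reader": {"search:read"},
    "agent-action-runner": {"logicapp:invoke"},
}

def identity_may(identity: str, operation: str) -> bool:
    """Deny by default: an unknown identity or operation gets nothing."""
    return operation in IDENTITY_PERMISSIONS.get(identity, set())
```

Keeping this map small and explicit is the code-level equivalent of refusing generic Contributor rights: if a new action is needed, it appears here first, as a visible diff.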
Keep the private network path explainable
In a Naxaya-style environment, the agent cannot be designed as if it sat outside the network. The design must state where private endpoints exist, which Private DNS Zones are used, which subnets are required, and how the agent reaches Azure AI Search, Storage, Cosmos DB, or the internal API.
Azure AI Foundry Agent Service can fit into a private networking approach. The key architecture point is to make the path readable: agent, dedicated subnet, private endpoints, Private DNS Zones, then private resources. If the team cannot explain that path, troubleshooting will be difficult during the first incident.
Agent subnet
Runs the agent in an isolated environment
Private endpoint subnet
Foundry
Azure AI Search
Storage
Cosmos DB if used
Internal API if exposed through Private Link
Private DNS Zones
privatelink.openai.azure.com
privatelink.search.windows.net
privatelink.blob.core.windows.net
privatelink.services.ai.azure.com

Private DNS becomes a direct dependency of the agent. If the agent cannot resolve the index, storage, or internal API, it cannot work. Resolution tests must therefore be part of deployment, as with any private application.
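Those resolution tests can be automated at deployment time with nothing more than the standard library. A minimal sketch; the two hostnames are illustrative placeholders, and a real check would target the actual service FQDNs that sit behind each privatelink zone:

```python
import socket

def resolves(hostname: str) -> bool:
    """Return True if the hostname resolves from this network position."""
    try:
        socket.getaddrinfo(hostname, None)
        return True
    except socket.gaierror:
        return False

# Illustrative placeholders for the service FQDNs the agent depends on.
REQUIRED_HOSTNAMES = [
    "mysearch.search.windows.net",
    "mystorage.blob.core.windows.net",
]

if __name__ == "__main__":
    for host in REQUIRED_HOSTNAMES:
        print(f"{host}: {'ok' if resolves(host) else 'FAILED'}")
```

Run from the agent subnet, a failure here points at the Private DNS Zone or endpoint wiring before anyone blames the model.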
Log decisions and actions
An operations agent must be observable. It is not enough to know that an answer was produced. The team must know which user made the request, which documents were consulted, which tool was called, which parameters were sent, which response the tool returned, and whether human approval was required.
{
"requestId": "agt-20260427-001",
"user": "operator@example.com",
"agent": "infra-diagnostic-agent",
"knowledgeSources": ["runbook-private-dns-001"],
"toolCalled": "checkDnsResolution",
"approved": false,
"result": "private_dns_resolution_failed"
}

These logs must be usable by the operations team. An agent that acts without a trace quickly becomes suspicious. An agent that leaves a clear trail can be progressively integrated into operations practices.
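Emitting that trace as one structured record per request keeps it queryable in whatever log platform the team already uses. A minimal sketch; the field names mirror the example record above:

```python
import json
import logging

logger = logging.getLogger("agent-audit")

def log_trace(request_id: str, user: str, agent: str,
              knowledge_sources: list[str], tool_called: str,
              approved: bool, result: str) -> str:
    """Emit one structured, machine-parsable audit record per request."""
    record = {
        "requestId": request_id,
        "user": user,
        "agent": agent,
        "knowledgeSources": knowledge_sources,
        "toolCalled": tool_called,
        "approved": approved,
        "result": result,
    }
    line = json.dumps(record, sort_keys=True)
    logger.info(line)
    return line
```

One JSON object per request, with stable field names, is enough for an operator to answer "who asked, what was read, what was called, what came back" without grepping free-form text.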
Test the agent like a critical application
Validation must not focus only on answer quality. Failure paths must be tested. What happens if Azure AI Search does not answer? If the internal API returns an error? If the user asks for an out-of-scope action? If the internal document is obsolete? If private DNS no longer resolves the expected zone?
Pre-production tests
Question covered by a valid runbook
Question absent from the documentation base
Out-of-scope action request
Internal API unavailable
Private DNS broken toward Azure AI Search
Identity without sufficient permission
Sensitive action without approval

A good agent refuses cleanly. It should say when it does not know, when a source is missing, when the action is not authorized, and when human approval is required. That behavior matters more than a long answer.
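Those failure paths can be pinned down as assertions against the agent's refusal behavior. A purely illustrative sketch: the respond function below is a hypothetical stub standing in for the real agent, and the refusal strings are assumptions, but the shape of the test suite is the point:

```python
# Hypothetical stub standing in for the deployed agent; what matters
# is that each failure path produces an explicit refusal, not a guess.
def respond(question: str, index: dict[str, str], api_available: bool) -> str:
    if not api_available:
        return "REFUSE: action tool unavailable, no diagnosis attempted"
    source = next(
        (doc for doc, text in index.items() if question.lower() in text.lower()),
        None,
    )
    if source is None:
        return "REFUSE: no internal source covers this question"
    return f"ANSWER grounded in {source}"

index = {"runbook-private-dns-001": "private dns resolution"}
```

The pre-production checklist above then becomes a handful of assertions that run on every change: a covered question is answered with its source, and every failure path ends in a readable refusal.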
Conclusion
An operable Azure AI agent is not a model with many tools. It is an automation application with a scope, sources, identities, a network path, logs, and explicit refusals. Azure AI Foundry Agent Service provides the agentic layer, tools, and managed hosting, but the quality of the result mainly depends on the operational design around the agent.
The right first use case is not an agent that can administer everything. It is an agent that can answer a simple incident, rely on a clean documentation index, call a bounded action, remain inside the private network, and leave a trace. That foundation already creates value without introducing a new uncontrolled surface in the infrastructure.