OpenStack network types: tenant vs provider networks explained
Introduction
OpenStack networking through Neutron provides two fundamental network types: tenant networks and provider networks. Understanding the differences between these network types, when to use each one, and how to properly connect virtual machines is crucial for designing efficient and scalable cloud infrastructure.
This comprehensive guide explores the technical distinctions, use cases, and implementation strategies for both network types, along with practical examples of VM connectivity and cross-project network sharing.
Tenant networks vs provider networks: core differences
Example logical diagram of provider and tenant networking
Tenant networks
Tenant networks are software-defined virtual networks created by users within their specific OpenStack project (or “tenant”). They are functionally equivalent to a Virtual Private Cloud (VPC) in AWS, providing a private, isolated network space for your virtual machines. This isolation is crucial in a multi-tenant environment, ensuring that different projects sharing the same physical hardware cannot interfere with each other’s network traffic.
To achieve this isolation, tenant networks use encapsulation protocols. In modern OpenStack deployments using OVN (Open Virtual Network), the standard encapsulation method is Geneve. This protocol wraps the original network packets inside another packet, creating an “overlay” network that runs on top of the physical infrastructure. These encapsulated tunnels are terminated on the compute nodes, where the traffic is unwrapped before it reaches the virtual machines.
For external access, tenant networks rely on a virtual router. All outbound traffic from a tenant network is first sent to this router, which performs Network Address Translation (NAT). Crucially, this virtual router’s external gateway is always set on a provider network, which serves as the bridge to the outside world. For a service to be accessible from the internet, it requires a floating IP address, which is a public IP address also allocated from a pool available on a provider network.
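The self-service workflow described above can be sketched with the CLI (the network, subnet, and router names are illustrative, and `public-fip-network` stands in for whatever external provider network your cloud offers):

```shell
# Create a private tenant network and subnet (self-service, no admin rights needed).
openstack network create private-net
openstack subnet create --network private-net \
  --subnet-range 192.168.10.0/24 private-subnet

# Create a virtual router, set its gateway on an external provider network,
# and attach the tenant subnet to it.
openstack router create tenant-router
openstack router set --external-gateway public-fip-network tenant-router
openstack router add subnet tenant-router private-subnet

# Allocate a floating IP from the external network's pool for inbound access.
openstack floating ip create public-fip-network
```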
Provider networks
Provider networks are a direct reflection of existing physical network segments, most often in the form of VLANs, but for training or testing environments, they frequently use untagged frames on a shared physical network, which OpenStack refers to as a “flat” network. To ensure maximum performance, they typically avoid Layer 3 overlay encapsulation, although rare implementations using VXLAN do exist. By relying on direct L2 connectivity, traffic is switched directly by physical switches with minimal latency.
As the direct link to the physical infrastructure, provider networks serve two key roles for tenant workloads: they provide the exit gateway for all NATed traffic from virtual routers, and they are the source from which floating IP addresses are allocated for inbound connectivity. Managing these networks is the task of the cloud administrator, who defines the physical network mappings and subnets. The default gateway for a provider network subnet resides on a physical network device, like a core router or L3 switch.
One of the key advantages of provider networks is the ability to share the same physical subnet with other systems, such as bare-metal servers or other virtualization platforms. To avoid IP address conflicts, the administrator can define allocation pools to reserve specific IP address ranges from a subnet exclusively for OpenStack’s needs.
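As a sketch of such a reservation (the network name, address range, and gateway are illustrative; the gateway `.1` is assumed to live on the physical core router):

```shell
# Reserve only .100-.199 for OpenStack on a subnet shared with
# bare-metal servers and other platforms; the rest stays untouched.
openstack subnet create --network provider-vlan30 \
  --subnet-range 10.20.30.0/24 \
  --gateway 10.20.30.1 \
  --allocation-pool start=10.20.30.100,end=10.20.30.199 \
  provider-vlan30-subnet
```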
Key technical differences
| Aspect | Tenant networks | Provider networks |
|---|---|---|
| Creation | Self-service by project users | Administrative privilege required |
| Encapsulation | Geneve/VXLAN tunneling | Direct VLAN or flat networking |
| Isolation | Software-defined isolation | Physical VLAN isolation |
| External access | NAT + floating IPs required | Direct external connectivity |
| Sharing | Project-scoped by default | Can be shared across projects |
| Scalability | Limited by tunnel endpoints | Limited by physical infrastructure |
Network sharing across projects
OpenStack network sharing models
By default, OpenStack networks are isolated within a project to ensure security in a multi-tenant environment. However, many applications require controlled communication between projects or direct access to physical networks. OpenStack provides several mechanisms to enable this network sharing, primarily by configuring access to provider networks or, in specific cases, tenant networks.
Sharing provider networks
Provider networks are the most frequently shared network type, as they connect projects to the physical infrastructure. Administrators have two main methods for sharing them: globally with all projects or selectively with specific projects using Role-Based Access Control (RBAC).
Global sharing methods (`external` and `shared`)
An administrator can set one of two attributes on a provider network to make it globally available to all projects in the cloud. Each attribute serves a different purpose.
- The `external` network (`--external`): This attribute designates a provider network as the official gateway to outside networks. Its functions are strictly for Layer 3 connectivity:
  - Router gateways: it allows users to set the gateway of their virtual routers on this network, enabling outbound (NAT) traffic from their private tenant networks.
  - Floating IP allocation: it provides the pool of public IP addresses (floating IPs) that users can assign to their VMs for inbound connectivity.

  With a network marked as `external`, non-administrative users cannot attach VMs directly to it. Its function is limited to routing and IP allocation.
- The `shared` network (`--shared`): This attribute provides direct Layer 2 access to all projects. When a network is `shared`, any user can create a port on it and attach a VM directly. This model is used to directly provide a VM with a Layer 2 network segment and a subnet that is routed within the datacenter.
Before creating a provider network, the administrator must define the mapping to the physical infrastructure. The `--provider-physical-network` parameter uses a logical name (e.g., `physnet1`) that must be pre-configured on the relevant OpenStack nodes (compute and network). In an OVN deployment, this mapping is typically defined in the Open vSwitch configuration, where the logical name is associated with a physical bridge that handles the provider traffic (e.g., `ovn-bridge-mappings=physnet1:br-provider`).
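On the host side, this mapping might be configured roughly as follows (run on each compute/gateway node; the bridge name `br-provider` and NIC name `eth1` are illustrative assumptions):

```shell
# Create the provider bridge and attach the physical NIC that carries
# provider VLAN traffic.
ovs-vsctl add-br br-provider
ovs-vsctl add-port br-provider eth1

# Tell OVN that the logical name 'physnet1' maps to this bridge.
ovs-vsctl set open_vswitch . \
  external-ids:ovn-bridge-mappings=physnet1:br-provider
```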
Here is how an administrator would create these networks:
- Creating an `external` network:

  ```shell
  # This network uses the 'physnet1' mapping and is tagged with VLAN 10.
  # It is intended for external router connectivity.
  openstack network create "public-fip-network" \
    --provider-network-type vlan \
    --provider-physical-network physnet1 \
    --provider-segment 10 \
    --external
  ```
- Creating a `shared` network on the same physical infrastructure:

  ```shell
  # This network also uses 'physnet1' but is tagged with a different VLAN, 20.
  # It is intended for direct VM attachment by any project.
  openstack network create "shared-access-vlan20" \
    --provider-network-type vlan \
    --provider-physical-network physnet1 \
    --provider-segment 20 \
    --shared
  ```
It is also possible to create a single network that has both the `--external` and `--shared` attributes. This would allow all projects to use the network for their router gateways and floating IPs, as well as for attaching VMs directly for L2 access, combining both functionalities into one network.
Targeted sharing with RBAC policies
Sharing a network with all projects can be too permissive. For more granular control, administrators use Role-Based Access Control (RBAC) policies.
It’s important to understand the ownership model here. If an administrator creates a network within a specific user’s project, that project automatically has full permissions (effectively acting as both `access_as_shared` and `access_as_external`). Therefore, to provide a project with dedicated but limited access, the best practice is to create the network in an administrative or “technical” project. From there, the administrator can use a specific RBAC policy to share it with the target project with the intended fine-grained permissions.
- Granting Shared Access (`access_as_shared`): This RBAC rule gives a specific project direct Layer 2 access to a network, allowing it to attach VMs. This provides the same functionality as the `--shared` flag but is restricted to the target project.

  ```shell
  # Grant 'db-cluster-project' direct L2 access to 'provider-net-vlan100'
  openstack network rbac create \
    --target-project <db-cluster-project-id> \
    --action access_as_shared \
    --type network \
    <provider-net-vlan100-id>
  ```
- Granting External Access (`access_as_external`): This rule allows a specific project to use the network for its router gateways and floating IPs. This provides the same functionality as the `--external` flag but is restricted to the target project.

  ```shell
  # Allow 'web-apps-project' to use 'provider-net-fips' for its routers and FIPs
  openstack network rbac create \
    --target-project <web-apps-project-id> \
    --action access_as_external \
    --type network \
    <provider-net-fips-id>
  ```
Sharing tenant networks
Sharing a tenant network is less common than sharing a provider network, but it serves specific architectural needs. Since tenant networks are virtual overlays (using Geneve), they are not used for external connectivity.
The primary use case is for multi-project applications. For example, if an application’s web servers reside in `Project-A` and its databases are in `Project-B`, an administrator can share the database network with `Project-A`. This allows the web servers to communicate with the databases over a private network without traversing a router. This type of sharing is also configured using RBAC policies with the `access_as_shared` action, enabling `Project-A` to attach its VMs to the tenant network owned by `Project-B`.
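Such cross-project sharing could be set up like this (project and network IDs are placeholders):

```shell
# Share Project-B's database tenant network with Project-A at Layer 2.
openstack network rbac create \
  --target-project <project-a-id> \
  --action access_as_shared \
  --type network \
  <project-b-db-network-id>

# Verify the resulting policies.
openstack network rbac list
```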
Best Practices and Recommendations
Prioritize tenant networks by default: For the vast majority of workloads, tenant networks should be your first choice. They provide the true “cloud-style” experience with full isolation, security, and self-service capabilities for users. Always start with the assumption that resources will live in a tenant network unless a specific requirement forces a different approach.
Use provider networks only when necessary: The two primary use cases for provider networks are:
Direct L2 connectivity is required: Use a provider network when a virtual machine must share a Layer 2 broadcast domain with an external physical resource, such as a bare-metal database, a storage array, or a licensing server.
NAT is problematic: Certain protocols do not function well when passed through NAT. If your workload relies on services like VoIP (SIP, RTP) or active-mode FTP, placing it on a provider network (either directly or via a Floating IP) can prevent connectivity issues.
Use `--shared` provider networks with extreme caution: Avoid using the `--shared` flag, especially in public or large multi-tenant clouds. A shared network gives all projects direct Layer 2 access, which breaks tenant isolation, increases the risk of security breaches, and can expose the network to broadcast storms or IP conflicts caused by a single tenant. For targeted sharing, always prefer RBAC policies.

Understand IP address abstraction on `external` networks: Tenants using an `--external` network for their router gateway or floating IPs are intentionally kept unaware of the underlying subnet topology. They cannot see the IP address ranges or select a specific floating IP. If a tenant requires a predictable public IP, an administrator must allocate a specific floating IP directly to that project.

Scale IP capacity with multiple subnets: If you exhaust the IPs in a provider network’s subnet, you don’t need to create a new network. You can simply add another subnet to the existing provider network (which is typically a single VLAN). On a `--shared` network, users will see all available subnets and can choose which one to connect their VM to. On an `--external` network, this is handled automatically: floating IPs and router gateways will be allocated from any available subnet on the network without user intervention.
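Adding capacity to an existing provider network might look like this (the address range and pool are illustrative):

```shell
# Add a second subnet to the same provider network (same VLAN) once the
# first subnet's allocation pool is exhausted.
openstack subnet create --network shared-access-vlan20 \
  --subnet-range 203.0.113.0/24 \
  --allocation-pool start=203.0.113.50,end=203.0.113.200 \
  vlan20-subnet-2
```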
Use SR-IOV for ultra-high performance: For workloads that are extremely sensitive to network latency or CPU overhead (like NFV or high-frequency trading), standard SDN can be a bottleneck. In these cases, use SR-IOV to grant a VM direct access to a Virtual Function (VF) on a physical network card. This bypasses the host’s virtual switch entirely, offering near-native hardware performance. Be aware of the trade-offs: SR-IOV ports typically cannot be live-migrated and may lose some SDN features.
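An SR-IOV attachment might be created like this (assumes SR-IOV-capable NICs and a matching Neutron configuration; network, port, image, and flavor names are illustrative):

```shell
# Create a port with vnic-type 'direct' so Nova schedules the VM onto a
# host with a free Virtual Function, bypassing the virtual switch.
openstack port create --network provider-vlan30 \
  --vnic-type direct sriov-port

# Boot the VM on that pre-created port.
openstack server create --image cirros --flavor m1.large \
  --port sriov-port vm-sriov
```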
Always apply security groups: This is the most critical rule. Any VM port connected to a provider network—either through a direct attachment or a Floating IP—is potentially exposed to the public internet or the datacenter network. Always attach a restrictive security group to these instances to act as a stateful firewall, allowing only the specific traffic you intend to receive.
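A minimal restrictive security group might look like this (the group name and the HTTPS-only rule are illustrative):

```shell
# Stateful firewall allowing only inbound HTTPS; all other ingress is dropped.
openstack security group create web-ingress \
  --description "Allow inbound HTTPS only"
openstack security group rule create --proto tcp --dst-port 443 \
  --remote-ip 0.0.0.0/0 web-ingress

# Attach it to an instance exposed on a provider network.
openstack server add security group vm-direct-l2 web-ingress
```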
Prefer RBAC for sharing: Instead of using the global `--shared` or `--external` flags, use RBAC policies whenever possible. This allows you to grant network access only to the specific projects that need it, following the principle of least privilege.
Conclusion
Understanding the distinction between tenant and provider networks is fundamental to designing a robust OpenStack cloud. The networking model is built on a powerful duality: the complete, software-defined isolation of tenant networks versus the high-performance, direct physical integration of provider networks.
The platform’s true flexibility, however, is revealed in its sharing capabilities. While global flags offer broad access, the modern, recommended approach is to use granular RBAC policies to bridge the gap between projects securely. By defaulting to tenant networks for most applications and strategically using provider networks for specific L2 integration or performance needs, administrators can build a secure, scalable, and efficient cloud environment. Ultimately, mastering these network types and sharing models is essential for any successful OpenStack deployment.
This post is part of our OpenStack networking series. For more advanced networking topics, check out our OpenStack NFV training course and OpenStack administration training.