Installing OpenShift 4.18 on RHOSP 17.2 with VLAN Provider Networks

In this guide we walk through a full Installer-Provisioned Infrastructure (IPI) deployment of Red Hat OpenShift Container Platform 4.18 on Red Hat OpenStack Platform (RHOSP) 17.2. We cover preparation of the OpenStack environment (including a VLAN-based provider network), generation of the OpenShift install configuration, customization of network settings for VLANs, and the cluster installation itself. Each section provides step-by-step instructions, example commands, and tips.

1. Preparation: Environment Validation and Configuration

  1. Confirm RHOSP Version and Features: Verify you are using RHOSP 17.2 and that key services are available. Ensure that the OpenStack deployment includes:

    • Networking (Neutron): with ML2 drivers supporting provider networks (VLAN) and SR-IOV. For SR-IOV, the compute nodes must have SR-IOV-capable NICs and the SR-IOV Neutron mechanism driver or agent configured.
    • Load Balancing (Octavia): This is used by the OpenShift installer for API and Ingress VIPs when not using pre-allocated VIPs. Make sure Octavia is deployed and you have quota for at least two load balancers (if you plan to use OpenStack’s load balancers for the API/Ingress).
    • Compute Quotas: Sufficient quotas for instances, vCPUs, RAM, volumes, and especially ports. Each OpenShift node will use one port for its primary network. Ensure your project can create the needed number of ports/VFs and at least 3 instances (for masters) plus the required worker nodes.
    • Images: Upload the Red Hat CoreOS (RHCOS) 4.18 base image to Glance or ensure the installer can download it. You can download the OCP 4.18 RHCOS image (QCOW2) from Red Hat and add it to Glance, or let the installer upload it via URL. If uploading manually, note the image name or ID to use in the install config if needed.
    • Flavors: Create flavors for OpenShift nodes. Masters and workers typically need at least 4 vCPU, 16 GB RAM, 100 GB disk (for production). If you plan to use SR-IOV with high-performance or DPDK workloads, define flavors that enable hugepages (e.g. set the hw:mem_page_size property) and CPU pinning as appropriate. For example, a flavor ocp.master with 4 vCPUs, 16 GB RAM, and a flavor ocp.worker with similar or higher specs for workers.
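The flavor setup above can be sketched as follows. The flavor names match this guide's examples; the sizes, the hugepage property, and the extra DPDK flavor name (ocp.worker-dpdk) are illustrative assumptions you should adapt. The commands are wrapped in a function so the file can be reviewed and sourced before running against a live cloud.

```shell
# Sketch of flavor creation for the node sizes described above.
# Requires an authenticated OpenStack CLI session; all sizes are examples.
create_ocp_flavors() {
  openstack flavor create --vcpus 4 --ram 16384 --disk 100 ocp.master
  openstack flavor create --vcpus 8 --ram 32768 --disk 100 ocp.worker
  # Optional high-performance worker flavor with hugepages and CPU pinning
  # (hypothetical name; only needed for SR-IOV/DPDK workloads)
  openstack flavor create --vcpus 8 --ram 32768 --disk 100 \
    --property hw:mem_page_size=1GB \
    --property hw:cpu_policy=dedicated \
    ocp.worker-dpdk
}
```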
  2. Create VLAN Provider Network: Set up the network that the OpenShift cluster will use. In this scenario, we use a VLAN-based provider network (a layer-2 network on a physical VLAN). You can use an existing provider network or create a new one.

    • Decide on a VLAN ID and physical network mapping that’s available in RHOSP. For example, VLAN 30 on physical network datacentre (the name depends on your OpenStack configuration).
    • Create the network and subnet in OpenStack. For example, to create a provider network named ocp-provider on physical network datacentre using VLAN 30, and an IPv4 subnet:

      openstack network create ocp-provider --provider-network-type vlan --provider-physical-network datacentre --provider-segment 30 --external
      openstack subnet create ocp-provider-subnet --network ocp-provider --subnet-range 192.168.30.0/24 --gateway 192.168.30.1 --allocation-pool start=192.168.30.50,end=192.168.30.200 --dns-nameserver 192.168.30.1

      This creates a VLAN 30 network on physnet “datacentre”, marks it external (so it can be used for floating IPs or as a provider network), and defines a subnet 192.168.30.0/24. Adjust IP ranges as needed. The allocation pool is set to leave certain IPs free (for example, reserving 192.168.30.10 and 192.168.30.11 for special use such as VIPs). Note: The OpenShift installer will not create the network for you in a provider network scenario, so doing this beforehand is crucial.

    • Ensure DHCP is enabled on the subnet (it is by default in the command above). OpenShift nodes rely on DHCP to get their IP addresses on the machine network. The installer will expect to get IPs for bootstrap and nodes via DHCP on this subnet.
    • Connectivity: Make sure this provider network has connectivity to the necessary endpoints:
      • If this network is intended to provide external access (for cluster API and apps), it should be accessible from your corporate network or the internet (depending on your requirements). Often a provider network is already routable within the data center. If not, you may need to attach it to an OpenStack router with an external gateway.
      • Ensure the network can reach the OpenStack metadata service (169.254.169.254). In many cases, provider networks get a static host route to the metadata IP (Neutron adds this when the subnet is created on an external network). Double-check that your instances will be able to fetch user-data from the metadata server.
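Once the network exists, a quick sanity check helps catch mapping mistakes early. A hypothetical helper (run in a session authenticated against your RHOSP cloud; names match this guide's examples):

```shell
# Sanity checks for the provider network created above.
verify_ocp_provider_network() {
  # Confirm VLAN type, segmentation ID, and physical network mapping
  openstack network show ocp-provider \
    -f value -c provider:network_type -c provider:physical_network -c provider:segmentation_id
  # Confirm DHCP is enabled and the allocation pool excludes the VIP addresses
  openstack subnet show ocp-provider-subnet \
    -f value -c enable_dhcp -c allocation_pools -c gateway_ip
}
```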
  3. DNS Setup for OpenShift: Configure DNS entries for your cluster’s base domain. You will need:

    • An API record: api.<cluster_name>.<base_domain> resolving to the OpenShift API endpoint.
    • A wildcard apps record: *.apps.<cluster_name>.<base_domain> for application routes.
    • For an IPI install on OpenStack, there are two scenarios:
      • If using OpenStack’s load balancer/floating IP (the default when using a tenant network): you would point the DNS to the floating IP or VIP that OpenStack provides.
      • If using a provider network with pre-allocated VIPs (our VLAN provider network case): we will specify static IPs in that network for API and Ingress. You must create DNS entries mapping those IPs. For example, if we plan for api.ocp4.example.com to be 192.168.30.10 and *.apps.ocp4.example.com to use 192.168.30.11, add those in DNS. (Make sure 192.168.30.10 and .11 are within the ocp-provider-subnet and not in DHCP pool so they remain free for VIP use.)
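Before starting the install, it helps to confirm the records resolve as planned. A hypothetical pre-flight check, assuming dig is available and using this guide's example names and VIPs:

```shell
# Pre-flight DNS check (substitute your own cluster name and base domain).
check_cluster_dns() {
  dig +short api.ocp4.example.com        # expect 192.168.30.10
  dig +short test.apps.ocp4.example.com  # wildcard: expect 192.168.30.11
}
```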
  4. OpenStack Credentials and Cloud Config: Ensure you have an OpenStack clouds.yaml or environment variables set for the installer to use.

    • The installer uses a clouds.yaml entry (usually openstack by default) for authentication. Verify that your clouds.yaml (typically in ~/.config/openstack/clouds.yaml) has an entry for your RHOSP 17.2 environment, including auth URL, region, project (tenant) name/ID, user, and password (or token). Also ensure the correct domain names for user and project if using keystone v3.
    • If not using clouds.yaml, you can set OS_* environment variables for authentication, but using clouds.yaml is easier.
    • The OpenStack user should have enough privileges in the project to create networks, ports, security groups, and instances. Usually a member role is sufficient for tenant networks. For provider networks marked --external, note that Nova might treat them specially: ensure the network is owned by the same project (not just by admin) or is shared, otherwise Nova may prevent instance port attachment. The safest approach is to create the provider network in the same project where you deploy OpenShift, or have your admin grant your project ownership of that network.
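For reference, a clouds.yaml entry of the shape the installer expects might look like the following. All values here are placeholders (the auth URL, user, and project names are assumptions); the snippet writes to a sample file in the current directory so nothing in ~/.config is overwritten.

```shell
# Example clouds.yaml entry for the installer (placeholder values).
cat > clouds.yaml.sample <<'EOF'
clouds:
  openstack:
    auth:
      auth_url: https://rhosp.example.com:13000/v3
      username: ocp-installer
      password: CHANGEME
      project_name: ocp4
      user_domain_name: Default
      project_domain_name: Default
    region_name: regionOne
    identity_api_version: 3
EOF
```

Copy the entry into ~/.config/openstack/clouds.yaml (or /etc/openstack/clouds.yaml) once the values match your cloud; the cloud name (openstack here) is what the installer prompt asks for.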
  5. Security Groups and Firewall: OpenShift installer will create security groups in OpenStack for the cluster (like allowing SSH, API, etc). Make sure your OpenStack project allows creating security groups (it should). Also ensure any external firewall in your environment allows access to the needed ports:

    • Port 6443 (API server) – for users and cluster components.
    • Port 22623 (Machine Config Server) – the bootstrap and nodes use this.
    • Ports 80/443 (Ingress) – for applications.
    • Also, external DNS (53) and any proxy if needed for the nodes to reach the internet for image pulls. These are typically handled by OpenShift security groups and a cloud router, but good to be aware of in advance.

2. OpenShift Install Configuration (IPI on OpenStack)

With the environment ready, we can create the OpenShift cluster configuration. The OpenShift installer (openshift-install) uses an install-config.yaml to define how to create the cluster. We will generate and then modify this config to account for our custom network (VLAN provider) and credentials.

  1. Generate a baseline install-config:
    Run the OpenShift installer in interactive mode to create a starting config.

    $ openshift-install create install-config --dir=ocp4-cluster

    This will generate an install-config.yaml file inside the ocp4-cluster directory. It will prompt for the following:

    • Platform: choose openstack.
    • OpenStack Cloud: the cloud name from your clouds.yaml (e.g. openstack if configured as such).
    • External Network: Leave this blank (hit Enter) if you plan to use a provider network for primary connectivity. We will not use OpenStack’s default external network for floating IPs in this scenario (since we have our own VLAN network and will assign VIPs manually). If you intend to use floating IPs (not our case here), you would specify the external network name.
    • APIVIP / IngressVIP: The installer might ask for API VIP and Ingress VIP if no external network was provided (starting with newer versions, if using provider networks). Provide the IPs you reserved on the provider network (e.g. 192.168.30.10 for API, 192.168.30.11 for Ingress).
    • Flavor for control plane: e.g. ocp.master (make sure it exists in OpenStack).
    • Flavor for workers: e.g. ocp.worker.
    • Base Domain: the DNS base (e.g. example.com if full cluster domain is ocp4.example.com).
    • Cluster Name: an identifier (e.g. ocp4).
    • Pull Secret: the secret from cloud.redhat.com (paste it in full JSON).
  2. Edit install-config.yaml for Provider Network:
    Open the install-config.yaml in a text editor. We need to ensure the network settings match our provider network and VLAN environment.

    • Under platform.openstack, add or verify the following:
      platform:
        openstack:
          cloud: openstack
          region: <YourRegionIfApplicable>
          # Leave externalNetwork unspecified to indicate provider network usage
          # externalNetwork:  # (omit this for provider network as primary)
          machinesSubnet: <UUID-of-ocp-provider-subnet>
          apiVIPs:
            - 192.168.30.10    # API VIP on provider network (unassigned IP from that subnet)
          ingressVIPs:
            - 192.168.30.11    # Ingress VIP on provider network
      networking:
        machineNetwork:
          - cidr: 192.168.30.0/24   # Subnet of the provider network
        clusterNetwork:
          - cidr: 10.128.0.0/14
            hostPrefix: 23
        serviceNetwork:
          - 172.30.0.0/16
        networkType: OVNKubernetes

      Key points:

      • Set machinesSubnet to the UUID of the provider network’s subnet. This tells the installer to use that existing subnet for all node IPs.
      • Set apiVIPs and ingressVIPs to the static IPs you reserved. (Note: As of OCP 4.12+, these are arrays; older versions used the singular apiVIP/ingressVIP fields.) These IPs must lie within the machineNetwork CIDR and must not be handed out by DHCP. The masters will configure these VIPs via keepalived.
      • The machineNetwork.cidr must exactly match the subnet’s range; the installer will validate this. In our case, 192.168.30.0/24.
      • Ensure externalNetwork is not set (and similarly remove or don’t set externalDNS). When using a provider network for the primary interface, you cannot specify an external network for floating IPs in install-config (it would cause conflicts).
      • The rest of networking (clusterNetwork, serviceNetwork) can remain default unless you need to customize pod/service CIDRs.
    • Under compute (workers) and controlPlane, ensure the platform.openstack.type is set to the correct flavor names.
      controlPlane:
        name: master
        platform: {}
        replicas: 3
      compute:
      - name: worker
        platform:
          openstack:
            type: ocp.worker
        replicas: 3

      Masters in IPI are created by the installer as controlPlane machines with the flavor specified in the wizard. The platform: {} above means the default from platform.openstack.computeFlavor is used, if set; you could alternatively specify the flavor explicitly, as shown for compute.

    • Save the file. Double-check indentation and syntax (YAML is sensitive). The final install-config should incorporate the provider network settings.

      [!TIP]
      If you wanted to attach an additional network (secondary interface) to the nodes (for example, another provider network used only by workloads), you can use the additionalNetworkIDs field in install-config. This takes a list of Neutron network UUIDs to attach to each node as a secondary interface.

    At this point, we have a finalized install-config.yaml customized for our environment. We are ready to deploy the cluster.
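Before running the installer, the VIP constraints above (VIPs inside the machineNetwork CIDR, outside the DHCP allocation pool) can be sanity-checked with a small script. This is a minimal sketch in pure bash using this guide's example addresses; adapt the values to your subnet.

```shell
# Check that each VIP is inside the machineNetwork CIDR and outside the
# DHCP allocation pool (example values from this guide).
ip2int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}
in_cidr() {  # in_cidr <ip> <network> <prefix_len>
  local ip net mask
  ip=$(ip2int "$1")
  net=$(ip2int "$2")
  mask=$(( (0xFFFFFFFF << (32 - $3)) & 0xFFFFFFFF ))
  [ $(( ip & mask )) -eq $(( net & mask )) ]
}
pool_start=$(ip2int 192.168.30.50)
pool_end=$(ip2int 192.168.30.200)
for vip in 192.168.30.10 192.168.30.11; do
  if ! in_cidr "$vip" 192.168.30.0 24; then
    echo "$vip: outside machineNetwork CIDR (BAD)"
  elif v=$(ip2int "$vip"); [ "$v" -ge "$pool_start" ] && [ "$v" -le "$pool_end" ]; then
    echo "$vip: inside the DHCP allocation pool (BAD)"
  else
    echo "$vip: OK"
  fi
done
```

With the example values, both VIPs should report OK, since .10 and .11 fall inside 192.168.30.0/24 but below the .50-.200 allocation pool.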

3. Deploying the OpenShift 4.18 Cluster on OpenStack

    With the install configuration prepared, use the OpenShift installer to create the cluster resources in OpenStack.

  3. Create the Cluster
    Run the installer:

    $ openshift-install create cluster --dir=ocp4-cluster --log-level=INFO

    Replace ocp4-cluster with your directory containing install-config if different. The installer will connect to OpenStack and start provisioning resources.

    • It uploads the RHCOS image to Glance (if not already present) or uses the existing image.
    • Creates a private network and subnet only if machinesSubnet was not specified; since we specified one, it uses the existing provider network and does not create a new one.
    • Creates ports for the bootstrap, master, and worker nodes on the machinesSubnet (our provider network). The master ports will have the allowed_address_pairs set for the API and Ingress VIPs if port security is on.
    • Boots a temporary bootstrap VM and the master VMs. The bootstrap will run the initial control plane provisioning.
    • Sets up a load balancer or keepalived VIP. In our provider network scenario, the installer will configure keepalived on masters for the API/Ingress IPs (since we provided static VIPs). If we had used the default tenant network scenario, the installer would typically create an Octavia load balancer with those ports.
    • It also creates other needed infrastructure (security groups, ignition secrets, etc.) behind the scenes.

    You can monitor progress via the console output and also by watching OpenStack resources:

    openstack server list
    openstack port list --server <server_id>
    openstack loadbalancer list

    For provider network with VIPs, you might not see a load balancer resource (since masters handle VIP). Masters will acquire the apiVIP and ingressVIP on their network interfaces (check with openstack port list to see which master has which VIP in allowed address pairs).
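To see which ports carry the VIPs, you can inspect each cluster port's allowed_address_pairs. A hypothetical helper (requires an authenticated OpenStack session; subnet name and VIPs are this guide's examples):

```shell
# Find which ports on the provider subnet carry a VIP as an allowed
# address pair.
find_vip_holders() {
  openstack port list --fixed-ip subnet=ocp-provider-subnet -f value -c ID |
  while read -r port; do
    pairs=$(openstack port show "$port" -f value -c allowed_address_pairs)
    case "$pairs" in
      *192.168.30.10*|*192.168.30.11*) echo "$port carries a VIP: $pairs" ;;
    esac
  done
}
```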

  4. Wait for bootstrap to complete
    The installer will indicate when the bootstrap phase is done. You can also check manually:

    $ openshift-install wait-for bootstrap-complete --dir=ocp4-cluster --log-level=INFO

    This waits until the bootstrap node’s temporary control plane is up and the real masters have formed a cluster. Once complete, the bootstrap VM is no longer needed. The installer will delete the bootstrap VM automatically after bootstrap completes.

  5. Wait for cluster completion
    The masters will bring up the OpenShift API and etcd, and then the installer will scale up worker nodes (if any are defined in the install-config; by default, 3 workers were specified here). Monitor progress:

    $ openshift-install wait-for install-complete --dir=ocp4-cluster --log-level=INFO

    This can take a while as workers join and the cluster finishes deploying the default components (ingress router, registry, etc.). During this time, the Machine Config Operator applies the initial machine configs, which can trigger an early reboot of each node.

  6. Post-install Verification
    Once the installer reports success, test the cluster:

    ...
    INFO Install complete!
    INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/ashisha/ocp4-cluster/auth/kubeconfig'
    INFO Access the OpenShift web-console here: https://console-openshift-console.apps.ocp4.example.com
    INFO Login to the console with user: "kubeadmin", and password: "GR4Zf-9i6fL-3MkeD-L9SvA"
    INFO Time elapsed: 36m22s
    • Use the generated kubeconfig in ocp4-cluster/auth/kubeconfig
      export KUBECONFIG=ocp4-cluster/auth/kubeconfig
      oc get nodes

      At this stage, you have a running OpenShift 4.18 cluster on OpenStack VMs, using the VLAN provider network for all node connectivity. The cluster’s control plane is up and configured.
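Beyond oc get nodes, a few more checks confirm overall health. A hypothetical post-install session (assumes oc is on PATH and KUBECONFIG points at ocp4-cluster/auth/kubeconfig):

```shell
# Post-install health checks for the new cluster.
verify_cluster_health() {
  oc get nodes -o wide      # all nodes Ready, with IPs on 192.168.30.0/24
  oc get clusteroperators   # all operators Available=True, Degraded=False
  oc get clusterversion     # reports the installed 4.18.x version
}
```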

Conclusion

By following this guide, you set up OpenShift 4.18 on RHOSP 17.2 in a way that leverages your infrastructure’s VLAN networking. This provides a foundation for running network-intensive workloads (such as telco or high-performance applications) on OpenShift with minimal network overhead, while still benefiting from OpenShift’s orchestration and management.
