Deploying Apstra 4.2.0 on Proxmox VE
Apstra appliances
Juniper does not distribute appliances per se for KVM deployments, but rather OS disk images in QCOW2 format:
aos_server_4.2.0-236.qcow2
Main Apstra server appliance, also used for the optional worker nodes in a scale-out setup: the same image is used for the workers (I didn’t find this mentioned in the documentation, but it was confirmed by a Juniper engineer).
apstra-ztp-4.2.0-34.qcow2
Zero Touch Provisioning appliance. This optional appliance runs PXE+TFTP and can also be used as a pure TFTP server with a pre-existing DHCP server.
Installation
Overview
For the VM sizing, the vendor provides a recommended sizing table:
| Resource | Value |
| --- | --- |
| vCPU | 8 |
| RAM | 64GB + 500MB per installed offbox agent |
| Disk | 160GB |
In my experience, that many resources are not needed, especially the RAM. For my test environment I’ll slash the RAM to 32GB (and the vCPUs to 4). Most probably that’s not supported, but I can live with that; YMMV.
We’ll create the empty virtual machines in Proxmox VE 7.4-15 and then import the QCOW2 disks as needed. The procedure assumes you have downloaded all the files to a working directory on a PVE node (curl/wget are your friends).
Apstra OS
VM creation
We’ll create the virtual machine from the CLI, with 4 vCPUs and 32GB of RAM.
The network is served via the default bridge and the storage via the local ZFS repository. The options are self-explanatory; adjust as needed.
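Below is a minimal sketch of the creation command. The VM ID (9000), bridge (vmbr0) and storage name (local-zfs) are assumptions from my lab; adjust them to your environment.

```bash
# Hypothetical VM ID, bridge and storage; adjust to your environment
qm create 9000 \
  --name apstra-server \
  --memory 32768 \
  --cores 4 \
  --cpu host \
  --net0 virtio,bridge=vmbr0 \
  --scsihw virtio-scsi-pci \
  --ostype l26 \
  --serial0 socket \
  --agent enabled=1
```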
Importing disk
Now we’ll import the QCOW2 file provided by Juniper.
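A sketch of the import, assuming VM ID 9000, the image sitting in the working directory and the local-zfs storage from before:

```bash
# Import the Juniper-provided image into the VM's storage
qm importdisk 9000 aos_server_4.2.0-236.qcow2 local-zfs

# Attach the imported disk as the first SCSI device and boot from it
qm set 9000 --scsi0 local-zfs:vm-9000-disk-0
qm set 9000 --boot order=scsi0
```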
Additional disk
According to the documentation, the OS disk is only 80GB and you can optionally add a second 80GB disk. Given that it’s not onerous and that I won’t be using additional worker nodes, I’ll add the second disk.
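Adding the empty second disk is a one-liner; again, local-zfs is just my storage name:

```bash
# Create and attach a new, empty 80GB disk as the second SCSI device
qm set 9000 --scsi1 local-zfs:80
```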
ZTP appliance
DHCP & TFTP are not that demanding, you can go with 2 vCPUs and 4GB of RAM.
As before, the network is served via the default bridge and the storage via the local ZFS repository; adjust as needed.
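Same idea as the server VM, scaled down. The VM ID (9001) and the bridge/storage names are again assumptions:

```bash
# Hypothetical VM ID; 2 vCPUs and 4GB of RAM are plenty for DHCP/TFTP
qm create 9001 \
  --name apstra-ztp \
  --memory 4096 \
  --cores 2 \
  --cpu host \
  --net0 virtio,bridge=vmbr0 \
  --scsihw virtio-scsi-pci \
  --ostype l26 \
  --serial0 socket \
  --agent enabled=1
```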
Importing disk
Now we’ll import the QCOW2 file provided by Juniper.
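As with the server appliance, a sketch assuming VM ID 9001 and the local-zfs storage:

```bash
# Import the ZTP image, attach it and boot from it
qm importdisk 9001 apstra-ztp-4.2.0-34.qcow2 local-zfs
qm set 9001 --scsi0 local-zfs:vm-9001-disk-0
qm set 9001 --boot order=scsi0
```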
Initial Configuration
AOS setup
- Search for the IP of the appliance in your DHCP leases and connect through SSH with admin/admin.
- On first login, set a new password as requested.
- In the forced setup assistant, also set the Apstra UI password as proposed. Cancel at the main menu after setting the password.
- Change the hostname to something that makes sense for your environment; with the admin user, run the hostnamectl command shown in the sketch after this list.
- Adjust timezone to match your environment.
- As admin, update the operating system. Juniper will most probably tell you that this is not supported. The only issue I had in the past was 4.1.0 breaking because a system-wide Juniper library broke a containerized application (go figure) and required a manual dependency update to fix it. It has been flawless for 4.1.2 and now 4.2.0; pam.d files will be mentioned, keep the locally modified files.
- Install QEMU guest agent
- Reboot after the full system update: `sudo shutdown -r now`
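For reference, a sketch of the shell side of the steps above, run as the admin user. The hostname and timezone values are just examples:

```bash
# Hostname and timezone are examples; pick values that fit your environment
sudo hostnamectl set-hostname apstra-server
sudo timedatectl set-timezone Europe/Madrid

# Full OS update (not officially supported by Juniper); keep the locally
# modified pam.d files when prompted
sudo apt update && sudo apt full-upgrade -y

# QEMU guest agent, so Proxmox can report IPs and shut the VM down cleanly
sudo apt install -y qemu-guest-agent
sudo systemctl enable --now qemu-guest-agent

# Reboot after the full system update
sudo shutdown -r now
```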
ZTP appliance setup
- Search for the IP of the appliance in your DHCP leases and connect through SSH with admin/admin.
- Set the hostname (see the command sketch after this list).
- Adjust timezone to match your environment.
- Clean up the messed-up config structure: for some reason, the Juniper folks thought it was a good idea to create a directory where a file should be, and that breaks the Ubuntu upgrade process. We clean that up (what gives, Juniper?!).
- Update the operating system. Again, Juniper will probably tell you it’s not supported.
- At the Apstra Web UI, create a user with device_ztp role. We’ll use it in the next step.
- Back on the ZTP appliance, configure the application to report to the AOS server. For some reason, it doesn’t work with an FQDN, only with an IP address.
- Install QEMU guest agent
- Reboot virtual machine
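The generic shell steps mirror the server appliance; here is a sketch with example values (the config-structure cleanup and the AOS reporting configuration are specific to your setup and left out):

```bash
# Example values; adjust hostname and timezone to your environment
sudo hostnamectl set-hostname apstra-ztp
sudo timedatectl set-timezone Europe/Madrid

# OS update (again, probably not officially supported)
sudo apt update && sudo apt full-upgrade -y

# QEMU guest agent, then reboot
sudo apt install -y qemu-guest-agent
sudo systemctl enable --now qemu-guest-agent
sudo shutdown -r now
```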
Migration from 4.1.2
If you already have a working environment, you most probably want to migrate all the configuration to your new installation; there is a script you should run on the new Apstra server.
Take into account that the script will fail if there are any uncommitted changes or if devices are not reachable. Make sure the running state of the source environment is clean (no errors).
Note:
- The configuration migration will also replace the web UI password with the one from the original server.