OpenVMS 9.2-3 Community Edition setup on KVM on Fedora/41

In January 2025, I got the update to the OpenVMS 9.2-3 Community Edition.

Actually, I got two updates; there was a problem with the first one.  This article describes a script that I developed for quickly getting a new or updated disk image for the community edition installed on a KVM host to the point where you can control it via DECnet or SSH.

It appears that by default the VM boots up with a DHCP-assigned address and SSH enabled.
So if you can force the DHCP-assigned address, or can easily find out what it is, you can use that for automating the configuration.

I want to avoid the password-setting dialog, I need DECnet Phase IV for my home network, and I cannot easily look up the DHCP-assigned IP address.

So first I will describe using the first OpenVMS 9.2-3 update, which I used for debugging the script.  Then I will describe using the script again for the second update.

It turns out that using the virt-install tool for my previous install of the OpenVMS 9.2-2 edition was a mistake, because it unexpectedly converted the qcow2 file to a raw format file.

I also learned that the system disk was way too small.  I want a page file that is big enough to hold a crash dump.

Since VSI makes you replace the system disk with every update, you need to put your customized files on a second disk, which is another change that I will be making from the previous install.

The first OpenVMS 9.2-3 update to the community edition showed up while I was developing a script to automatically set up the OpenVMS 9.2-2 disk.  I got that script to the point where it could install and get DECnet Phase IV access working.

Before running the script, the VM needs to be created.   I modified the VM definition using the virsh edit and virt-manager utilities to match the disk configuration that will be described below.

I saved the XML file for use as a template for deploying additional VMs, which will need only small edits that can be automated.
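
As a rough illustration of those small automated edits, a Python sketch might look like the following.  The template file name, the new VM name, and the disk path are placeholders, not my actual setup.

    #!/usr/bin/env python3
    # Hedged sketch: clone a saved libvirt XML template for a new VM.
    import subprocess
    import xml.etree.ElementTree as ET

    TEMPLATE = "robin_template.xml"      # XML saved from "virsh dumpxml"
    NEW_NAME = "wren"                    # placeholder name for the new VM
    NEW_DISK = "/data/libvirt_pools/main/wren_vms923.qcow2"

    tree = ET.parse(TEMPLATE)
    root = tree.getroot()
    root.find("name").text = NEW_NAME

    uuid = root.find("uuid")
    if uuid is not None:
        root.remove(uuid)                # let libvirt assign a new UUID

    # Point the first disk at the new overlay file.
    root.find("./devices/disk/source").set("file", NEW_DISK)

    # Drop the MAC address so libvirt generates a fresh one.
    for iface in root.findall("./devices/interface"):
        mac = iface.find("mac")
        if mac is not None:
            iface.remove(mac)

    out = NEW_NAME + ".xml"
    tree.write(out)
    subprocess.run(["sudo", "virsh", "define", out], check=True)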

I ended up with 4 disks arbitrarily set to 50 GB each.

  • dka0: Current system disk
  • dka100: Local data disk
  • dka200: For backup of system disk
  • dka300: Previous system disk.

I ran into a small problem converting the first update's community edition VMDK files to qcow2; it was easily fixed, based on the error messages, by adding a symbolic link.  Renaming the extracted file could also have fixed the problem.

$ unzip community_2025.zip
Archive:  community_2025.zip
   creating: community_2025/
  inflating: community_2025/community-flat.vmdk
  inflating: community_2025/community.vmdk

$ cd community_2025
$ ln -s community-flat.vmdk X86_V923-community-flat.vmdk
$ cd ..


I store what I call "base" images in the local /var/lib/libvirt/images directory.   A base image is used with overlay files; the actual VMs use an overlay file in their configuration.
The ".qcow2" format supports "copy on write": the base image is only read from, and any
changes are written to the overlay file.   This can save a lot of disk space.

The /mnt/public/vms directory is my central storage for common read-only access to files used by all the systems on my network.

$ sudo qemu-img convert -cpf vmdk -O qcow2 /mnt/public/vms/disks/community_2025/community.vmdk /var/lib/libvirt/images/community-flat_v923.qcow2

    (100.00/100%)


The actual disks used by the VM will be stored in a libvirt storage pool that is local to the system.

$ sudo qemu-img create -F qcow2 -b /var/lib/libvirt/images/community-flat_v923.qcow2 -f qcow2 /data/libvirt_pools/main/robin_vms923.qcow2 50G

Formatting '/data/libvirt_pools/main/robin_vms923.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=53687091200 backing_file=/var/lib/libvirt/images/community-flat_v923.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16


Converting to the second OpenVMS 9.2-3 update was done by first unpacking it on the storage host and renaming the directory to indicate that it was for 2025.

$ unzip X86_V923_community-1.zip
$ mv X86_V923_community X86_V923_community_2025

Then on the KVM host, I renamed the qcow2 files for temporary storage before recreating them.

$ sudo mv /var/lib/libvirt/images/community-flat_v923.qcow2 /var/lib/libvirt/images/community-flat_v923_old.qcow2

$ sudo mv /data/libvirt_pools/main/robin_vms923.qcow2 /data/libvirt_pools/main/robin_vms923_old.qcow2

$ sudo qemu-img convert -cpf vmdk -O qcow2 /mnt/public/vms/disks/X86_V923_community_2025/X86_V923-community.vmdk /var/lib/libvirt/images/community-flat_v923.qcow2
    (100.00/100%)

$ sudo qemu-img create -F qcow2 -b /var/lib/libvirt/images/community-flat_v923.qcow2 -f qcow2 /data/libvirt_pools/main/robin_vms923.qcow2 50G
Formatting '/data/libvirt_pools/main/robin_vms923.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=53687091200 backing_file=/var/lib/libvirt/images/community-flat_v923.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16

The setup_vms_community_kvm.py script was put together quickly and could use some changes to make it more generic.  For now, I have to edit it for the specific system.

Creating the script required a lot of test runs to get the data needed to identify the unique prompts.
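
A rough sketch of one way to collect that kind of data, assuming the pexpect module and that the guest's serial console is reachable with virsh console (the domain name robin is from my setup), is to log the raw console output for later inspection:

    import pexpect

    # Record the raw serial-console output so the exact prompt text
    # (including any control characters) can be examined afterwards with
    # a hex dump.
    child = pexpect.spawn("virsh console robin", timeout=None)
    with open("console_capture.log", "wb") as log:
        child.logfile_read = log          # everything read from the guest
        try:
            child.expect(pexpect.EOF)     # keep recording until the console closes
        except KeyboardInterrupt:
            pass                          # Ctrl-C to stop capturing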

This is an expect-type script, and there are some general notes about writing them (a short sketch follows these notes).
  • When creating this type of script, or using an "expect"-type tool, the command termination character is "\r", the carriage-return character.   Most of the examples I have found online incorrectly show using "\n", the line-feed character.   The "\n" will work in most, but not all, cases; "\r" is the code actually sent by someone doing a manual configuration, so it is more reliable to use.

In the past I have had to spend a lot of time debugging unneeded complexity in expect-script automation caused by this, after a previous programmer encountered a device that would not work with '\n'.

  • These automation scripts work best with a terminal set to as "dumb" as possible, and with echo disabled.  I was unable to control those settings until I was able to get to a DCL prompt.  My experiments with changing the terminal settings seemed to cause more problems than they solved.

It turns out that the default DCL dollar prompt has two different sequences to test for: a null byte precedes the prompt, and a line-feed character may optionally be present.

Detecting the "Username:" prompt was also an issue; it shows up too often in console messages, so I had to detect it and set a flag to control when to do the reboot.
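
Here is a minimal sketch of that handling, assuming a pexpect session on the guest console; the session setup and helper are illustrative, not the actual setup_vms_community_kvm.py code.

    import pexpect

    child = pexpect.spawn("virsh console robin", timeout=300)

    # The default DCL prompt arrives as a NUL byte, an optional line feed,
    # and then "$ ", so one regex covers both observed forms.
    DCL_PROMPT = rb"\x00\n?\$ "

    def dcl(command):
        # Terminate commands with carriage return (CR), the byte a real
        # terminal sends, rather than line feed (LF).
        child.send(command.encode("ascii") + b"\r")
        child.expect(DCL_PROMPT)

    # "Username:" also appears in unrelated console messages, so only set a
    # flag when it is seen and act on it at the right point in the sequence.
    saw_username = False
    if child.expect([rb"Username:", DCL_PROMPT]) == 0:
        saw_username = True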

Also, I have to reboot the system after setting up DECnet and TCP/IP, as I could not get the TCP/IP stack that was running to shut down without a reboot.

It took a bit of debugging to get the script to run.  I needed to start over from a fresh disk each time, which is actually easy with this setup.

$ sudo virsh destroy robin
  Domain 'robin' destroyed

$ sudo rm /data/libvirt_pools/main/robin_vms923.qcow2
$ sudo qemu-img create -F qcow2 -b /var/lib/libvirt/images/community-flat_v923.qcow2 -f qcow2 /data/libvirt_pools/main/robin_vms923.qcow2 50G
Formatting '/data/libvirt_pools/main/robin_vms923.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=53687091200 backing_file=/var/lib/libvirt/images/community-flat_v923.qcow2 backing_fmt=qcow2 lazy_refcounts=off refcount_bits=16


NET$CONFIGURE.COM adds an additional prompt if it is run a second time, which is easy to detect so the script can avoid re-running it; this means you can get away without resetting the system disk back to the beginning.

For the TCPIP$CONFIG, the prompts change after the first run is done.   I did not find a way to detect this.
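
For the NET$CONFIGURE.COM case, a hedged sketch of the re-run detection, continuing the pexpect session from the earlier sketch, might look like this; the invocation and both prompt strings are placeholders, not the real text.

    # Expect either the normal first-run question or the extra prompt that
    # appears on a second run, and skip reconfiguration when the latter
    # shows up.  The pattern strings here are stand-ins.
    child.send(b"@SYS$MANAGER:NET$CONFIGURE\r")
    which = child.expect([rb"<first-run question>", rb"<already-configured prompt>"])
    if which == 1:
        # DECnet was already configured on an earlier run; answer enough to
        # exit and leave the existing configuration in place.
        pass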

The setup_vms_community_kvm.py script expects a VMS_PASSWORD environment variable to be set before it is run.  It also expects the VM to be in an off state, as it takes care of all the boot menu setup.
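
Those preconditions could be checked with something like this small sketch; the domain name robin is from my setup, and the checks are illustrative, not the script's actual code.

    import os
    import subprocess
    import sys

    # The SYSTEM password comes from the VMS_PASSWORD environment variable,
    # and the domain must be shut off before the script starts it and
    # drives the boot menu.
    password = os.environ.get("VMS_PASSWORD")
    if not password:
        sys.exit("VMS_PASSWORD is not set")

    state = subprocess.run(["sudo", "virsh", "domstate", "robin"],
                           capture_output=True, text=True, check=True)
    if state.stdout.strip() != "shut off":
        sys.exit("VM 'robin' must be shut off before running this script")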

The script first sets the BOOTMGR settings to do a conversational boot, where it sets up the SYSTEM password and the minimum information needed for DECnet.

It then reboots the system and configures DECnet Phase IV.   DECnet is configured first because it changes the MAC address of the network interface, so it must be done before starting up TCP/IP and SSH.

The next step is to run the TCPIP$CONFIG.COM script to set up minimal TCP/IP networking.   I have the script hard-coded to configure the first device it finds to use DHCP.   I set up some other settings for routing and BIND/DNS, which appear to be ignored for a DHCP configuration.

Then a second reboot, and SSH is started.  Apparently adding the DECnet startup prevented SSH from being started automatically.   Once I replace systartup_vms.com and the other files with my standard set, that will get fixed.

The rest of the setup will need to be automated as a later step.  SSH access should allow me to try tools like Ansible for management, and I also have the option of DECnet.




