How to build and run a Secure VM using the Ultravisor on an OpenPOWER machine for Eurosys
At a high level, the following steps must be performed in order to run Ultravisor-secured guests on an OpenPOWER machine:
Task A
- Build a PNOR image with Ultravisor-enabled firmware.
- Install the PNOR image.
- Install an OS on the host.
- Build and install a Nuvoton TPM-enabled kernel.
- Build and install QEMU with support for Secure VMs.
Task B
- Install and configure a VM.
- Configure secure memory to enable PEF/Ultravisor.
- Build the svm-tools and svm-password-agent RPMs.
- Convert the VM to a secure VM.
- Validate the confidentiality of the secure VM.
Task C
- Run benchmarks on the secure VM.
Prerequisites: a Protected Execution Framework (PEF) capable OpenPOWER platform with a Nuvoton TPM.
The Witherspoon and Mihawk platforms from IBM have this capability.
A step-by-step description of each task follows.
[Eurosys artifact note: We have set up two machines.
(machine 1) etna-yz.res.ibm.com - This machine is PEF configured and enabled, and has a Secure VM already configured. Use this machine to only run benchmarks (Task C), or to convert a VM to an SVM (Task B) followed by running benchmarks (Task C). For Task C, use the ready-made fed32-svm VM. For Task B, you have the option of skipping some steps and executing only substeps ix and x; if you choose to skip, use the ready-made fed32-for-converting-to-SVM VM. This machine can also be used to build the PNOR firmware.
(machine 2) danby-yz.res.ibm.com - This machine is PEF capable, but not PEF enabled. Use this machine to execute Tasks A, B, and C.]
Instructions on how to update just the Ultravisor firmware are captured in the last section.
Video showing the steps to build the Ultravisor-enabled PNOR.
NOTE: The steps below are verified to work on Ubuntu 18.04, Ubuntu 20.04, and RHEL 8.3.
These steps do not work on Fedora 32 and above, because some packages fail to compile; the compiler appears to enforce stricter rules.
[Eurosys artifact note: Log in to etna-yz.res.ibm.com as eurosys_r1 or eurosys_r2: ssh [email protected]
We have set up an ubuntu18.04 VM on that machine. Log in to that VM with
ssh ubuntu18.04
and follow the steps below.
If the VM is not active, start it using sudo virsh start ubuntu18.04 --console
The root disk passphrase is captured in the README file in the home directory on etna-yz.]
[Eurosys artifact note: If you choose to skip this step, you can use a ready-made PNOR available in your home directory on etna-yz. The name of the PNOR file is witherspoon.uv.pnor.squashfs.tar ]
1. git clone https://github.com/rampai/op-build.git
2. cd op-build
3. git submodule init && git submodule update
#install all the dependencies to build the pnor.
4. bash dependency_install.sh
**[Eurosys artifact NOTE: your password to get sudo privileges is in the README file in your home directory on etna-yz]**
#build the pnor for witherspoon.
5. ./op-build witherspoon_ultravisor_defconfig && ./op-build
FYI: this takes about two hours to complete.
The PNOR is generated as the file witherspoon.pnor.squashfs.tar in the directory output/images.
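As a quick sanity check once the build finishes, confirm the image was produced (a minimal sketch; the path comes from the previous step):
```
# The freshly built PNOR image should exist and be non-trivial in size
ls -lh output/images/witherspoon.pnor.squashfs.tar
```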
Video showing the steps to flash the Ultravisor-enabled PNOR on the POWER9 Mihawk
[Eurosys artifact note: These steps have to be executed on the BMC of danby-yz.res.ibm.com. ]
Step 1) Flash the new PNOR that contains the Ultravisor firmware:
[Eurosys artifact notes: The BMC's web interface is not directly accessible from the internet. Please set up an ssh tunnel to the BMC. From your machine, run the following command in a separate terminal
ssh [email protected] -L 2201:192.168.2.21:443 -L 2203:192.168.2.21:2200
and keep this terminal active until the PNOR is successfully flashed]
- Log in to the BMC web interface
[Eurosys artifact notes: in a browser, connect to https://localhost:2201 . It will warn you of an invalid certificate; ignore it for now and proceed. Login credentials will be made available in the README file in the home directory of etna-yz]
- Click on 'Server Configuration'
- Click on 'Firmware'
- Scroll down to 'Server images'
- Under there, click on the button 'Choose file'
- Choose the firmware file on your local disk: the one that was created using the PNOR tools.
- Click 'Upload firmware'
- After about 2 minutes, the firmware file will be available as one of the firmware options under 'Server images'.
- Click 'Activate'
- Click 'ACTIVATE FIRMWARE FILE WITHOUT REBOOTING SERVER'
- Click 'Continue'
- The image state will first turn to 'Activating' and, after about 3-4 minutes, to 'Active'.
- Click on 'Server power' near the top right corner.
- Click on 'Orderly - OS shuts down - then server reboots'
- Click on the 'Reboot' button.
- It will ask you to confirm. Click on the 'Reboot' button again.
Step 4) Boot up the server:
- Connect to the console of the machine to confirm that it boots up correctly:
$ ssh localhost -p 2203 -l root
[Eurosys artifact note: use the BMC root password captured in the README file in the home directory on etna-yz]
Video showing the Fedora 33 install on the POWER9 Mihawk
The steps below help install Fedora 33.
[Eurosys artifact note: We are installing Fedora 33 on the host. Later we will install Fedora 32 in the guest. We could install Fedora 32 on the host as well, but doing so would need an additional step of updating the libvirt packages to some non-standard ones. Let's avoid that.]
- On the Petitboot menu, type 'n' for a new menu entry. A new dialog screen called 'Petitboot Option Editor' pops up.
- Select 'Specify paths/URLs manually'
- In the Kernel field, fill in http://download.fedoraproject.org/pub/fedora-secondary/releases/33/Server/ppc64le/os/ppc/ppc64/vmlinuz
- In the Initrd field, fill in http://download.fedoraproject.org/pub/fedora-secondary/releases/33/Server/ppc64le/os/ppc/ppc64/initrd.img
- In the Boot arguments field, fill in ip=<host ip address>::<gateway ip address>:<netmask>:<hostname>:<ethernet interface name>:none nameserver=<nameserver ip> nameserver=<second name server ip> inst.vnc inst.stage2=http://mirrors.rit.edu/fedora/fedora-secondary/releases/33/Server/ppc64le/os/
[Eurosys artifact notes: use the following: ip=129.34.18.24::129.34.18.2:129.34.18.255:danby-yz.res.ibm.com:enP52p1s0f2:none nameserver=129.34.20.80 nameserver=129.34.20.80 inst.vnc inst.stage2=http://mirrors.rit.edu/fedora/fedora-secondary/releases/33/Server/ppc64le/os/
If the Petitboot networking is not up, configure it by: (a) select 'System Configuration' (b) select 'Static IP configuration' (c) select 'enP52p1s0f2' (d) in IP/mask, fill in 129.34.18.24/24 (e) in Gateway, fill in 129.34.18.2 (f) in DNS Server(s), fill in 129.34.20.80 (g) select OK.
Feel free to wipe out/overwrite the current OS installation on this machine.
]
- Select OK.
- Follow the on-screen instructions to install the OS.
[Eurosys artifact notes: Please document the root password of this machine in a file danby_info.txt in your home directory on the machine etna-yz. This will enable us to support you in case help is needed.]
Video showing the kernel build with enablement for the Nuvoton TPM driver
[Eurosys artifact notes: log in to the newly installed host danby-yz.res.ibm.com: ssh danby-yz.res.ibm.com ]
- git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
- cd linux-2.6
- git checkout v5.11 -b 5.11
(any kernel version above 5.9 is fine)
- Get a good config file for your target machine. Generally it is found in a /boot/config* file on the target machine. Copy the contents of that file to the .config file in the local directory:
cp /boot/config-5.9.12-200.fc33.ppc64le .config
- Enable the Nuvoton TPM driver in the .config file by setting
CONFIG_TCG_TIS_I2C_NUVOTON=m
- make oldconfig
- make && make modules_install && make install
- Reboot the machine and boot up on the newly built kernel.
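For convenience, here is the same sequence as a single shell sketch (it derives the config from the running kernel via uname -r and uses the kernel tree's scripts/config helper to set the option; adjust versions to your machine):
```
# Build and install a kernel with the Nuvoton TPM driver enabled as a module
git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
cd linux-2.6
git checkout v5.11 -b 5.11                      # any kernel version above 5.9 is fine
cp /boot/config-$(uname -r) .config             # start from the distro config
./scripts/config --module TCG_TIS_I2C_NUVOTON   # sets CONFIG_TCG_TIS_I2C_NUVOTON=m
make oldconfig
make -j$(nproc) && make modules_install && make install
```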
Video showing the QEMU build with Secure VM capability
[Eurosys artifact notes: log in to the host danby-yz.res.ibm.com: ssh danby-yz.res.ibm.com ]
Build a QEMU that has the Secure VM capability. This step will not be needed once upstream QEMU has this capability integrated.
- Run the following commands on the host OS:
$ git clone https://git.qemu.org/git/qemu.git
$ cd qemu
$ git checkout v5.2.0 -b 5.2.0-pef
$ wget https://github.com/farosas/qemu/commit/e25370c503ecde1698bbfaff2f965c5b43e8bef6.patch \
    -O 0001-spapr-Add-capability-for-Secure-PEF-VMs.patch
$ git am 0001-spapr-Add-capability-for-Secure-PEF-VMs.patch
$ ./configure --target-list=ppc64-softmmu
$ make -j $(nproc)
$ sudo make install
- All the QEMU binaries will be installed in /usr/local/bin
- Disable SELinux: edit /etc/selinux/config, set SELINUX=disabled, and reboot the machine. Or configure SELinux to allow access to files in /usr/local/bin. Without this change, libvirt fails to launch QEMU from /usr/local/bin.
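A non-interactive way to make the same edit (a sketch, assuming the stock /etc/selinux/config layout):
```
# Switch SELinux to disabled in the stock config file, then reboot
sudo sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
sudo reboot
```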
Step 1) Make a virtual machine and install Fedora 32 in it:
- Launch virt-install:
$ /usr/bin/virt-install --connect=qemu:///system \
    --hvm --accelerate \
    --name 'fed32' \
    --machine pseries \
    --memory=22000000 \
    --vcpu=8,maxvcpus=8,sockets=1,cores=8,threads=1 \
    --location https://dl.fedoraproject.org/pub/fedora-secondary/releases/32/Everything/ppc64le/os/ \
    --nographics \
    --serial pty \
    --memballoon model=virtio \
    --controller type=scsi,model=virtio-scsi \
    --disk path=/var/lib/libvirt/images/fedora32-secure.qcow2,bus=scsi,size=30,format=qcow2 \
    --network=bridge=virbr0,model=virtio,mac=52:54:00:0a:90:bc \
    --mac=52:54:00:0a:90:bc \
    --noautoconsole \
    --boot emulator=/usr/local/bin/qemu-system-ppc64 \
    --extra-args="console=tty0 inst.text console=hvc0" && \
  virsh console fed32
- It will ask for a VNC password. Provide a VNC password of your choice and confirm it. It will then provide you with an IP address and the VNC port to connect to.
- In my case I am given 'Please manually connect your vnc client to 192.168.122.133:1 to begin the install.'
- In a separate terminal, create an ssh tunnel to that port:
$ ssh danby-yz.res.ibm.com -L 2202:192.168.122.133:5901
- Connect to the VNC port from your local machine using
vncviewer localhost:2202
- Type in the VNC password that was created in the earlier step and start the installation of Fedora.
- During installation, select 'encrypt my data' to encrypt the root disk and use a suitable disk passphrase.
- Once installation is complete, the VM is shut down. Restart it and make sure it is good and healthy:
$ virsh start fed32 --console
- It should ask for the root disk passphrase, then boot up, and you should be able to log in.
- Now shut down the VM:
$ virsh shutdown fed32
Step 2) Add a TPM device to the VM and add the secure capability:
- Using virsh edit fed32, add the qemu namespace to the kvm domain node:
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
- And add the following XML excerpt in the devices section:
<tpm model='spapr-tpm-proxy'>
  <backend type='passthrough'>
    <device path='/dev/tpmrm0'/>
  </backend>
</tpm>
- And finally, add the following in a separate section:
<qemu:commandline>
  <qemu:arg value='-M'/>
  <qemu:arg value='pseries,cap-svm=off'/>
  <qemu:arg value='-global'/>
  <qemu:arg value='virtio-scsi-pci.disable-legacy=on'/>
  <qemu:arg value='-global'/>
  <qemu:arg value='virtio-scsi-pci.disable-modern=off'/>
  <qemu:arg value='-global'/>
  <qemu:arg value='virtio-scsi-pci.iommu_platform=on'/>
  <qemu:arg value='-global'/>
  <qemu:arg value='virtio-blk-pci.disable-legacy=on'/>
  <qemu:arg value='-global'/>
  <qemu:arg value='virtio-blk-pci.disable-modern=off'/>
  <qemu:arg value='-global'/>
  <qemu:arg value='virtio-blk-pci.iommu_platform=on'/>
  <qemu:arg value='-global'/>
  <qemu:arg value='virtio-net-pci.disable-legacy=on'/>
  <qemu:arg value='-global'/>
  <qemu:arg value='virtio-net-pci.disable-modern=off'/>
  <qemu:arg value='-global'/>
  <qemu:arg value='virtio-net-pci.iommu_platform=on'/>
  <qemu:arg value='-global'/>
  <qemu:arg value='virtio-serial-pci.disable-legacy=on'/>
  <qemu:arg value='-global'/>
  <qemu:arg value='virtio-serial-pci.disable-modern=off'/>
  <qemu:arg value='-global'/>
  <qemu:arg value='virtio-serial-pci.iommu_platform=on'/>
  <qemu:arg value='-global'/>
  <qemu:arg value='virtio-balloon-pci.disable-legacy=on'/>
  <qemu:arg value='-global'/>
  <qemu:arg value='virtio-balloon-pci.disable-modern=off'/>
  <qemu:arg value='-global'/>
  <qemu:arg value='virtio-balloon-pci.iommu_platform=on'/>
</qemu:commandline>
The 'pseries,cap-svm=off' option on the QEMU command line primes the SVM capability in the VM, but does not enable it. All the other virtio-related args must be explicitly specified as above. There are patches from David Gibson that will make these options unnecessary; once the patches are merged, the options can be deleted.
- Save the XML file and exit the editor.
- Here is a sample XML file in full:
```
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
<name>fed32</name>
<uuid>49c508c1-b664-4ea0-b260-a9e85dcfd694</uuid>
<metadata>
<libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
<libosinfo:os id="http://fedoraproject.org/fedora/33"/>
</libosinfo:libosinfo>
</metadata>
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>4194304</currentMemory>
<vcpu placement='static'>8</vcpu>
<os>
<type arch='ppc64le' machine='pseries-5.2'>hvm</type>
<boot dev='hd'/>
</os>
<cpu mode='custom' match='exact' check='none'>
<model fallback='forbid'>POWER9</model>
<topology sockets='1' dies='1' cores='8' threads='1'/>
</cpu>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/local/bin/qemu-system-ppc64</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/var/lib/libvirt/images/fedora32-secure.qcow2'/>
<target dev='sda' bus='scsi'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<controller type='scsi' index='0' model='virtio-scsi'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</controller>
<controller type='usb' index='0' model='qemu-xhci' ports='15'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</controller>
<controller type='pci' index='0' model='pci-root'>
<model name='spapr-pci-host-bridge'/>
<target index='0'/>
</controller>
<controller type='virtio-serial' index='0'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</controller>
<interface type='bridge'>
<mac address='52:54:00:0a:90:bc'/>
<source bridge='virbr0'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
</interface>
<serial type='pty'>
<target type='spapr-vio-serial' port='0'>
<model name='spapr-vty'/>
</target>
<address type='spapr-vio' reg='0x30000000'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
<address type='spapr-vio' reg='0x30000000'/>
</console>
<channel type='unix'>
<target type='virtio' name='org.qemu.guest_agent.0'/>
<address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>
<tpm model='spapr-tpm-proxy'>
<backend type='passthrough'>
<device path='/dev/tpmrm0'/>
</backend>
</tpm>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</memballoon>
<rng model='virtio'>
<backend model='random'>/dev/urandom</backend>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</rng>
<panic model='pseries'/>
</devices>
<qemu:commandline>
<qemu:arg value='-M'/>
<qemu:arg value='pseries,cap-svm=on'/>
<qemu:arg value='-global'/>
<qemu:arg value='virtio-scsi-pci.disable-legacy=on'/>
<qemu:arg value='-global'/>
<qemu:arg value='virtio-scsi-pci.disable-modern=off'/>
<qemu:arg value='-global'/>
<qemu:arg value='virtio-scsi-pci.iommu_platform=on'/>
<qemu:arg value='-global'/>
<qemu:arg value='virtio-blk-pci.disable-legacy=on'/>
<qemu:arg value='-global'/>
<qemu:arg value='virtio-blk-pci.disable-modern=off'/>
<qemu:arg value='-global'/>
<qemu:arg value='virtio-blk-pci.iommu_platform=on'/>
<qemu:arg value='-global'/>
<qemu:arg value='virtio-net-pci.disable-legacy=on'/>
<qemu:arg value='-global'/>
<qemu:arg value='virtio-net-pci.disable-modern=off'/>
<qemu:arg value='-global'/>
<qemu:arg value='virtio-net-pci.iommu_platform=on'/>
<qemu:arg value='-global'/>
<qemu:arg value='virtio-serial-pci.disable-legacy=on'/>
<qemu:arg value='-global'/>
<qemu:arg value='virtio-serial-pci.disable-modern=off'/>
<qemu:arg value='-global'/>
<qemu:arg value='virtio-serial-pci.iommu_platform=on'/>
<qemu:arg value='-global'/>
<qemu:arg value='virtio-balloon-pci.disable-legacy=on'/>
<qemu:arg value='-global'/>
<qemu:arg value='virtio-balloon-pci.disable-modern=off'/>
<qemu:arg value='-global'/>
<qemu:arg value='virtio-balloon-pci.iommu_platform=on'/>
</qemu:commandline>
</domain>
```
Video showing the steps to enable the Ultravisor and the secure capability in the VM
- The following command on the host displays the size of the secure memory configured:
$ sudo nvram -p ibm,skiboot --print-config
- To change the amount of secure memory configured to 64 GB, do the following:
$ sudo nvram -p ibm,skiboot --update-config smf_mem_amt=0x1000000000
(We recommend configuring at least 16 GB of secure memory.)
- To verify the change:
$ sudo nvram -p ibm,skiboot --print-config
"ibm,skiboot" Partition
--------------------------
smf_mem_amt=0x1000000000
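The smf_mem_amt value is a byte count in hex: 0x1000000000 bytes is 64 GiB. A sketch to compute the value for another size, e.g. the recommended 16 GiB minimum:
```
# 16 GiB expressed in bytes, printed in hex -> smf_mem_amt=0x400000000
printf 'smf_mem_amt=0x%x\n' $((16 * 1024 * 1024 * 1024))
```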
- Shut down the machine:
$ sudo shutdown now
- Power-cycle the machine: ssh into the BMC and invoke the following command.
$ ssh root@localhost -p 2201
[use the same root password credentials captured in the README file residing in the home directory on etna-yz.res.ibm.com]
root@witherspoon:~# obmcutil --wait poweroff && obmcutil --wait poweron
This can be done through the web console too.
- On reboot, the presence of the file /sys/firmware/ultravisor/msglog indicates that the Ultravisor is enabled.
- [Eurosys artifact note: Clone the VM to create a new VM fed32-svm using
virt-clone --original fed32 --name fed32-svm --auto-clone]
- Enable the cap-svm capability of the fed32-svm VM: run
virsh edit fed32-svm
and change the line corresponding to cap-svm as shown below.
<qemu:arg value='pseries,cap-svm=on'/>
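To confirm the edit took effect without reopening the editor, a quick check (sketch):
```
# Should print the qemu:arg line with cap-svm=on
virsh dumpxml fed32-svm | grep cap-svm
```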
Video showing the steps to create the svm-tools and svm-password-agent RPMs
[Eurosys notes: we have placed two RPMs in your home directory. You can install those RPMs and skip these steps: svm-password-agent-0.0.1-0.fc32.ppc64le.rpm and svm-tool-0.1.0-1.noarch.rpm ]
- This step can be skipped if you have already installed the RPMs.
- Install the rpm-build and poetry RPMs:
dnf install poetry rpm-build
- Clone the svm-tools repository:
git clone https://github.com/open-power/svm-tools.git
- Make the RPMs:
cd svm-tools
make rpm
The svm-tools and svm-password-agent RPMs are now ready under the RPMDIR directory.
Video showing the steps to convert a VM to a secure VM
- Boot up the fed32-svm VM and log in to the VM.
- Install the svm-password-agent RPM. This agent enables the VM to boot without user interaction:
$ sudo dnf install svm-password-agent-0.1.0-1.noarch.rpm
[NOTE: the password agent depends on the availability of the nc command. Make sure it is installed; if not, install it:]
$ sudo dnf install nmap-ncat
- Set an environment variable KERN_VER capturing the version of the kernel:
export KERN_VER=5.8.15-301.fc33.ppc64le
[ please set KERN_VER to the correct version, depending on the version of the kernel installed in your VM ]
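If the VM is already running the kernel you intend to secure, a sketch that derives the value automatically:
```
# Use the currently running kernel's version string
export KERN_VER=$(uname -r)
echo ${KERN_VER}
```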
- Regenerate the initramfs to absorb the SVM password agent:
mkinitrd /boot/initramfs-${KERN_VER}.img ${KERN_VER}
[ use the --force option if it complains ]
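To check that the agent actually landed in the new image, a sketch using lsinitrd from dracut (assuming the agent's files have 'svm' in their paths):
```
# List the initramfs contents and look for the password-agent files
lsinitrd /boot/initramfs-${KERN_VER}.img | grep -i svm
```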
- Install the svm-tool RPM. The tools help create the digital data needed to securely boot the VM. This digital data is called the Enter-Secure-Mode blob, also known as the ESM blob.
$ sudo dnf install svm-tool-0.1.0-1.noarch.rpm
- Create a directory called 'key':
$ mkdir key; cd key
- Copy the kernel image into the directory:
$ cp /boot/vmlinuz-${KERN_VER} .
- Copy the initramfs image into the directory:
$ cp /boot/initramfs-${KERN_VER}.img .
- Collect the kernel command line parameters in a file named 'cmd' and append the SVM-related options:
$ cat /proc/cmdline > cmd
$ sed -ie 's/$/ svm=on xive=off swiotlb=262144/g' cmd
[NOTE: XIVE is currently not supported in SVMs, so explicitly switch it off. Also, I/O in a secure VM has to be bounced through bounce buffers; ensure that enough bounce buffers are available for I/O-intensive workloads using the option swiotlb=262144 ]
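A quick sanity check that the options were appended (sketch):
```
# Should print the appended SVM options from the end of the collected command line
grep -o 'svm=on xive=off swiotlb=262144' cmd
```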
- Get the host TPM's public wrapping key. Run the following command on the host:
$ sudo tssreadpublic -ho 81800001 -opem tpmrsapubkey.pem
Then copy the file tpmrsapubkey.pem to the 'key' directory on the guest.
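One way to do the copy, a sketch assuming the guest is reachable at the 192.168.122.133 address assigned earlier and accepts ssh as root (substitute your own address and user):
```
# Run on the host: push the TPM public wrapping key into the guest's key directory
scp tpmrsapubkey.pem root@192.168.122.133:key/
```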
- Create a file key_file.txt and capture the root disk passphrase in that file:
$ echo "root disk passphrase" > key_file.txt
e.g., if your root disk passphrase is "abc123": echo "abc123" > key_file.txt
- Generate the owner's public/private key pair. This is a one-time step taken by the owner of the VM. It generates two files, rsapubkey and rsaprivkey:
$ svm-tool esm generate -p rsapubkey -s rsaprivkey
[ NOTE: these two keys can be reused for rekeying other VMs owned by the same owner ]
- Create an svm_blob.yml file and fill in the following contents [ replace ${KERN_VER} with the correct value; also replace ${cmd} with the contents of the file cmd ]:
- origin:
    pubkey: "rsapubkey"
    seckey: "rsaprivkey"
- recipient:
    comment: "HOSTNAME TPM"
    pubkey: "tpmrsapubkey.pem"
- file:
    name: "rootd"
    path: "key_file.txt"
- digest:
    args: "${cmd}"
    initramfs: "initramfs-${KERN_VER}.img"
    kernel: "vmlinuz-${KERN_VER}"
- Here's what the fields in the file mean:
- origin/pubkey: the file containing the owner's public key.
- origin/seckey: the file containing the owner's private key.
- recipient/comment: a comment that can be anything.
- recipient/pubkey: the file containing the public wrapping key of the host TPM.
- digest/args: the kernel command line of the kernel running in the guest. It is generally the output of the command cat /proc/cmdline. Of course, you have to make sure that svm=on xive=off swiotlb=262144 is added to the cmdline.
- digest/initramfs: the file containing the initramfs of the kernel that you want to boot securely.
- digest/kernel: the file containing the kernel that you want to boot securely.
- file: this section is optional. It captures any secrets to be made available to the SVM. There has to be one file section per secret.
- file/name: the name of the secret. This name acts as a handle for the SVM to procure the value of the secret. For the root disk passphrase, "rootd" is a reserved name; use it to capture the root disk passphrase.
- file/path: the name of the file holding the secret. The secret cannot be larger than 64 bytes.
Here is an example svm_blob.yml file:
- origin:
    pubkey: "rsapubkey"
    seckey: "rsaprivkey"
- recipient:
    comment: "etna TPM"
    pubkey: "tpmrsapubkey.pem"
- file:
    name: "rootd"
    path: "key_file.txt"
- digest:
    args: "BOOT_IMAGE=/vmlinuz-5.10.15-100.fc32.ppc64le root=/dev/mapper/fedora-root ro rd.lvm.lv=fedora/root rd.luks.uuid=luks-2b39ebb5-2910-421a-a8aa-a44e9bd4457da rd.lvm.lv=fedora/swap console=hvc0 svm=on xive=off swiotlb=262144"
    initramfs: "initramfs-5.10.15-100.fc32.ppc64le.img"
    kernel: "vmlinuz-5.10.15-100.fc32.ppc64le"
- Generate the ESM blob (ESM stands for Enter Secure Mode). [ Make sure the binutils package is installed on the system: dnf install binutils ]
$ svm-tool esm make -b test_esmb.dtb -y svm_blob.yml
This will generate the ESM blob in the file test_esmb.dtb.
NOTE: the ESM blob is sometimes referred to as the ESM-operand.
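The .dtb extension suggests a flattened device tree, so if the dtc package happens to be installed you can peek at the blob's structure (a sketch; fdtdump availability is an assumption):
```
# Dump the device-tree structure of the ESM blob as a quick sanity check
fdtdump test_esmb.dtb | head -20
```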
- Add the ESM blob to the initramfs and generate a new initramfs:
$ svm-tool svm add -i initramfs-${KERN_VER}.img -b test_esmb.dtb -f esmb-initrd.img
This will generate a new initramfs in the file esmb-initrd.img.
- Edit the file /etc/default/grub to add svm=on xive=off swiotlb=262144 to the GRUB_CMDLINE_LINUX variable. This is my grub file:
```
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="ofconsole"
GRUB_CMDLINE_LINUX="rd.luks.uuid=luks-0219595c-4d6b-43e0-93b6-323be505d6c0 console=hvc0 svm=on xive=off swiotlb=262144"
GRUB_DISABLE_RECOVERY="true"
GRUB_ENABLE_BLSCFG=true
GRUB_TERMINFO="terminfo -g 80x24 console"
GRUB_DISABLE_OS_PROBER=true
```
- Replace the initramfs in the /boot directory. Make a copy of the original initramfs first and then replace it with the newly generated initramfs:
$ sudo cp /boot/initramfs-${KERN_VER}.img /boot/initramfs-${KERN_VER}.img.orig
$ sudo cp esmb-initrd.img /boot/initramfs-${KERN_VER}.img
- Regenerate the GRUB config:
$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
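To confirm the new command line made it into the generated configuration, a sketch (on BLS-enabled Fedora the options may land in grubenv rather than grub.cfg, so search the whole directory):
```
# The SVM options should appear somewhere under /boot/grub2
sudo grep -r 'svm=on' /boot/grub2/
```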
- Reboot the fed32-svm VM. On the next reboot, the VM should boot in secure mode.
- Note: There is a bug in the host kernel where the VM will sometimes exit with error -4. This happens while the VM is switching from normal mode to secure mode. Don't panic; restart the VM with
virsh start fed32-svm --console
and it should boot up.
- Log in and verify that it is a secure VM with:
[root@localhost ~]# cat /sys/devices/system/cpu/svm
The output must be '1'.
Video showing the steps to validate the confidentiality of the Secure VM
[NOTE: This is an optional step.]
[NOTE: the dump operation on an SVM larger than 4 GB can hang the system; there is a known memory-exhaustion bug in the Ultravisor. To validate this step, change the size of the SVM to 4 GB and try the steps below.]
- Take a dump of the Secure VM and check for any plain-text strings. Note that this is not a perfect secrecy checker; we have to build better ways of validating this.
$ virsh dump fed32-svm /home/f32.secure.dump --memory-only --reset
$ strings /home/f32.secure.dump | grep netdev
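Along the same lines, you can grep the dump for a secret known to be inside the guest, e.g. the example root disk passphrase "abc123" from the earlier step (substitute your own); for a secure VM it should not appear:
```
# Expect a count of 0: the guest's memory is not visible in the dump of a secure VM
strings /home/f32.secure.dump | grep -c "abc123"
```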
- Clone the ultravisor repository.
git clone https://github.com/open-power/ultravisor.git
cd ultravisor
- Check out the correct version of the ultravisor. The commit-id used in the PNOR is 52857eb. The commit-id that fixes the H_CEDE hcall bug is e9c930f.
git checkout e9c930f -b eurosys-eval
- Compile the ultravisor:
make -j$(nproc)
An ultravisor binary file ultra.lid.xz.stb is generated.
- Set up the tunnel to the BMC of danby-yz.res.ibm.com. Keep this tunnel active.
ssh [email protected] -L 2222:192.168.2.21:22
- In a separate terminal, copy the ultravisor binary file ultra.lid.xz.stb to the BMC (note that scp takes the port option before the operands):
scp -P 2222 ultra.lid.xz.stb root@localhost:/usr/local/share/pnor/UVISOR
[ The root password of the BMC is available in the README file in the home directory on the machine etna-yz. ]
- Log in to the BMC:
ssh root@localhost -p 2222
- Restart the server:
obmcutil chassisoff && sleep 10 && systemctl restart mboxd.service && obmcutil chassison && sleep 10 && obmcutil poweron
You should start seeing the boot-up messages on the console.
- Once the machine has fully booted, verify the version of the active ultravisor:
head -1 /sys/firmware/ultravisor/msglog