Deploying Oracle Linux Hyperconverged Infrastructure

This document is still under review. Sections of this document may change, and further enhancements or options may be introduced.
Oracle Linux Virtualization Manager has been integrated with Gluster 6, an open source scale-out distributed file system, to provide a hyperconverged solution where both compute and storage are provided from the same hosts. Gluster volumes residing on the hosts are used as storage domains in the Manager to store the virtual machine images. Oracle Linux Virtualization Manager is run as a self-hosted engine within a virtual machine on these hosts.
For more information on Gluster 6, see the Gluster documentation.
Prerequisites
- You need a minimum of 3 servers running the latest Oracle Linux 7 with a Minimal Installation. Follow the instructions in the Oracle® Linux 7: Installation Guide.
- Ensure that the firewalld service is enabled and started. For more information about firewalld, see Controlling the firewalld Firewall Service in the Oracle® Linux 7: Administrator's Guide.
- (Optional) If you are using a proxy server for Internet access, configure Yum with the proxy server settings. For more information, see Configuring Use of a Proxy Server in the Oracle® Linux 7: Administrator's Guide.
- Make sure that all three Oracle Linux 7 servers have hostnames/FQDNs that can be resolved in the environment, either through DNS or through the /etc/hosts file on each server.
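If DNS is not available, an /etc/hosts fragment like the following can be added on all three servers (the hostnames and addresses below are placeholders, not values from this document; substitute your own):

```
# Hypothetical example entries; replace with your real addresses and FQDNs.
192.0.2.11  gluster-host1.example.com  gluster-host1
192.0.2.12  gluster-host2.example.com  gluster-host2
192.0.2.13  gluster-host3.example.com  gluster-host3
192.0.2.10  engine.example.com         engine
```

Note that the engine FQDN must also resolve, per the next prerequisite.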
- You must have a fully qualified domain name prepared for your self-hosted engine. Forward and reverse lookup records must both be set in DNS. The engine should use the same subnet as the management network.
- You must configure password-less SSH from the first server to itself and to the other 2 servers. This is required for the Gluster deployment.
On the first host (from which the deployment will run), run the following commands:
# ssh-keygen
# ssh-copy-id [email protected]
# ssh-copy-id [email protected]
# ssh-copy-id [email protected]
Installing the Required Packages
- Install the Oracle Linux Virtualization Manager Release 4.3.10 package. For more information, refer to the Oracle® Linux Virtualization Manager: Getting Started Guide.
# yum install oracle-ovirt-release-el7
- On all 3 hosts, install the following packages:
cockpit-ovirt-dashboard (provides a UI for the installation)
vdsm-gluster (plugin to manage gluster services)
# yum install cockpit-ovirt-dashboard vdsm-gluster
- On the first host, install the following packages:
ovirt-engine-appliance (for the Engine virtual machine installation)
gluster-ansible-roles (for deploying, configuring, and maintaining GlusterFS clusters)
# yum install ovirt-engine-appliance gluster-ansible-roles
Setting up the hyperconverged environment
- Log in to the Cockpit UI
In a browser, log in to the Cockpit management interface of the first Oracle Linux 7 host, for example, https://gluster-host1-address:9090/
- Start the deployment wizard
Click Virtualization > Hosted Engine, then click the Start button under Hyperconverged.
- Run the Gluster wizard
- Host Selection
Enter the FQDN/IP address of each of the 3 servers on which Gluster will be deployed.
- Package selection
This is an optional step to install any additional packages required on all hosts. Since the required packages are already installed, this step can be skipped.
- Volume tab
This step in the wizard defines the Gluster volumes that need to be created.
Later in the wizard, these Gluster volumes are used to create storage domains in Oracle Linux Virtualization Manager.
The first volume in the list is used to host the Hosted Engine virtual disk. As guidance, we recommend creating 2 additional Gluster volumes:
- vmstore : hosts the OS disks of the virtual machines.
- data : hosts the data disks of the virtual machines.
The volumes are separated in this way to ease backup, on the assumption that only the data volume needs to be backed up. All 3 Gluster volumes are created as Data storage domains.
A Gluster volume can be created as an Arbiter type volume to save storage capacity. In this case, the 3rd host does not need the same capacity as the first two hosts.
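The wizard issues the equivalent Gluster commands on your behalf. As a rough sketch only (the hostnames and brick paths below are hypothetical, not taken from this document), an arbiter volume corresponds to something like:

```shell
# Sketch under assumed names: host1-3 and the brick paths are placeholders.
# "replica 3 arbiter 1" means the third brick stores only file metadata,
# which is why the arbiter host needs far less capacity than the other two.
gluster volume create data replica 3 arbiter 1 \
    host1:/gluster_bricks/data/data \
    host2:/gluster_bricks/data/data \
    host3:/gluster_bricks/data/data
gluster volume start data
```

There is normally no need to run this by hand when using the wizard; it is shown only to clarify what an Arbiter type volume is.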
- Brick setup tab
The Bricks tab configures the devices to use for the Gluster volumes defined in the previous step.
If the devices used for bricks are configured as RAID devices, provide the information in the RAID information section. These parameters are used to calculate the optimal alignment values for the LVM and filesystem layers created on the device.
Brick configuration allows for per-host definition of bricks. This is useful when the device names are not uniform across the hosts.
- LV Name : name of the logical volume (LV) created on the brick. This is read-only and based on the Gluster volumes defined in the previous step.
- Device Name : name of the device used to create the brick. Either the same device or different devices can be used for different Gluster volumes. For instance, engine can use device sdb while vmstore and data use device sdc.
- Size (GB) : size of the LV in gigabytes.
- Thinp : checkbox indicating whether the LV should be thinly provisioned. Thinly provisioned LVs are not supported together with dedupe and compression on the device.
- Mount point : path where the brick is mounted. Determined from the brick directory provided in the previous step.
- Enable Dedupe & Compression : checkbox indicating whether de-duplication and compression should be turned on for the device. Dedupe and compression are provided at the device layer using the VDO module available since Oracle Linux 7.5. The VDO layer introduces a performance overhead, so it is advised to enable this only if you are using SSD devices.
- Configure LV Cache : use this option to provide an SSD-based lvmcache layer if your brick LVs are on spinning devices.
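After the bricks have been created, the LVM layout can be checked on each host. The mount-point path below is an assumption for illustration; the wizard generates its own volume group and LV names:

```shell
# List the brick LVs, their volume groups, sizes, and any thin pools.
lvs -o lv_name,vg_name,lv_size,pool_lv
# Confirm the brick filesystems are mounted (path assumed; adjust to the
# mount points you entered in the Bricks tab).
df -h /gluster_bricks/*
```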
- Review and Deploy
Now, review the configuration and deploy Gluster.
Once the Gluster deployment is finished, click the "Continue to Hosted Engine Deployment" button to begin configuring your hosted engine.
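Before continuing, you can confirm the deployment from any of the hosts. The volume names below are the ones suggested earlier in this guide; yours may differ:

```shell
# All three hosts should report State: Peer in Cluster (Connected).
gluster peer status
# The engine, vmstore, and data volumes should show Status: Started.
gluster volume info
```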
Hosted engine setup
This section shows you how to deploy the Hosted Engine using the Cockpit UI. Following this process results in Oracle Linux Virtualization Manager running as a virtual machine on the first physical machine in your deployment. It also configures a Default cluster comprised of the three physical machines, and enables Gluster Storage functionality and the virtual-host tuned performance profile for each machine in the cluster.
Provide hostname, domain, network configuration, password, and, if desired, ssh key information for your hosted engine virtual machine.
Then, we answer a set of questions related to the virtual machine that will run the oVirt engine application. First, we tell the installer to use the oVirt Engine Appliance image installed earlier (the ovirt-engine-appliance package). Then, we configure cloud-init to customize the appliance on its initial boot, providing various VM configuration details covering networking, VM RAM and storage amounts, and authentication. Enter the details appropriate to your environment.
Next, supply an admin password for the engine instance, and customize your notification server details.
Review the guest VM configuration and click the Prepare VM button. This creates the local hosted engine VM.
Next, configure the storage domain that will be used to host the self-hosted engine disk.
Review the configuration and click the Finish Deployment button. The local self-hosted engine is then transferred to the configured Gluster storage domain.
Wait for the deployment to complete. This takes some time (about 30 minutes). When the deployment is complete, the wizard reports success.
You can now access Oracle Linux Virtualization Manager.
Access Oracle Linux Virtualization Manager
After deploying the self-hosted engine, open a web browser and go to your Oracle Linux Virtualization Manager administration portal at the address of your hosted engine VM. Log in with the user name admin and the password that you chose during setup.
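You can also check engine health from any of the hosts using the hosted-engine tool, which the deployment installs alongside the other packages:

```shell
# Shows the engine VM state and the score of each host in the cluster.
hosted-engine --vm-status
# Checks that the engine's health page responds.
hosted-engine --check-liveliness
```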
Comments
-
The Gluster volumes: are these set up with 'optimised for VM storage' (gluster's sharding)? We seem unable to start VMs if they are on sharded volumes (disks in the VM are only as large as the shard-block-size).
-
I recommend adding a step before you log in to the Cockpit GUI: the cockpit service needs to be started and enabled, and cockpit must be allowed through firewalld.
# systemctl enable --now cockpit
# firewall-cmd --permanent --add-service=cockpit
# firewall-cmd --reload
-
My test setup shows:
[[email protected] ~]# gluster volume info | grep shard
features.shard: on
features.shard: on
features.shard: on
-
What are the process, guidelines, and requirements for expanding my OLVM HCI infrastructure?
Please provide guidance or point to the document where we can find more information.