In Progress

OpenStack is an open-source cloud operating system that manages many kinds of resources, from virtual machines and containers to bare-metal servers. However, OpenStack’s control plane mainly targets Linux, and FreeBSD is only unofficially supported as a guest operating system. Users can spawn FreeBSD instances on the platform, but it is not currently possible for administrators or operators to set up OpenStack deployments running on FreeBSD hosts. Given the increasingly important role of cloud-based deployments, and the popularity of OpenStack with various cloud providers, the FreeBSD Foundation has contracted Chin-Hsin Chang to port OpenStack components so that OpenStack can be run on FreeBSD hosts.

The main goal of the project is to port Linux-based OpenStack components to FreeBSD/amd64. These components include, but are not limited to:

– Keystone (identity & service catalog)
– Ironic (bare-metal provisioning)
– Nova (instance lifecycle management)
– Neutron (overlay network management)
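
As a rough illustration of how the first two of these services fit together, the hedged Python sketch below authenticates against Keystone with keystoneauth1 and then lists bare-metal nodes through python-ironicclient, which discovers the Ironic endpoint via the service catalog. The endpoint and credentials are placeholders, not part of the project.

    # Minimal sketch: authenticate against Keystone and list Ironic nodes.
    # The URL and credentials below are placeholders.
    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from ironicclient import client as ironic_client

    auth = v3.Password(
        auth_url="https://keystone.example.org:5000/v3",  # placeholder endpoint
        username="admin",
        password="secret",
        project_name="admin",
        user_domain_name="Default",
        project_domain_name="Default",
    )
    sess = session.Session(auth=auth)

    # The Ironic endpoint is looked up through the Keystone service catalog.
    ironic = ironic_client.get_client(1, session=sess)
    for node in ironic.node.list():
        print(node.uuid, node.provision_state)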

The ported components will be added to the FreeBSD ports tree and documented in the FreeBSD Handbook and wiki, and the work will be described in a FreeBSD Journal article.

The work will also involve constructing three FreeBSD-based OpenStack clusters. Cluster 1 will be used by the CHERI team at the University of Cambridge to manage CHERI-enabled Morello boards and to coordinate access requests from developers. Clusters 2 and 3 will be deployed in the main FreeBSD.org cluster. Cluster 2 will be a conversion of the resource management system of the netperf cluster. Cluster 3 will be built as a mini cloud of reference machines for different development branches and architectures, enabling developers to spawn VMs they fully control for their porting or system development needs. Each of these OpenStack deployments will be tested with OpenStack Tempest and Rally to ensure correctness.

Details

The project will comprise three phases. Phase 1 consists of getting essential OpenStack components up and running on amd64 FreeBSD machines acting as cluster control planes. This will involve making OpenStack the lifecycle manager of CHERI-enabled Morello boards. The essential Linux-based components, Keystone (identity) and Ironic (bare-metal provisioning), will be ported to FreeBSD. Although Ironic can be set up as a standalone service, without the service catalog and integrated authentication provided by Keystone, it is preferable to have both Keystone and Ironic ported in this phase. Ironic relies on a BMC (Baseboard Management Controller) for out-of-band control, and plenty of protocols and solutions exist for this, such as IPMI, Redfish, Dell iDRAC, and HPE iLO. However, the Morello development boards do not come with BMCs. To make Ironic manage these ARM boards, dedicated Ironic drivers may have to be developed and tested to handle provisioning, management, and cleanup.
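
To give a sense of the shape of such a driver, the sketch below shows roughly what a custom Ironic power interface could look like for boards without a BMC. The relay controller, its driver_info field, and the _relay helper are invented for illustration; this is not the project's actual driver.

    # Hypothetical Ironic power interface for boards without a BMC.
    # The relay controller and its driver_info field are placeholders.
    from ironic.common import states
    from ironic.drivers import base


    class RelayPower(base.PowerInterface):
        """Toggle board power through an assumed external relay controller."""

        def get_properties(self):
            return {"relay_address": "Address of the power relay. Required."}

        def validate(self, task):
            if not task.node.driver_info.get("relay_address"):
                raise ValueError("relay_address missing from driver_info")

        def get_power_state(self, task):
            relay = _relay(task.node.driver_info["relay_address"])
            return states.POWER_ON if relay.is_on() else states.POWER_OFF

        def set_power_state(self, task, power_state, timeout=None):
            relay = _relay(task.node.driver_info["relay_address"])
            relay.power_on() if power_state == states.POWER_ON else relay.power_off()

        def reboot(self, task, timeout=None):
            self.set_power_state(task, states.POWER_OFF, timeout=timeout)
            self.set_power_state(task, states.POWER_ON, timeout=timeout)


    def _relay(address):
        """Placeholder for whatever out-of-band mechanism the boards expose."""
        raise NotImplementedError("board-specific power control goes here")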

The netperf cluster provides machines of various specifications and lets developers carry out network functionality and performance testing. Currently, the cluster is available on a check-out basis and the machines are managed by the cluster administration team. Although there is already some automation (provisioning and recycling via PXE, IPMI, and various scripts), a complete system with a dashboard would better handle hardware inventory, resource management, and allocation. Before phase 2 begins, the OpenStack-based netperf cluster will be built. The major difference between the CHERI cluster and the netperf cluster is the managed hardware: the CHERI cluster manages arm64 boards, while the netperf cluster will manage amd64 hardware with BMCs for out-of-band control. The differences in BMCs between the clusters may require some additional work.
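
As an example of the kind of workflow this would replace, the hedged sketch below enrolls one amd64 machine in Ironic through its IPMI BMC using openstacksdk. The cloud name, addresses, and credentials are placeholders.

    # Enroll a netperf machine in Ironic via its IPMI BMC (placeholder values).
    import openstack

    conn = openstack.connect(cloud="freebsd-netperf")  # assumed clouds.yaml entry

    node = conn.baremetal.create_node(
        name="netperf-node-01",
        driver="ipmi",
        driver_info={
            "ipmi_address": "10.0.0.11",   # BMC address (placeholder)
            "ipmi_username": "admin",
            "ipmi_password": "secret",
        },
    )

    # Walk the node through Ironic's state machine until it can be scheduled:
    # enroll -> manageable -> available. Each transition is asynchronous.
    conn.baremetal.set_node_provision_state(node, "manage")
    conn.baremetal.set_node_provision_state(node, "provide")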

Phase 2 will involve setting up a mini OpenStack cluster to manage several bare-metal servers and virtual machines in the FreeBSD.org cluster. Running a mini OpenStack cluster within the main FreeBSD.org cluster brings several advantages:

– self-service for users of the cluster
– more efficient lifecycle management of both physical and virtual resources
– assured network connectivity of the servers
– simpler, lower-risk assignment of root privileges

To bring these features to the FreeBSD.org cluster, other crucial OpenStack components must be ported to FreeBSD, including Nova (instance lifecycle management) and Neutron (overlay networking).
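
Once those services are in place, self-service use of the cluster could look roughly like the sketch below, which creates an isolated network and boots a VM with openstacksdk. The cloud, image, and flavor names are placeholders, not part of the planned deployment.

    # Self-service example: create a network and boot a VM (placeholder names).
    import openstack

    conn = openstack.connect(cloud="freebsd-cluster")  # assumed clouds.yaml entry

    net = conn.network.create_network(name="dev-net")
    conn.network.create_subnet(
        network_id=net.id, name="dev-subnet", ip_version=4, cidr="192.0.2.0/24"
    )

    image = conn.compute.find_image("freebsd-14-stable")  # placeholder image
    flavor = conn.compute.find_flavor("m1.small")         # placeholder flavor

    server = conn.compute.create_server(
        name="dev-vm",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": net.id}],
    )
    server = conn.compute.wait_for_server(server)
    print(server.name, "is", server.status)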

In phase 3, a more general cluster setup with tenant-aware networking, i.e., network isolation, will be implemented. At least one Neutron ML2 (Modular Layer 2) driver will be ported to provide this functionality. In cooperation with the relevant OpenStack SIGs (Special Interest Groups), the work will be contributed back to the upstream projects. Another goal is to establish a FreeBSD testing pipeline in those projects.
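
For scale, an ML2 mechanism driver is a small Python class plugged into Neutron. The hedged skeleton below, with a hypothetical FreeBSD if_bridge(4) backend and placeholder VIF details, only hints at the shape of the work rather than showing the actual driver to be ported.

    # Skeleton of an ML2 mechanism driver; backend details are hypothetical.
    from neutron_lib.api.definitions import portbindings
    from neutron_lib.plugins.ml2 import api


    class FreeBSDBridgeMechanismDriver(api.MechanismDriver):
        """Hypothetical driver that would bind ports to FreeBSD if_bridge(4)."""

        def initialize(self):
            # One-time setup, e.g. checking that host-side tooling exists.
            pass

        def create_network_postcommit(self, context):
            # A real driver would create the backing segment (e.g. a VXLAN
            # interface) on the FreeBSD hosts here.
            network = context.current
            print("network created:", network["id"])

        def bind_port(self, context):
            # Offer a binding for each candidate segment; the VIF type and
            # details below are placeholders.
            for segment in context.segments_to_bind:
                context.set_binding(
                    segment[api.ID],
                    portbindings.VIF_TYPE_BRIDGE,
                    {"bridge_name": "bridge0"},
                )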