Initiative to strengthen the infrastructure of the FreeBSD Project, improve its capabilities, and provide better services to its users

Contact: Joseph Mingrone <jrm@freebsdfoundation.org> and Philip Paeps <phil@freebsd.org>

The FreeBSD Foundation invested over $100,000 to install a server cluster in Chicago. This investment aims to strengthen the FreeBSD Project’s infrastructure, improve its capabilities, and provide better services to its users. To support this expansion, the Foundation has partnered with NYI, which generously contributed four racks in its Chicago facility.

The new cluster configuration is designed to optimize the FreeBSD Project’s operational efficiency and includes:

  • Two routers: For directing network traffic.
  • Five package builders: Aimed at accelerating the package release process.
  • Three general-purpose servers: These will enhance the availability and performance of the FreeBSD Project’s public and developer-facing services (Bugzilla, Git, Phabricator, Wiki, etc.).
  • Two package mirrors: One hosted in the new Chicago cluster and one hosted by ISC in California. These are part of the FreeBSD Project’s growing network of pkg.FreeBSD.org and download.FreeBSD.org servers, strategically positioned worldwide to offer faster package downloads (see the pkg(8) sketch after this list).
  • Two CI servers: To improve the speed and efficacy of automated code testing.
  • One admin bastion: A secure entry point for managing the cluster, which runs the Cluster Administration team (clusteradm) tools, cluster DNS, monitoring, and other services needed to administer the systems.
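
As an aside on how this mirror network is used: pkg(8) picks a nearby pkg.FreeBSD.org server through DNS SRV records, so clients benefit from the new Chicago mirror without any configuration change. The repository definition below reflects the defaults shipped with recent FreeBSD releases; treat the comments as illustrative rather than authoritative:

    # /etc/pkg/FreeBSD.conf -- stock repository definition
    FreeBSD: {
      url: "pkg+https://pkg.FreeBSD.org/${ABI}/quarterly",  # quarterly is the default branch on releases
      mirror_type: "srv",                  # choose a mirror via DNS SRV lookups
      signature_type: "fingerprints",      # verify packages against trusted keys
      fingerprints: "/usr/share/keys/pkg",
      enabled: yes
    }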

This hardware setup is expected to significantly improve the processing capabilities and service responsiveness of the FreeBSD Project.

The FreeBSD clusteradm team played a crucial role in the integration phase of the new cluster. Their work included:

  • Hardware compatibility and firmware debugging: Several initial hurdles had to be overcome to ensure the server firmware was compatible with FreeBSD. The cluster relies on being able to netboot machines and requires reliable out-of-band management (a netboot sketch follows this list).
  • Network configuration and automation: Once the servers could boot reliably, the network was configured, including the cluster-internal DNS, packet filter rules, and BGP sessions to the Internet (illustrated in the pf(4) and bgpd sketches below).
  • Automation and system provisioning: The team’s tooling handles most of the work of installing and configuring servers. After overcoming some bootstrapping issues with a temporary FreeBSD installation, the servers were netbooted into the cluster installation image and installed using the standard cluster builds.
  • Monitoring and management integration: The team installed and configured a monitoring proxy on the admin server, integrating the new site into the Project’s central monitoring system (a hypothetical proxy configuration follows this list). This allows efficient management and troubleshooting of the cluster, helping to ensure stability and performance.
  • Final system installation and network services setup: The team concluded the integration work by reinstalling the admin server using the automation tooling, setting up routing and firewall configurations, and bringing up the BGP sessions on the fibre uplinks (see the bgpd sketch below). This ensured the new cluster was operational and tuned for performance and security.
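
To make the integration steps above more concrete, a few sketches follow. None of them are the cluster’s actual configuration, which this report does not publish; all names, addresses, and paths are assumptions.

Netbooting a FreeBSD machine conventionally combines DHCP, TFTP, and an NFS root: the DHCP server points the firmware at the pxeboot loader, which then mounts the installation image over NFS. A minimal ISC dhcpd.conf fragment in that style:

    # Hypothetical dhcpd.conf fragment for netbooting cluster nodes.
    subnet 10.0.1.0 netmask 255.255.255.0 {
      range 10.0.1.100 10.0.1.199;
      next-server 10.0.1.1;                        # TFTP server holding the loader
      filename "boot/pxeboot";                     # FreeBSD PXE loader
      option root-path "10.0.1.1:/export/netboot"; # NFS root with the install image
    }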
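
For the packet filter rules on the routers, a minimal pf(4) policy of the kind described might look like this, with hypothetical interface names and services:

    # Hypothetical pf.conf fragment for a cluster router.
    ext_if = "ix0"     # uplink interface (name assumed)
    int_if = "ix1"     # cluster-internal network

    set skip on lo0
    block in log all                     # default deny inbound
    pass out keep state                  # allow outbound traffic
    pass in on $int_if keep state        # trust cluster-internal traffic
    pass in on $ext_if proto tcp to port { 22, 80, 443 } keep state  # public services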
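
The report does not name the monitoring stack. As one hypothetical arrangement, a Zabbix-style active proxy on the admin host would collect data locally and push it to the central monitoring server:

    # Hypothetical zabbix_proxy.conf fragment; the Project's actual
    # monitoring software is an assumption here.
    Server=monitor.example.org   # central monitoring server (placeholder name)
    Hostname=proxy-chi           # identity of this proxy
    ProxyMode=0                  # active mode: the proxy pushes data upstream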
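
Finally, the BGP sessions on the fibre uplinks could be expressed along these lines in OpenBGPD syntax; the daemon choice, ASNs, and addresses are all placeholders:

    # Hypothetical bgpd.conf fragment for one uplink.
    AS 64512                     # placeholder ASN
    router-id 192.0.2.1
    network 203.0.113.0/24       # prefix announced to the upstream

    neighbor 192.0.2.254 {
        remote-as 64500
        descr "upstream-fibre"
    }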