
# The Lustre Buildbot Configuration

Welcome, this is the buildbot configuration and infrastructure used by the Lustre Buildbot at http://build.lustre.org. It is used to automate the testing of patch sets submitted to the Lustre project on [Gerrit](http://review.whamcloud.com). If you would like to help improve our testing infrastructure, please open a pull request against this GitHub repository. If you have any questions or feedback, feel free to contact us at [email protected].

## Build and Test Strategy

### Patch Sets

The Lustre project relies on Gerrit to track proposed changes. Changes submitted to Gerrit are called patch sets. Each patch set submitted is automatically tested by the buildbot. As you revise the code and push new changes to Gerrit, each new patch set is queued to be built. Results are reported back to Gerrit once a build completes. Build products (tarball, srpm, rpms) of a successful build remain available for two weeks.

### Tags

The Lustre project is periodically tagged. The Lustre git repository is polled every hour for new or modified tags. If a new/modified tag is found, a change is submitted to the build master and a full build is performed. Once a tag is successfully built, build products (tarball, srpm, rpms) will be available indefinitely.

### Builder Types

When a new patch set is submitted, it is queued up for testing on all of the available builders. There is currently only one type of builder:

- BUILD: These builders are responsible for verifying that a change doesn't break the build on a given platform. Every patch set is built and reported on individually. This helps guarantee that developers never accidentally break the build.

  To maximize coverage, builders have been created for most major Linux distributions. This allows us to catch distribution-specific issues and to verify the build on a wide range of commonly deployed kernels.

  Additional builders are maintained to test alternate architectures. If you're interested in setting up a builder for your distribution or architecture, see the 'Adding a Builder' section below.

  No elevated permissions are required for this type of builder. However, it is assumed that all required development tools and headers are already installed on the system.

## Build Steps and the runurl Utility

The Lustre Buildbot makes extensive use of the runurl utility. This small script takes the URL of a script to execute as its first argument, followed by any arguments to pass to that script. This allows a build step to be configured as a reference to a trusted URL hosting the desired script, keeping the logic for a particular build step separate from the master.cfg. This has several advantages (a sketch of such a step follows the list below):

- Minimizes the disruption caused by restarting the buildbot to make changes live. A restart is only required when modifying the master.cfg itself, for example when adding or removing a builder or adding a test suite.

- Build and test scripts can be run independently, making it easy for developers to locally test proposed changes before submitting them.

- Allows for per-builder customization via the environment. Each script can optionally source the following file to influence its behavior:

  - /etc/buildslave - This file is dynamically generated by the bb-bootstrap.sh script, which is run at boot time by the EC2 user data facility. It includes all the information required to configure and start a latent buildslave. Most importantly for scripts, this includes the BB_NAME variable, which is set to the build slave name.

- Provides a consistent way to trap and handle signals from the buildbot master. This is particularly helpful when attempting to collect debug information prior to terminating an instance.
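
As an illustration of a runurl-based build step, here is a minimal sketch; the scripts URL and script name are hypothetical, and buildbot's ShellCommand step class is assumed:

```python
# Hypothetical sketch of a runurl-based build step; the scripts URL and the
# script name are illustrative, not the project's actual locations.
from buildbot.steps.shell import ShellCommand

scripts_url = "https://example.com/lustre-buildbot/scripts/"  # hypothetical

factory.addStep(ShellCommand(
    # runurl fetches the script from the trusted URL and executes it;
    # anything after the URL would be passed through as script arguments.
    command=["runurl", scripts_url + "bb-build-lustre.sh"],
    description=["building", "lustre"],
    haltOnFailure=True))
```

On the slave, the fetched script can then source /etc/buildslave and branch on BB_NAME to apply per-builder customizations.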

## Configuring the Master

### Important Files

The Lustre Buildbot configuration is broken out into a few different files. Below is a list of important files and a brief description of what each contains:

- master/master.cfg - Core configuration file for the Lustre Buildbot.

- master/password.py.sample - Sample credentials file used to create a master/password.py file. This file contains build slave passwords, various user passwords, and EC2 credentials.

- master/lustrebuildslave.py - Contains custom BuilderConfig and EC2LatentBuildSlave classes used by the Lustre Buildbot. If new types of build slaves need to be created, please define them in this file.

- master/lustrefactory.py - Contains core and supporting functions that construct BuildFactories for Lustre Buildbot builders. If new types of build factories need to be created, please define them in this file.

- master/lustregittagpoller.py - Contains a subclass of buildbot's GitPoller class called LustreTagPoller. The class polls a git repository for changes in tags. If a new or modified tag is found, a change is submitted to the build master.
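
For illustration, hooking the tag poller into the master might look like the following sketch; the constructor arguments are assumed to mirror buildbot's GitPoller, and the repository URL is an example:

```python
# Hypothetical sketch: poll a Lustre git repository for new or modified tags.
# LustreTagPoller is assumed to accept the same arguments as GitPoller.
from lustregittagpoller import LustreTagPoller

c['change_source'].append(LustreTagPoller(
    repourl="git://git.whamcloud.com/fs/lustre-release.git",  # example URL
    pollInterval=3600))  # poll hourly, matching the interval described above
```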

### Credentials

The master/password.py file contains the credentials required for the buildbot to interact with EC2. It also stores static passwords for non-EC2 build slaves, the web interface, and buildbot try. See the master/password.py.sample file for a complete description.
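
As a rough sketch of what this file holds (every name and value below is a placeholder; master/password.py.sample documents the real layout):

```python
# Hypothetical sketch of master/password.py; every name and value here is a
# placeholder. Consult master/password.py.sample for the authoritative layout.
ec2_access_key = "XXXXXXXXXXXXXXXXXXXX"    # EC2 credentials
ec2_secret_key = "XXXXXXXXXXXXXXXXXXXX"
slave_passwords = {"slavename": "secret"}  # non-EC2 build slave passwords
web_password = "secret"                    # web interface login
try_password = "secret"                    # buildbot try credentials
```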

### Adding a Builder

The process for adding a new builder varies slightly depending on whether it's a standard or latent builder. In all cases the process begins by adding the new builder and its slaves to the master.cfg file. Keep in mind that each builder can potentially have multiple build slaves.

The first step is to determine what kind of slave you're setting up. Both standard and latent build slaves are supported.

Once you've added your slaves, be sure to add them to the all_slaves list at the bottom of the BUILDSLAVES section in master.cfg. Classes have been provided with suitable default values for slaves. Below are examples of how to add various types of slaves.

- Linux based m3.large EC2 slave:

  ```python
  newslaves = [
      LustreEC2Slave(
          name="slavename",
          ami="ami-xxxxxxxx"
      ),
      ...
  ]
  ```

- Suse based m3.large EC2 slave:

  ```python
  newslaves = [
      LustreEC2SuseSlave(
          name="slavename",
          ami="ami-xxxxxxxx"
      ),
      ...
  ]
  ```
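
Standard (non-latent) slaves are also supported. As a minimal sketch, assuming buildbot's stock BuildSlave class (the name and password below are placeholders):

```python
# Hypothetical sketch of a standard, always-on build slave; buildbot's stock
# BuildSlave class is assumed, and the name and password are placeholders.
from buildbot.buildslave import BuildSlave

newslaves = [
    BuildSlave("slavename", "password"),
]
```

Whichever type you add, remember to include the new slaves in the all_slaves list as described above.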

Now that your build slaves have been added, a builder needs to be created which owns them. Jump down to the BUILDERS section and add a LustreBuilderConfig entry to the appropriate list. Each builder must have a unique name. Set the factory to build_factory for the builder, then set the properties and tags options as appropriate.
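
A minimal sketch of such an entry follows; the builder name, slave names, properties, and tags are illustrative, and LustreBuilderConfig is assumed to accept the standard BuilderConfig arguments:

```python
# Hypothetical sketch of a BUILDERS entry; the name, slavenames, properties,
# and tags are illustrative, and LustreBuilderConfig is assumed to accept the
# same arguments as buildbot's BuilderConfig.
builders.append(LustreBuilderConfig(
    name="CentOS 7 x86_64 (BUILD)",  # must be unique across builders
    slavenames=["slavename"],        # slaves added in the BUILDSLAVES section
    factory=build_factory,
    properties={"distro": "el7"},    # hypothetical property
    tags=["linux", "build"]))        # hypothetical tags
```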

Finally, you must restart the build master to make it aware of your new builder. It's generally a good idea to run buildbot checkconfig first to verify your changes, then wait until the buildbot is idle before running buildbot restart to avoid killing running builds.

### Updating an EC2 Build Slave to Use a Different AMI

New AMIs for the latest release of a distribution are frequently published for EC2. To use an updated AMI, replace the current AMI identifier in the build slave's entry with the new one. All build slaves are listed in the BUILDSLAVES section of the master.cfg file. Remember that the buildbot will need to be restarted to pick up the change.
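
In practice the edit is a one-line change to the slave's entry. For example (both AMI identifiers below are placeholders):

```python
# Hypothetical before/after; both AMI identifiers are placeholders.
LustreEC2Slave(
    name="slavename",
    ami="ami-yyyyyyyy"  # was "ami-xxxxxxxx"; swap in the new identifier
),
```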

## Running a Private Master

The official Lustre Buildbot is accessible to everyone at http://build.lustre.org/ and leverages Gerrit's stream-events functionality to queue changes for testing. Developers are encouraged to use this infrastructure when working on a change. However, this code can also be used as the basis for a private build and test environment, which may be useful when working on extending the testing infrastructure itself. We also maintain our own fork of the open source Buildbot project, which can be found at https://github.com/opensfs/buildbot.

Generally speaking, to do this you will need to create a password.py file with your credentials, list your builders in the master.cfg file, and finally start the build master. It's assumed you're already familiar with Amazon EC2 instances and their terminology.