Setting Up an External OpenStack Testing System – Part 2

In this third article in the series, we discuss adding one or more Jenkins slave nodes to the external OpenStack testing platform that you (hopefully) set up in the second article in the series. The Jenkins slave nodes we create today will run Devstack and execute a set of Tempest integration tests against that Devstack environment.

Add a Credentials Record on the Jenkins Master

Before we can add a new slave node record on the Jenkins master, we need to create a set of credentials for the master to use when communicating with the slave nodes. Head over to the Jenkins web UI, which by default will be located at http://$MASTER_IP:8080/. Once there, follow these steps:

  1. Click the Credentials link on the left side panel
  2. Click the link for the Global domain:
    credentials-list
  3. Click the Add credentials link
  4. Select SSH username with private key from the dropdown labeled “Kind”
  5. Enter “jenkins” in the Username textbox
  6. Select the “From a file on Jenkins master” radio button and enter /var/lib/jenkins/.ssh/id_rsa in the File textbox:
    add-credentials
  7. Click the OK button
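
If step 6 complains that the key file does not exist, double-check that the Puppet run on the master actually put the Jenkins SSH private key in place (the path is the one referenced in step 6):

# On the Jenkins master
sudo ls -l /var/lib/jenkins/.ssh/id_rsa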

Construct a Jenkins Slave Node

We will now install Puppet and the software necessary for running Devstack and Jenkins slave agents on a node.

Slave Node Requirements

On the host or virtual machine that you have selected to use as your Jenkins slave node, you will need to ensure, just as with the Jenkins master node, that the node has the following:

  • These basic packages are installed:
    • wget
    • openssl
    • ssl-cert
    • ca-certificates
  • Have the SSH keys you use with GitHub in ~/.ssh/. It also helps to bring over your ~/.ssh/known_hosts and ~/.ssh/config files as well.
  • Have at least 40G of available disk space

IMPORTANT NOTE: If you were considering using LXC containers for your Jenkins slave nodes (as I originally struggled to do), don’t. Use a KVM or other non-shared-kernel virtual machine for the devstack-running Jenkins slaves. Bugs like the inability to run open-iscsi in an LXC container make it impossible to run devstack inside an LXC container.

Download Your Config Data Repository

In the second article in this series, we went over the need for a data repository and, if you followed along in that article, you created a Git repository and stored an SSH key pair in that repository for Jenkins to use. Let’s get that data repository onto the slave node:

git clone $YOUR_DATA_REPO data

Install the Jenkins Software and Pre-cache OpenStack/Devstack Git Repos

And now, we install Puppet and have Puppet set up the slave software:

wget https://raw.github.com/jaypipes/os-ext-testing/master/puppet/install_slave.sh
bash install_slave.sh

Puppet will run for some time, installing the Jenkins slave agent software and necessary dependencies for running Devstack. Then you will see output like this:

Running: ['git', 'clone', 'https://git.openstack.org/openstack-dev/cookiecutter', 'openstack-dev/cookiecutter']
Cloning into 'openstack-dev/cookiecutter'...
...

This indicates that Puppet is done and that a set of Nodepool scripts is running to cache upstream OpenStack Git repositories on the node and prepare Devstack. Part of the process of preparing Devstack involves downloading images that are used by Devstack for testing. Note that this step takes a long time! Go have a beer or other beverage and work on something else for a couple of hours.
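
If you want a rough sense of progress while the caches build, you can watch disk usage grow and, assuming the Nodepool scripts use the same cache layout as the upstream nodes (see the discussion of gerrit-git-prep.sh in the first article of this series), peek at the cached Git repositories under /opt/git:

# Rough progress check while the Nodepool scripts populate the caches
df -h /
ls /opt/git/openstack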

Adding a Slave Node on the Jenkins Master

In order to “register” our slave node with the Jenkins master, we need to create a new node record on the master. First, go to the Jenkins web UI, and then follow these steps:

  1. Click the Manage Jenkins link on the left
  2. Scroll down and click the Manage Nodes link
  3. Click the New Node link on the left:
    manage-nodes
  4. Enter “devstack_slave1” in the Node name textbox
  5. Select the Dumb Slave radio button:
    add-node
  6. Click the OK button
  7. Enter 2 in the Executors textbox
  8. Enter “/home/jenkins/workspaces” in the Remote FS root textbox
  9. Enter “devstack_slave” in the Labels textbox
  10. Enter the IP Address of your slave host or VM in the Host textbox
  11. Select jenkins from the Credentials dropdown:
    new-node-screen
  12. Click the Save button
  13. Click the Log link on the left. The log should show the master connecting to the slave, and at the end of the log should be: “Slave successfully connected and online”:
    slave-log
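
If the log shows connection errors instead, it can help to verify by hand that the master can reach the slave over SSH using the credentials created earlier. A quick check, run on the Jenkins master (the key path is the one from the credentials step; replace $SLAVE_IP with your slave’s IP address):

# Run on the Jenkins master as the jenkins user
sudo -u jenkins ssh -i /var/lib/jenkins/.ssh/id_rsa jenkins@$SLAVE_IP hostname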

Test the dsvm-tempest-full Jenkins job

Now we are ready to have our Jenkins slave execute the long-running Jenkins job that uses Devstack to install an OpenStack environment on the Jenkins slave node, and run a set of Tempest tests against that environment. We want to test that the master can successfully run this long-running job before we set the job to be triggered by the upstream Gerrit event stream.

Go to the Jenkins web UI, click on the dsvm-tempest-full link in the jobs listing, and then click the Build Now link. You will notice an executor start up and a link to a newly-running job will appear in the Build History box on the left:

Build History panel in Jenkins

Click on the link to the new job, then click Console Output in the left panel. You should see the job executing, with Bash output showing up on the right:

Manually running the dsvm-tempest-full Jenkins job

Troubleshooting

If you see errors pop up, you will need to address those issues. In my testing, issues generally were around:

  • Firewall/networking issues: Make sure that the Jenkins master node can properly communicate over SSH port 22 to the slave nodes. If you are using virtual machines to run the master or slave nodes, make sure you don’t have any iptables rules that are preventing traffic from master to slave.
  • Missing files like “No file found: /opt/nodepool-scripts/…”: Make sure that the install_slave.sh Bash script completed successfully. This script takes a long time to execute, as it pulls down a bunch of images for Devstack caching.
  • LXC: See above about why you cannot currently use LXC containers for Jenkins slaves that run Devstack
  • Zuul processes borked: In order to have jobs triggered from upstream, both the zuul-server and zuul-merger processes need to be running, connecting to Gearman, and firing job events properly. First, make sure the right processes are running:
    # First, make sure there are **2** zuul-server processes and
    # **1** zuul-merger process when you run this:
    ps aux | grep zuul
    # If there aren't, do this:
    sudo rm -rf /var/run/zuul/*
    sudo service zuul start
    sudo service zuul-merger start
    

    Next, make sure that the Gearman service has registered queues for all the Jenkins jobs. You can do this using telnet (4730 is the default port for Gearman):

    ubuntu@master:~$ telnet 127.0.0.1 4730
    Trying 127.0.0.1...
    Connected to 127.0.0.1.
    Escape character is '^]'.
    status
    build:noop-check-communication:master	0	0	2
    build:dsvm-tempest-full	0	0	1
    build:dsvm-tempest-full:devstack_slave	0	0	1
    merger:merge	0	0	1
    zuul:enqueue	0	0	1
    merger:update	0	0	1
    zuul:promote	0	0	1
    set_description:master	0	0	1
    build:noop-check-communication	0	0	2
    stop:master	0	0	1
    .
    ^]
    
    telnet> quit 
    Connection closed.
    

Enabling the dsvm-tempest-full Job in the Zuul Pipelines

Once you’ve successfully run the dsvm-tempest-full job manually, you should now enable this job in the appropriate Zuul pipelines. To do so, on the Jenkins master node, edit the etc/zuul/layout.yaml file in your data repository (don’t forget to git commit your changes after you’ve made them and push them to your data repository’s canonical location).

If you used the example layout.yaml from my data repository and you’ve been following along this tutorial series, the projects section of your file will look like this:

projects:
  - name: openstack-dev/sandbox
    check:
      # Remove this after successfully verifying communication with upstream
      # and seeing a posted successful review.
      - noop-check-communication
      # Uncomment this job when you have a jenkins slave running and want to
      # test a full Tempest run within devstack.
      #- dsvm-tempest-full
    gate:
      # Remove this after successfully verifying communication with upstream
      # and seeing a posted successful review.
      - noop-check-communication
      # Uncomment this job when you have a jenkins slave running and want to
      # test a full Tempest run within devstack.
      #- dsvm-tempest-full

To enable the dsvm-tempest-full Jenkins job to run in the check pipeline when a patch is received (or a recheck comment is added) for the openstack-dev/sandbox project, simply uncomment the line:

      #- dsvm-tempest-full

And then reload Zuul and Zuul-merger:

sudo service zuul reload
sudo service zuul-merger reload
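
As mentioned above, don’t forget to commit the layout change back to your data repository so it isn’t lost. A minimal sketch, assuming the repository was cloned to ~/mydatarepo as in the second article in this series:

cd ~/mydatarepo
git add etc/zuul/layout.yaml
git commit -m "Enable dsvm-tempest-full in the check and gate pipelines"
git push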

From now on, new patches and recheck comments on the openstack-dev/sandbox project will fire the dsvm-tempest-full Jenkins job on your devstack slave node. :) If your test run was successful, you will see something like this in your Jenkins console for the job run:

\o/ Steve Holt!

And you will note that the patch that triggered your Jenkins job now shows a successful comment and a +1 Verified vote:

A comment showing external job successful runs

What Next?

From here, the changes you make to your Jenkins Job configuration files are up to you. The first place to look for ideas is the devstack-vm-gate.sh script. Look near the bottom of that script for a number of environment variables that you can set in order to tinker with what the script will execute.

If you are a Cinder storage vendor looking to test your hardware and associated Cinder driver against OpenStack, you will want to either make changes to the example dsvm-tempest-full job or create a copy of that example job definition and customize it to your needs. You will want to make sure that Cinder is configured to use your storage driver in the cinder.conf file. You may want to create a script that copies most of what the devstack-vm-gate.sh script does, calls the Devstack iniset function to configure your storage driver, and then runs Devstack and Tempest.
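
For illustration only, here is the kind of thing such a script might do after Devstack has written out cinder.conf. The path to Devstack’s functions file, the driver class, and the SAN options below are all assumptions or placeholders; substitute whatever your vendor driver actually requires:

#!/bin/bash
# Illustrative sketch only: point Cinder at a vendor volume driver after Devstack runs.
# Assumes Devstack was checked out to /opt/stack/new/devstack and that its
# functions file provides the iniset helper.
source /opt/stack/new/devstack/functions

CINDER_CONF=/etc/cinder/cinder.conf

# Hypothetical driver class and options -- replace with your vendor's values.
iniset $CINDER_CONF DEFAULT volume_driver cinder.volume.drivers.myvendor.MyVendorISCSIDriver
iniset $CINDER_CONF DEFAULT san_ip 192.0.2.10
iniset $CINDER_CONF DEFAULT san_login admin
iniset $CINDER_CONF DEFAULT san_password secret

# Restart the Cinder volume service (running in a screen session in a
# Devstack environment) so it picks up the new configuration.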

Publishing Console and Devstack Logs

Finally, you will want to get the log files that are collected by both Jenkins and the devstack run published to some external site. Folks at Arista have used dropbox.com to do this. I’ll leave setting this up as an exercise for the reader. Hint: you will want to set the PUBLISH_HOST variable in your data repository’s vars.sh to a host that you have SCP rights to, and uncomment the publishers section in the example dsvm-tempest-full job:

#    publishers:
#      - devstack-logs  # In macros.yaml from os-ext-testing
#      - console-log  # In macros.yaml from os-ext-testing
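
The vars.sh side of that hint is just a single line; the hostname here is a placeholder for a host you can SCP logs to:

# In your data repository's vars.sh
export PUBLISH_HOST=logs.example.com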

Final Thoughts

I hope this three-part article series has been helpful for you in understanding the upstream OpenStack continuous integration platform, and instructional in helping you set up your own external testing platform using Jenkins, Zuul, Jenkins Job Builder, and Devstack-Gate. Please do let me know if you run into issues. I will post some updates to the Troubleshooting section above when I hear from you and (hopefully) help you resolve any problems.

Setting Up an External OpenStack Testing System – Part 1

This post is intended to walk someone through the process of establishing an external testing platform that is linked with the upstream OpenStack continuous integration platform. If you haven’t already, please do read the first article in this series, which discusses the upstream OpenStack CI platform in detail. At the end of that article, you should have all the background information on the tools needed to establish your own linked external testing platform.

EXTREMELY IMPORTANT NOTE: The upstream Puppet modules used in this article have changed dramatically since writing this. I am in the process of updating this blog entry, but at this time, some important steps do not work properly!

What Does an External Test Platform Do?

In short, an external testing platform enables third parties to run tests — ostensibly against an OpenStack environment that is configured with that third party’s drivers or hardware — and report the results of those tests on the code review of a proposed patch. It’s easy to see the benefit of this real-time feedback by taking a look at a code review that shows a number of these external platforms providing feedback. In this screenshot, you can see a number of Verified +1 labels and one Verified -1 label added by external Neutron vendor test platforms on a proposed patch to Neutron:

Verified +1 and -1 labels added by external testing systems on a Neutron patch

Each of these systems, when adding a Verified label to a review, does so by adding a comment to the review. These comments contain links to artifacts from the external testing system’s test run for this proposed patch, as shown here:

Comments added to a review by the vendor testing platforms

The developer submitting a patch can use those links to investigate why their patch has caused test failures to occur for that external test platform.

Why Set Up an External Test Platform?

The benefits of external testing integration with upstream code review are numerous:

A tight feedback loop
The third party gets quick notifications that a proposed patch to the upstream code has caused a failure in their driver or configuration. The tighter the “feedback loop”, the faster fixes can be identified.
Better code coverage
Drivers and plugins that may not be used in the default configuration for a project can be tested with the same rigor and frequency as drivers that are enabled in the upstream devstack VM gate tests. This prevents bitrot and encourages developers to maintain code that is housed in the main source trees.
Increased consistency and standards
Determining a standard set of tests that prove a driver implements the full or partial API of a project means that drivers can be verified to work with a particular release of OpenStack. If you’ve ever had a conversation with a potential deployer of OpenStack who wonders how they know that their choice of storage or networking vendor, or underlying hypervisor, actually works with the version of OpenStack they plan to deploy, then you know why this is a critical thing!

Why might you be thinking about how to set up an external testing platform? Well, a number of OpenStack projects have had discussions already about requirements for vendors to complete integration of their testing platforms with the upstream OpenStack CI platform. The Neutron developer community is ahead of the game, with more than half a dozen vendors already providing linked testing that appears on Neutron code reviews.

The Cinder project also has had discussions around enforcing a policy that any driver that is in the Cinder source tree have tests run on each commit to validate the driver is working properly. Similarly, the Nova community has discussed the same policy for hypervisor drivers in that project’s source tree. So, while this may be old news for some teams, hopefully this post will help vendors that are new to the OpenStack contribution world get integrated quickly and smoothly.

The Tools You Will Need

The components involved in building a simple linked external testing system that can listen to and notify the upstream OpenStack continuous integration platform are as follows:

Jenkins CI
The server that is responsible for executing jobs that run tests for a project
Zuul
A system that configures and manages event pipelines that launch Jenkins jobs
Jenkins Job Builder (JJB)
Makes construction/maintenance of Jenkins job config XML files a breeze
Devstack-Gate and Nodepool Scripts
A collection of scripts that constructs an OpenStack environment from source checkouts

I’ll be covering how to install and configure the above components to build your own testing platform using a set of scripts and Puppet modules. Of course, there are a number of ways that you can install and configure any of these components. You can manually install each component somewhere by following the install instructions in its documentation. However, I do not recommend that. The problem with manual installation and configuration is two-fold:

  1. If something goes wrong, you have to re-install everything from scratch. If you haven’t backed up your configuration somewhere, you will have to re-configure everything from memory.
  2. You cannot launch a new configuration or instance of your testing platform easily, since you have to manually set everything up again.

A better solution is to use a configuration management system, such as Puppet, Chef, Ansible or SaltStack to manage the deployment of these components, along with a Git repository to store configuration data. In this article, I will show you how to install an external testing system on multiple hosts or virtual machines using a set of Bash scripts and Puppet modules I have collected into a source repository on GitHub. If you don’t like Puppet or would just prefer to use a different configuration management tool, that’s totally fine. You can look at the Puppet modules in this repo for inspiration (and eventually I will write some Ansible scripts in the OpenStack External Testing project, too).

Preparation

Before I go into the installation instructions, you will need to take care of a few things. Follow these detailed steps and you should be all good.

Getting an Upstream Service Account

In order for your testing platform to post review comments to Gerrit code reviews on openstack.org, you will need to have a service account registered with the OpenStack Infra team. See this link for instructions on getting this account.

Don’t have an SSH key pair for your Gerrit service account? You can create one like so:

ssh-keygen -t rsa -b 1024 -N '' -f gerrit_key

The above will produce a key pair: two files called gerrit_key and gerrit_key.pub. Copy the text of gerrit_key.pub into the email you send to the OpenStack Infra mailing list. Keep both files handy for use in the next step.

Create a Git Repository to Store Configuration Data

When we install our external testing platform, the Puppet modules are fed a set of configuration options and files that are specific to your environment, including the SSH private key for the Gerrit service account. You will need a place to store this private configuration data, and the ideal place is a Git repository, since additions and changes to this data will be tracked just like changes to source code.

I created a source repository on GitHub that you can use as an example. Instead of forking the repository, as you might normally do, I recommend just git clone’ing the repository to some local directory and making it your own data repository:

git clone git@github.com:jaypipes/os-ext-testing-data ~/mydatarepo
cd mydatarepo
rm -rf .git
git init .
git add .
git commit -a -m "My new data repository"

Now you’ve got your own data repository to store your private configuration data, and you can put it up in some private location — perhaps in a private organization on GitHub, perhaps on a Git server you run somewhere.
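
For example, if you have created an empty private repository to hold it (the remote URL below is just a placeholder):

cd ~/mydatarepo
git remote add origin git@yourgitserver.example.com:yourorg/os-ext-testing-data.git
git push -u origin master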

Put Your Gerrit Service Account Private Key Into the Data Repository

The next thing you will want to do is add to this data repository the SSH key pair that you used in the step above, when you registered an upstream Gerrit service account.

If you created a new key pair using the ssh-keygen command above, copy the gerrit_key file into your data repository.

If you did not create a new key pair (you used an existing key pair) or you created a key pair that wasn’t called gerrit_key, simply copy that key pair into the data repository, then open up the file called vars.sh, and change the following line in it:

export UPSTREAM_GERRIT_SSH_KEY_PATH=gerrit_key

And change gerrit_key to the name of your SSH private key.

Set Your Gerrit Account Username

Next, open up the file vars.sh in your data repository (if you haven’t already), and change the following line in it:

export UPSTREAM_GERRIT_USER=jaypipes-testing

And replace jaypipes-testing with your Gerrit service account username.

Set Your Vendor Name in the Test Jenkins Job

Next, open up the file etc/jenkins_jobs/config/projects.yaml in your data repository. Change the following line in it:

  vendor: myvendor

Change myvendor to your organization’s name.

(Optional) Create Your Own Jenkins SSH Key Pair

I have a private/public SSH key pair (named jenkins_key[.pub]) in the example data repository. Because I’ve put the private key in there, it’s no longer useful as anything other than an example, so you may want to recreate your own. Do so like this:

cd $DATA_DIRECTORY
ssh-keygen -t rsa -b 1024 -N '' -f jenkins_key
git commit -a -m "Changed jenkins key to a new private one"

Save Changes in Your Data Repository

OK, we’re done with the first changes to your data repository and we’re ready to install a Jenkins master node. But first, save your changes and push your commit to wherever you are storing your data repository privately:

git add .
git commit -a -m "Added Gerrit SSH key and username"
git push

Requirements for Nodes

On the nodes (hosts, virtual machines, or LXC containers) that you are going to install Jenkins master and slaves into, you will want to ensure the following:

  • These basic packages are installed:
    • wget
    • openssl
    • ssl-cert
    • ca-certificates
  • Have the SSH keys you use with GitHub in ~/.ssh/. It also helps to bring over your ~/.ssh/known_hosts and ~/.ssh/config files as well.

Setting up Your Jenkins Master Node

On the host or virtual machine (or LXC container) you wish to run the Jenkins Master node on, run the following:

git clone $YOUR_DATA_REPO data
wget https://raw.github.com/jaypipes/os-ext-testing/master/puppet/install_master.sh
bash install_master.sh

The above should create a self-signed SSL certificate for Apache to serve the Jenkins UI with, and then install Jenkins, Jenkins Job Builder, Zuul, the Nodepool scripts, and a bunch of support packages.

Important Note: Since publishing this article, the upstream Zuul system underwent a bit of a refactoring, with the Zuul git-related activities being executed by a separate Zuul worker process called zuul-merger. I’ve updated the Puppet modules in the os-ext-testing repository accordingly, but if you had installed the Jenkins master with Zuul from the Puppet modules before Tuesday, February 18th, 2014, you will need to do the following on your master node to get everything reconfigured properly:

# NOTE: This is only necessary if you installed a Jenkins master from the
# os-ext-testing repository before Tuesday, February 18th, 2014!
sudo service zuul stop
sudo rm -rf /var/log/zuul/* /var/run/zuul/*
sudo -i
# As root...
cd /root/config; git pull
exit
cd os-ext-testing; git pull; cd ../
cp os-ext-testing/puppet/install_master.sh .
bash install_master.sh

Troubleshooting note: There is a bug in the upstream openstack-infra/config project (with a patch submitted) that may cause Puppet to error with the following:

Duplicate declaration: A2mod[rewrite] is already declared in file /home/vagrant/os-ext-testing/puppet/modules/os_ext_testing/manifests/master.pp at line 34; cannot redeclare at /root/config/modules/zuul/manifests/init.pp:236 on node master

To fix this issue, open the /root/config/modules/zuul/manifests/init.pp file and comment out these lines:

  a2mod { 'rewrite':
    ensure => present,
  }
  a2mod { 'proxy':
    ensure => present,
  }
  a2mod { 'proxy_http':
    ensure => present,
  }

When Puppet completes, go ahead and open up the Jenkins web UI, which by default will be at http://$HOST_IP:8080. You will need to enable the Gearman workers that Zuul and Jenkins use to interact. To do this:

  1. Click the `Manage Jenkins` link on the left
  2. Click the `Configure System` link
  3. Scroll down until you see “Gearman Plugin Config”. Check the “Enable Gearman” checkbox.
  4. Click the “Test Connection” button and verify Jenkins connects to Gearman.
  5. Scroll down to the bottom of the page and click `Save`
  6. Note: Darragh O’Reilly noticed, when he first did this on his machine, that the Gearman plugin was not actually enabled (though it was installed). He mentioned that simply restarting the Jenkins service fixed this problem, and the Gearman Plugin Config section then appeared on the Manage Jenkins -> Configure System page.

Once you are done with that, it’s time to load up your Jenkins jobs and start Zuul:

    sudo jenkins-jobs --flush-cache update /etc/jenkins_jobs/config/
    sudo service zuul start
    sudo service zuul-merger start
    

If you refresh the main Jenkins web UI front page, you should now see two jobs show up:

Jenkins Master Web UI Showing Sandbox Jenkins Jobs Created by JJB

Testing Communication Between Upstream and Your Master

Congratulations. You’ve successfully set up your Jenkins master. Let’s now test connectivity between upstream and our external testing platform using the simple sandbox-noop-check-communication job. By default, I set this Jenkins job to execute on the master node for the openstack-dev/sandbox project [1]. Here is the project configuration in the example data repository’s etc/jenkins_jobs/config/projects.yaml file:

    - project:
        name: sandbox
        github-org: openstack-dev
        node: master
    
        jobs:
            - noop-check-communication
            - dsvm-tempest-full:
                node: devstack_slave
    

Note that the node is master by default. The sandbox-dsvm-tempest-full Jenkins Job is configured to run on a node labeled devstack_slave, but we will cover that later when we bring up our Jenkins slave.

In our Zuul configuration, we have two pipelines: check and gate. There is only a single project listed in the layout.yaml Zuul project configuration file, the openstack-dev/sandbox project:

    projects:
        - name: openstack-dev/sandbox
          check:
            - sandbox-noop-check-communication
    

By default, the only job that is enabled is the sandbox-noop-check-communication Jenkins job, and it will get run whenever a patchset is created in the upstream openstack-dev/sandbox project, as well as any time someone adds a comment with the words “recheck no bug” or “recheck bug XXXXX”. So, let us create a sample patch to that project and check to see if the sandbox-noop-check-communication job fires properly.

Before we do that, let’s go ahead and tail the Zuul debug log, grepping for the term “sandbox”. This will show messages if communication is working properly.

    sudo tail -f /var/log/zuul/debug.log | grep sandbox
    

OK, now create a simple test patch in sandbox. Do this on your development workstation, not your Jenkins master:

    git clone git@github.com:openstack-dev/sandbox /tmp/sandbox
    cd /tmp/sandbox
    git checkout -b testing-ext
    touch mytest
    git add mytest
    git commit -a -m "Testing comms"
    git review
    

Output should look like so:

    jaypipes@cranky:~$ git clone git@github.com:openstack-dev/sandbox /tmp/sandbox
    Cloning into '/tmp/sandbox'...
    remote: Reusing existing pack: 13, done.
    remote: Total 13 (delta 0), reused 0 (delta 0)
    Receiving objects: 100% (13/13), done.
    Resolving deltas: 100% (4/4), done.
    Checking connectivity... done
    jaypipes@cranky:~$ cd /tmp/sandbox
    jaypipes@cranky:/tmp/sandbox$ git checkout -b testing-ext
    Switched to a new branch 'testing-ext'
    jaypipes@cranky:/tmp/sandbox$ touch mytest
    jaypipes@cranky:/tmp/sandbox$ git add mytest
    jaypipes@cranky:/tmp/sandbox$ git commit -a -m "Testing comms"
    [testing-ext 51f90e3] Testing comms
     1 file changed, 0 insertions(+), 0 deletions(-)
     create mode 100644 mytest
    jaypipes@cranky:/tmp/sandbox$ git review
    Creating a git remote called "gerrit" that maps to:
    	ssh://jaypipes@review.openstack.org:29418/openstack-dev/sandbox.git
    Your change was committed before the commit hook was installed.
    Amending the commit to add a gerrit change id.
    remote: Processing changes: new: 1, done    
    remote: 
    remote: New Changes:
    remote:   https://review.openstack.org/73631
    remote: 
    To ssh://jaypipes@review.openstack.org:29418/openstack-dev/sandbox.git
     * [new branch]      HEAD -> refs/publish/master/testing-ext
    

Keep an eye on your tail’d Zuul debug log file. If all is working, you should see something like this:

    2014-02-14 16:08:51,437 INFO zuul.Gerrit: Updating information for 73631,1
    2014-02-14 16:08:51,629 DEBUG zuul.Gerrit: Change  status: NEW
    2014-02-14 16:08:51,630 DEBUG zuul.Scheduler: Adding trigger event: 
    2014-02-14 16:08:51,630 DEBUG zuul.Scheduler: Done adding trigger event: 
    2014-02-14 16:08:51,630 DEBUG zuul.Scheduler: Run handler awake
    2014-02-14 16:08:51,631 DEBUG zuul.Scheduler: Fetching trigger event
    2014-02-14 16:08:51,631 DEBUG zuul.Scheduler: Processing trigger event 
    2014-02-14 16:08:51,631 DEBUG zuul.IndependentPipelineManager: Starting queue processor: check
    2014-02-14 16:08:51,631 DEBUG zuul.IndependentPipelineManager: Finished queue processor: check (changed: False)
    2014-02-14 16:08:51,631 DEBUG zuul.DependentPipelineManager: Starting queue processor: gate
    2014-02-14 16:08:51,631 DEBUG zuul.DependentPipelineManager: Finished queue processor: gate (changed: False)
    

If you go to the link to the code review in Gerrit (the link that was output after you ran git review), you will see your Gerrit testing account has added a +1 Verified vote in the code review:

Successful communication between upstream and our external system

Congratulations. You now have an external testing platform that is receiving events from the upstream Gerrit system, triggering Jenkins jobs on your master Jenkins server, and writing reviews back to the upstream Gerrit system. The next article goes over adding a Jenkins slave to your system, which is necessary to run real Jenkins jobs that run devstack-based gate tests. Please do let me know what you think of both this article and the source repository of scripts to set things up. I’m eager for feedback and critique. :)

[1] — The OpenStack Sandbox project is a project that can be used for testing the integration of external testing systems with upstream. By creating a patch against this project, you can trigger the Jenkins jobs that are created during this tutorial.

Understanding the OpenStack CI System

This post describes in detail the upstream OpenStack continuous integration platform. In the process, I’ll be describing the code flow in the upstream system — from the time the contributor submits a patch to Gerrit, all the way through the creation of a devstack environment in a virtual machine, the running of the Tempest test suite against the devstack installation, and finally the reporting of test results and archival of test artifacts. Hopefully, with a good understanding of how the upstream tooling works, setting up your own linked external testing platform will be easier.

Some History and Concepts

Over the past four years, there has been a steady evolution in the way that the source code of OpenStack projects is tested and reviewed. I remember when we used Bazaar for source control and Launchpad merge proposals for code review. There was no automated or continuous testing to speak of in those early days, which put pressure on core reviewers to do testing of proposed patches locally. There was also no standardized integration test suite, so often a change in one project would inadvertently break another project.

Thanks to the work of many contributors, particularly those patient souls in the OpenStack Infrastructure team, today there is a robust platform supporting continuous integration testing for OpenStack and Stackforge projects. At the center of this platform are the Jenkins CI servers, the Gerrit git and patch review server, and the Zuul gating system.

The Code Review System

When a contributor submits a patch to one of the OpenStack projects, they push their code to the git server managed by Gerrit running on review.openstack.org. Typically, contributors use the git-review Git plugin, which simplifies submitting to a git server managed by Gerrit. Gerrit controls which users or groups are allowed to propose code, merge code, and administer code repositories under its management. When a contributor pushes code to review.openstack.org, Gerrit creates a Changeset representing the proposed code. The original submitter and any other contributors can push additional amendments to that Changeset, and Gerrit collects all of the changes into the Changeset record. Here is a screenshot of a Changeset under review. You can see a number of patches (changes) listed in the review screen. Each of those patches was an amendment to the original commit.

Individual patches amend the changeset

For each patch in Gerrit, there are three sets of “labels” that may be applied to the patch. Anyone can comment on a Changeset and/or review the code. A review is shown on the patch in the Code-Review column in the patch “labels matrix”:

The “label matrix” on a Gerrit patch

Non-core team members may give the patch a Code-Review label of +1 (Looks good to me), 0 (No strong opinion), or -1 (I would prefer you didn’t merge this). Core team members can give any of those values, plus +2 (Looks good to me, approved) and -2 (Do not submit).

The other columns in the label matrix are Verified and Approved. Only non-interactive users of Gerrit, such as Jenkins, are allowed to add a Verified label to a patch. The external testing platform you will set up is one of these non-interactive users. The value of the Verified label will be +1 (check pipeline tests passed), -1 (check pipeline tests failed), +2 (gate pipeline tests passed), or -2 (gate pipeline tests failed).

Only members of the OpenStack project’s core team can add an Approved label to a patch. It is either a +1 (Approved) value or not, appearing as a check mark in the Approved column of the label matrix:

An approved patch.

Continuous Integration Testing

Continuous integration (CI) testing is the act of running tests that validate a full application environment on a continual basis — i.e. when any change is proposed to the application. Typically, when talking about CI, we are referring to tests that are run against a full, real-world installation of the project. This type of testing, called integration testing, ensures that proposed changes to one component do not cause failures in other components. This is especially important for complex multi-project systems like OpenStack, with non-trivial dependencies between subsystems.

When code is pushed to Gerrit, a series of jobs are triggered that run a series of tests against the proposed code. Jenkins is the server that executes and manages these jobs. It is a Java application with an extensible architecture that supports plugins that add functionality to the base server.

Each job in Jenkins is configured separately. Behind the scenes, Jenkins stores this configuration information in an XML file in its data directory. You may manually edit a Jenkins job as an administrator in Jenkins. However, in a testing platform as large as the upstream OpenStack CI system, doing so manually would be virtually impossible and fraught with errors. Luckily, there is a helper tool called Jenkins Job Builder (JJB) that constructs these XML configuration files after reading a set of YAML files and job templating rules. We will describe JJB later in the article.

The “Gate”

When we talk about “the gate”, we are talking about the process by which code is kept out of a set of source code branches if certain conditions are not met.

OpenStack projects use a method of controlling merges into certain branches of their source trees called the Non-Human Gatekeeper model [1]. Gerrit (the non-human) is configured to allow merges by users in a group called “Non-Interactive Users” to the master and stable branches of git repositories under its control. The upstream main Jenkins CI server, as well as Jenkins CI systems running at third party locations, are the users in this group.

So, how do these non-interactive users actually decide whether to merge a proposed patch into the target branch? Well, there is a set of tests (different for each project) — unit, functional, integration, upgrade, style/linting — that are marked as “gating” for that particular project’s source trees. For most of the OpenStack projects, there are unit tests (run in a variety of different supported versions of Python) and style checker tests for HACKING and PEP8 compliance. These unit and style tests are run in Python virtualenvs managed by the tox testing utility.
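
These are the same tox environments you can run locally in a project’s source checkout; the environment names below are assumed to match the project’s tox.ini, as they do for most OpenStack projects:

# Run the Python 2.7 unit tests and the PEP8/HACKING style checks locally
tox -epy27
tox -epep8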

In addition to the Python unit and style tests, there are a number of integration tests that are executed against full installations of OpenStack. The integration tests are simply subsets of the Tempest integration test suite. Finally, many projects also include upgrade and schema migration tests in their gate tests.

How Upstream Testing Works

Graphically, the upstream continuous integration gate testing system works like this:

gerrit-zuul-jenkins-flow

We step through this event flow in detail below, referencing the numbered steps in bold.

The Gerrit Event Stream and Zuul

After a contributor has pushed (1a) a new patch to a changeset or a core team member has reviewed the patch and added an Approved +1 label (1b), Gerrit pushes out a notification event to its event stream (2). This event stream can have a number of subscribers, including the Gerrit Jenkins plugin and Zuul. Zuul was developed to manage the many complex graphs of interdependent branch merge proposals in the upstream system. It monitors in-progress jobs for a set of related patches and will pre-emptively cancel any dependent test jobs that would not succeed due to a failure in a dependent patch [2].

In addition to this dependency monitoring, Zuul is responsible for constructing the pipelines of jobs that should be executed on various events. One of these pipelines is called the “gate” pipeline, appropriately named for the set of jobs that must succeed in order for a proposed patch to be merged into a target branch.

Zuul’s pipelines are configured in a single file called layout.yaml in the OpenStack-Infra config project. Here’s a snippet from that file that constructs the gate pipeline:

  - name: gate
    description: Changes that have been approved by core developers...
    failure-message: Build failed. For information on how to proceed...
    manager: DependentPipelineManager
    precedence: low
    trigger:
      gerrit:
        - event: comment-added
          approval:
            - approved: 1
        - event: comment-added
          comment_filter: (?i)^\s*reverify( (?:bug|lp)[\s#:]*(\d+))\s*$
    start:
      gerrit:
        verified: 0
    success:
      gerrit:
        verified: 2
        submit: true
    failure:
      gerrit:
        verified: -2

Zuul listens to the Gerrit event stream (3), and matches the type of event to one or more pipelines (4). The matching conditions for the gate pipeline are configured in the trigger:gerrit: section of the YAML snippet above:

    trigger:
      gerrit:
        - event: comment-added
          approval:
            - approved: 1
        - event: comment-added
          comment_filter: (?i)^\s*reverify( (?:bug|lp)[\s#:]*(\d+))\s*$

The above indicates that Zuul should fire the gate pipeline when it sees reviews with an Approved +1 label, and any comment to the review that contains “reverify” with or without a bug identifier. Note that there is a similar pipeline that is fired when a new patchset is created or when a review comment is made with the word “recheck”. This pipeline is called the check pipeline. Look in the layout.yaml file for the configuration of the check pipeline.

Once the appropriate pipeline is matched, Zuul executes (5) that particular pipeline for the project that had a patch proposed.

“But wait, hold up…”, you may be asking yourself, “how does Zuul know which Jenkins jobs to execute for a particular project and pipeline?” Great question! :)

Also in the layout.yaml file, there is a section that configures which Jenkins jobs should be run for each project. Let’s take a look at the configuration of the gate pipeline for the Cinder project:

  - name: openstack/cinder
    template:
      - name: python-jobs
...snip...
    gate:
      - gate-cinder-requirements
      - gate-tempest-dsvm-full
      - gate-tempest-dsvm-postgres-full
      - gate-tempest-dsvm-neutron
      - gate-tempest-dsvm-large-ops
      - gate-tempest-dsvm-neutron-large-ops
      - gate-grenade-dsvm

Each of the lines in the gate: section indicates a specific Jenkins job that should be run in the gate pipeline for Cinder. In addition, there is the python-jobs item in the template: section. Project templates are a way that Zuul consolidates configuration of many similar jobs into a simple template configuration. The project template definition for python-jobs looks like this (still in layout.yaml):

project-templates:
  - name: python-jobs
...snip...
    gate:
      - 'gate-{name}-docs'
      - 'gate-{name}-pep8'
      - 'gate-{name}-python26'
      - 'gate-{name}-python27'

So, when determining which Jenkins jobs should be executed for a particular pipeline, Zuul sees the python-jobs project template in the Cinder configuration and expands it to execute the following Jenkins jobs:

  • gate-cinder-docs
  • gate-cinder-pep8
  • gate-cinder-python26
  • gate-cinder-python27

Jenkins Job Creation and Configuration

I previously mentioned that the configuration of an individual Jenkins job is stored in a config.xml file in the Jenkins data directory. Now, at last count, the upstream OpenStack Jenkins CI system has just shy of 2,000 jobs. It would be virtually impossible to manage the configuration of so many jobs using human-based processes. To solve this dilemma, the Jenkins Job Builder (JJB) python tool was created. JJB consumes YAML files that describe both individual Jenkins jobs as well as templates for parameterized Jenkins jobs, and writes the config.xml files for all Jenkins jobs that are produced from those templates. Important: Note that Zuul does not construct Jenkins jobs. JJB does that. Zuul simply configures which Jenkins jobs should run for a project and a pipeline.

There is a master projects.yaml file in the same directory that lists the “top-level” definitions of jobs for all projects, and it is in this file that many of the variables that are used in job template instantiation are defined (including the {name} variable, which corresponds to the name of the project).

When JJB constructs the set of all Jenkins jobs, it reads the projects.yaml file, and for each project, it sees the “name” attribute of the project, and substitutes that name attribute value wherever it sees {name} in any of the jobs that are defined for that project. Let’s take a look at the Cinder project’s definition in the projects.yaml file here:

- project:
    name: cinder
    github-org: openstack
    node: bare-precise
    tarball-site: tarballs.openstack.org
    doc-publisher-site: docs.openstack.org

    jobs:
      - python-jobs
      - python-grizzly-bitrot-jobs
      - python-havana-bitrot-jobs
      - openstack-publish-jobs
      - gate-{name}-pylint
      - translation-jobs

You will note one of the items in the jobs section is called python-jobs. This is not a single Jenkins job, but rather a job group. A job group definition is merely a list of jobs or job templates. Let’s take a look at the definition of the python-jobs job group:

- job-group:
    name: python-jobs
    jobs:
      - '{name}-coverage'
      - 'gate-{name}-pep8'
      - 'gate-{name}-python26'
      - 'gate-{name}-python27'
      - 'gate-{name}-python33'
      - 'gate-{name}-pypy'
      - 'gate-{name}-docs'
      - 'gate-{name}-requirements'
      - '{name}-tarball'
      - '{name}-branch-tarball'

Each of the items listed in the jobs section of the python-jobs job group definition above is a job template. Job templates are expanded in the same way as Zuul project templates and JJB job groups. Let’s take a look at one such job template in the list above, called gate-{name}-python27.

(Hint: all Jenkins jobs for any OpenStack or Stackforge project are described in the OpenStack-Infra Config project’s modules/openstack_project/files/jenkins_job_builder/config/ directory).

The python-jobs.yaml file in the modules/openstack_project/files/jenkins_job_builder/config directory contains the definition of common Python project Jenkins job templates. One of those job templates is gate-{name}-python27:

- job-template:
    name: 'gate-{name}-python27'
... snip ...
    builders:
      - gerrit-git-prep
      - python27:
          github-org: '{github-org}'
          project: '{name}'
      - assert-no-extra-files

    publishers:
      - test-results
      - console-log

    node: '{node}'

Looking through the above job template definition, you will see a section called “builders”. The builders section of a job template lists (in sequential order of expected execution) the executable sections or scripts of the Jenkins job. The first executable section in the gate-{name}-python27 job template is called “gerrit-git-prep”. This executable section is defined in macros.yaml, which contains a number of commonly-run scriptlets. Here’s the entire gerrit-git-prep macro definition:

- builder:
    name: gerrit-git-prep
    builders:
      - shell: "/usr/local/jenkins/slave_scripts/gerrit-git-prep.sh https://review.openstack.org http://zuul.openstack.org git://git.openstack.org"

So, gerrit-git-prep is simply executing a Bash script called “gerrit-git-prep.sh” that is stored in the /usr/local/jenkins/slave_scripts/ directory. Let’s take a look at that file. You can find it in the /modules/jenkins/files/slave_scripts/ directory [3] in the same OpenStack Infra Config project:

#!/bin/bash -e
 
GERRIT_SITE=$1
ZUUL_SITE=$2
GIT_ORIGIN=$3
 
# ... snip ...
 
set -x
if [[ ! -e .git ]]
then
    ls -a
    rm -fr .[^.]* *
    if [ -d /opt/git/$ZUUL_PROJECT/.git ]
    then
        git clone file:///opt/git/$ZUUL_PROJECT .
    else
        git clone $GIT_ORIGIN/$ZUUL_PROJECT .
    fi
fi
git remote set-url origin $GIT_ORIGIN/$ZUUL_PROJECT
 
# attempt to work around bugs 925790 and 1229352
if ! git remote update
then
    echo "The remote update failed, so garbage collecting before trying again."
    git gc
    git remote update
fi
 
git reset --hard
if ! git clean -x -f -d -q ; then
    sleep 1
    git clean -x -f -d -q
fi
 
if [ -z "$ZUUL_NEWREV" ]
then
    git fetch $ZUUL_SITE/p/$ZUUL_PROJECT $ZUUL_REF
    git checkout FETCH_HEAD
    git reset --hard FETCH_HEAD
    if ! git clean -x -f -d -q ; then
        sleep 1
        git clean -x -f -d -q
    fi
else
    git checkout $ZUUL_NEWREV
    git reset --hard $ZUUL_NEWREV
    if ! git clean -x -f -d -q ; then
        sleep 1
        git clean -x -f -d -q
    fi
fi
 
if [ -f .gitmodules ]
then
    git submodule init
    git submodule sync
    git submodule update --init
fi

The purpose of the script above is simple: Check out the source code of the proposed Gerrit changeset and ensure that the source tree is clean of any cruft from a previous run of a Jenkins job that may have run in the same Jenkins workspace. The concept of a workspace is important. When Jenkins runs a job, it must execute that job from within a workspace. The workspace is really just an isolated shell environment and filesystem directory that has a set of shell variables export’d inside it that indicate a variety of important identifiers, such as the Jenkins job ID, the name of the source code project that has triggered a job, the SHA1 git commit ID of the particular proposed changeset that is being tested, etc [4].
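
For example, a builder scriptlet running inside a workspace can read those exported variables directly. This is only an illustration: the ZUUL_* variables are exported by Zuul (several of them appear in gerrit-git-prep.sh above), while JOB_NAME and WORKSPACE are standard Jenkins variables:

#!/bin/bash
# Illustrative only: print a few of the identifiers available inside a job's workspace.
echo "Jenkins job:   $JOB_NAME"
echo "Workspace dir: $WORKSPACE"
echo "Project:       $ZUUL_PROJECT"   # e.g. openstack/cinder
echo "Zuul ref:      $ZUUL_REF"       # ref of the proposed changeset under test
echo "Commit SHA1:   $ZUUL_COMMIT"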

The next builder in the job template is the “python27” builder, which has two variables injected into it:

      - python27:
          github-org: '{github-org}'
          project: '{name}'

The github-org variable is a string of the already existing {github-org} variable value. The project variable is populated with the value of the {name} variable. Here’s how the python27 builder is defined (in macros.yaml):

- builder:
    name: python27
    builders:
      - shell: "/usr/local/jenkins/slave_scripts/run-unittests.sh 27 {github-org} {project}"

Again, just a wrapper around another Bash script, called run-unittests.sh in the /usr/local/jenkins/slave_scripts directory. Here’s what that script looks like:

version=$1
org=$2
project=$3
 
# ... snip ...
 
venv=py$version
 
# ... snip ...
 
source /usr/local/jenkins/slave_scripts/select-mirror.sh $org $project
 
tox -e$venv
result=$?
 
echo "Begin pip freeze output from test virtualenv:"
echo "======================================================================"
.tox/$venv/bin/pip freeze
echo "======================================================================"
 
if [ -d ".testrepository" ] ; then
# ... snip ...
    .tox/$venv/bin/python /usr/local/jenkins/slave_scripts/subunit2html.py ./subunit_log.txt testr_results.html
    gzip -9 ./subunit_log.txt
    gzip -9 ./testr_results.html
# ... snip ...
fi
 
# ... snip ...

In short, for the Python 2.7 builder, the above runs the command tox -epy27 and then runs a prettifying script and gzips up the results of the unit test run. And that’s really the meat of the Jenkins job. We will discuss the publishing of the job artifacts a little later in this article, but if you’ve gotten this far, you have delved deep into the mines of the OpenStack CI system. Congratulations!

Devstack-Gate and Running Tempest Against a Real Environment

OK, so unit tests running in a simple Jenkins slave workspace are one thing. But what about Jenkins jobs that run integration tests against a full set of OpenStack endpoints, interacting with real database and message queue services? For these types of Jenkins jobs, things are more complicated. Yes, I know. You probably think things have been complicated up until this point, and you’re right! But the simple unit test jobs above are just the tip of the proverbial iceberg when it comes to the OpenStack CI platform.

For these complex Jenkins jobs, an additional set of tools is added to the mix:

  • Nodepool — Provides virtual machine instances to Jenkins masters for running complex, isolation-sensitive Jenkins jobs
  • Devstack-Gate — Scripts that create an OpenStack environment with Devstack, run tests against that environment, and archive logs and results

Assignment of a Node to Run a Job

Different Jenkins jobs require different workspaces, or environments, in which to run. For basic unit or style-checking test jobs, like the gate-{name}-python27 job template we dug into above, not much more is needed than a tox-managed virtualenv running in a source checkout of the project with a proposed change. However, for Jenkins jobs that run a series of integration tests against a full OpenStack installation, a workspace with significantly more resources and isolation is necessary. For these latter types of jobs, the upstream CI platform uses a pool of virtual machine instances. This pool of virtual machine instances is managed by a tool called nodepool. The virtual machines run in both HP Cloud and Rackspace Cloud, who graciously donate these instances for the upstream CI system to use. You can see the configuration of the Nodepool-managed set of instances here.

Instances that are created by Nodepool run Jenkins slave software, so that they can communicate with the upstream Jenkins CI master servers. A script called prepare_node.sh runs on each Nodepool instance. This script just git clones the OpenStack Infra config project to the node, installs Puppet, and runs a Puppet manifest that sets up the node based on the type of node it is. There are bare nodes, nodes that are meant to run Devstack to install OpenStack, and nodes specific to the Triple-O project. The node type that we will focus on here is the node that is meant to run Devstack. The script that runs to prepare one of these nodes is prepare_devstack_node.sh, which in turn calls prepare_devstack.sh. This script caches all of the repositories needed by Devstack, along with Devstack itself, in a workspace cache on the node. This workspace cache is used to enable fast reset of the workspace that is used during the running of a Jenkins job that uses Devstack to construct an OpenStack environment.

Devstack-Gate

The Devstack-Gate project is a set of scripts that are executed by certain Jenkins jobs that need to run integration or upgrade tests against a realistic OpenStack environment. Going back to the Cinder project configuration in the Zuul layout.yaml file:

  - name: openstack/cinder
    template:
      - name: python-jobs
... snip ...
    gate:
      - gate-cinder-requirements
      - gate-tempest-dsvm-full
      - gate-tempest-dsvm-postgres-full
      - gate-tempest-dsvm-neutron
      - gate-tempest-dsvm-large-ops
      - gate-tempest-dsvm-neutron-large-ops
      - gate-grenade-dsvm
... snip ...

Note the gate-tempest-dsvm-full line in the gate section. That Jenkins job template is one such job that needs an isolated workspace that has a full OpenStack environment running on it. Note that “dsvm” stands for “Devstack virtual machine”.

Let’s take a look at the JJB configuration of the gate-tempest-dsvm-full job:

- job-template:
    name: '{pipeline}-tempest-dsvm-full{branch-designator}'
    node: '{node}'
... snip ...
    builders:
      - devstack-checkout
      - shell: |
          #!/bin/bash -xe
          export PYTHONUNBUFFERED=true
          export DEVSTACK_GATE_TIMEOUT=180
          export DEVSTACK_GATE_TEMPEST=1
          export DEVSTACK_GATE_TEMPEST_FULL=1
          export BRANCH_OVERRIDE={branch-override}
          if [ "$BRANCH_OVERRIDE" != "default" ] ; then
              export OVERRIDE_ZUUL_BRANCH=$BRANCH_OVERRIDE
          fi
          cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh
          ./safe-devstack-vm-gate-wrap.sh
      - link-logs

    publishers:
      - devstack-logs
      - console-log

The devstack-checkout builder is simply a Bash script macro that looks like this:

- builder:
    name: devstack-checkout
    builders:
      - shell: |
          #!/bin/bash -xe
          if [[ ! -e devstack-gate ]]; then
              git clone git://git.openstack.org/openstack-infra/devstack-gate
          else
              cd devstack-gate
              git remote set-url origin git://git.openstack.org/openstack-infra/devstack-gate
              git remote update
              git reset --hard
              if ! git clean -x -f ; then
                  sleep 1
                  git clean -x -f
              fi
              git checkout master
              git reset --hard remotes/origin/master
              if ! git clean -x -f ; then
                  sleep 1
                  git clean -x -f
              fi
              cd ..
          fi

All the above is doing is git clone’ing the devstack-gate repository into the Jenkins workspace and, if the devstack-gate repository already exists, checking out the latest from the master branch.

Returning to our gate-tempest-dsvm-full JJB job template, we see the remaining part of the builder is a Bash scriptlet like so:

          #!/bin/bash -xe
          export PYTHONUNBUFFERED=true
          export DEVSTACK_GATE_TIMEOUT=180
          export DEVSTACK_GATE_TEMPEST=1
          export DEVSTACK_GATE_TEMPEST_FULL=1
          export BRANCH_OVERRIDE={branch-override}
          if [ "$BRANCH_OVERRIDE" != "default" ] ; then
              export OVERRIDE_ZUUL_BRANCH=$BRANCH_OVERRIDE
          fi
          cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh
          ./safe-devstack-vm-gate-wrap.sh

Not all that complicated. It exports some environment variables and copies the devstack-vm-gate-wrap.sh script out of the devstack-gate repo that was clone’d in the devstack-checkout macro to the work directory and then runs that script.

The devstack-vm-gate-wrap.sh script is responsible for setting even more environment variables and then calling the devstack-vm-gate.sh script, which is where the real magic happens.

Construction of OpenStack Environment with Devstack

The devstack-vm-gate.sh script is responsible for constructing a full OpenStack environment and running integration tests against that environment. To construct this OpenStack environment, it uses the excellent Devstack project. Devstack is an elaborate series of Bash scripts and functions that clones each OpenStack project's source code into /opt/stack/new/$project [5], runs python setup.py install in each project checkout, and starts each relevant OpenStack service (nova-compute, nova-scheduler, etc.) in a separate Linux screen session.

Devstack’s creation script (stack.sh) is called by devstack-vm-gate.sh after it creates the localrc file that stack.sh uses when constructing the Devstack environment.

Execution of Integration Tests Against an OpenStack Environment

Once the OpenStack environment is constructed, the devstack-vm-gate.sh script continues on to run a series of integration tests:

    cd $BASE/new/tempest
    if [[ "$DEVSTACK_GATE_TEMPEST_ALL" -eq "1" ]]; then
        echo "Running tempest all test suite"
        sudo -H -u tempest tox -eall -- --concurrency=$TEMPEST_CONCURRENCY
        res=$?
    elif [[ "$DEVSTACK_GATE_TEMPEST_FULL" -eq "1" ]]; then
        echo "Running tempest full test suite"
        sudo -H -u tempest tox -efull -- --concurrency=$TEMPEST_CONCURRENCY
        res=$?

You will note that the $DEVSTACK_GATE_TEMPEST_FULL Bash environment variable was set to “1” in the gate-tempest-dsvm-full Jenkins job builder scriptlet.

sudo -H -u tempest tox -efull triggers the execution of Tempest’s integration test suite. Tempest is the collection of canonical OpenStack integration tests that are used to validate that OpenStack APIs work according to spec and that patches to one OpenStack service do not inadvertently cause failures in another service.

If you are curious what actual commands are run, you can check out the tox.ini file in Tempest:

[testenv:full]
# The regex below is used to select which tests to run and exclude the slow tag:
# See the testrepostiory bug: https://bugs.launchpad.net/testrepository/+bug/1208610
commands =
  bash tools/pretty_tox.sh '(?!.*\[.*\bslow\b.*\])(^tempest\.(api|scenario|thirdparty|cli)) {posargs}'

In short, the above runs the Tempest API, scenario, CLI, and thirdparty tests.
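If you ever want to reproduce what the gate does by hand on a prepared Devstack node, here is a minimal sketch; the checkout path corresponds to $BASE/new/tempest from the gate script above, and the concurrency value is just an example:

# Run the same "full" Tempest tox environment the gate job runs
cd /opt/stack/new/tempest
sudo -H -u tempest tox -efull -- --concurrency=2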

Archival of Test Artifacts

The final piece of the puzzle is archiving all of the artifacts from the Jenkins job execution. These artifacts include log files from each individual OpenStack service running in Devstack’s screen sessions, the results of the Tempest test suite runs, as well as echo’d output from the devstack-vm-gate* scripts themselves.

These artifacts are gathered together by the devstack-logs and console-log JJB publisher macros:

- publisher:
    name: console-log
    publishers:
      - scp:
          site: 'static.openstack.org'
          files:
            - target: 'logs/$LOG_PATH'
              copy-console: true
              copy-after-failure: true


- publisher:
    name: devstack-logs
    publishers:
      - scp:
          site: 'static.openstack.org'
          files:
            - target: 'logs/$LOG_PATH'
              source: 'logs/**'
              keep-hierarchy: true
              copy-after-failure: true

Conclusion

I hope this article has helped you understand a bit more how the OpenStack continuous integration platform works. We’ve stepped through the flow through the various components of the platform, including which events trigger what actions in each component. You should now have a good idea of how the various parts of the upstream CI infrastructure are configured and where to look in the source code for more information.

The next article in this series discusses how to construct your own external testing platform that is linked with the upstream OpenStack CI platform. Hopefully, this article has provided you with most of the background information you need to understand the steps and tools involved in constructing that external testing platform.


[1]— The link describes and illustrates the non-human gatekeeper model with Bazaar, but the same concept is applicable to Git. See the OpenStack GitWorkflow pages for an illustration of the OpenStack specific model.
[2]— Zuul really is a pretty awesome bit of code kit. Jim Blair, the author, does an excellent job of explaining the merge proposal dependency graph and how Zuul can “trim” dead-end branches of the dependency graph in the Zuul documentation.
[3]— Looking for where a lot of the “magic” in the upstream gate happens? Take an afternoon to investigate the scripts in this directory. :)
[4]— Gerrit Jenkins plugin and Zuul export a variety of workspace environment variables into the Jenkins jobs that they trigger. If you are curious what these variables are, check out the Zuul documentation on parameters.
[5]— The reason the projects are installed into /opt/stack/new/$project is because the current HEAD of the target git branch for the project is installed into /opt/stack/old/$project. This is to allow an upgrade test tool called Grenade to test upgrade paths.

Testing Essex RC1 with Devstack and Tempest

This past week, the first release candidates of a number of OpenStack projects were released. From this point until the OpenStack Design Summit, we are pretty much focused on testing the release candidates. One way you can help out is to test the release candidate code, and this article will walk you through doing that with the Devstack and Tempest projects.

Setting up an OpenStack Essex RC1 Environment with Devstack

Before you test, you need an OpenStack environment. The easiest way to get an OpenStack environment up and running on a single machine [1] is to use Devstack. To get a version of Devstack that is designed to run against the release candidate branches of the OpenStack projects, simply clone the main repo of Devstack, like so:

git clone git://github.com/openstack-dev/devstack

Setting Up Your Devstack stackrc to pull RC1 branches of OpenStack Projects

Devstack contains a file called stackrc that is sourced by the main stack.sh script to create your OpenStack environment. The stackrc file contains environment variables that tell stack.sh which repositories and branches of OpenStack projects to clone. We will need to change the branches in the stackrc file from master to milestone-proposed to grab the release candidate branches. You will want to change the target branches for Nova, Glance, Keystone, and their respective client libraries. Here is what the default stackrc will look like:

stackrc before...

And here is what it should look like after you change the master branches to milestone-proposed branches appropriately:

stackrc after...
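The change amounts to something like the sketch below. The *_BRANCH variable names follow Devstack's stackrc conventions, so double-check your copy of stackrc for the exact names:

# In stackrc, switch the branch for Nova, Glance, Keystone and their
# client libraries from master to milestone-proposed, for example:
NOVA_BRANCH=milestone-proposed           # was: master
GLANCE_BRANCH=milestone-proposed         # was: master
KEYSTONE_BRANCH=milestone-proposed       # was: master
NOVACLIENT_BRANCH=milestone-proposed     # was: master
KEYSTONECLIENT_BRANCH=milestone-proposed # was: master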

Setting Up Your Devstack localrc for Running Tempest

There are a couple of things you will want to put into your Devstack’s localrc file before actually creating your OpenStack environment for testing with Tempest. So, open up your editor of choice and make sure that you have at least the following in your localrc file in Devstack’s root source directory. (If the file does not exist, simply create it.)

API_RATE_LIMIT=False
MYSQL_PASSWORD=pass
RABBIT_PASSWORD=pass
SERVICE_PASSWORD=pass
ADMIN_PASSWORD=pass
SERVICE_TOKEN=servicetoken

The first line instructs devstack to disable the default ratelimit middleware in the Nova API server. We need to do this because Tempest issues hundreds of API requests in a short amount of time as it runs its tests. If we don’t do this, Tempest will take a much longer time to run and you will likely get test failures with a bunch of overLimitFault messages.

The other lines simply set the various passwords to an easy-to-remember password, “pass”, for testing. The final line is currently needed to set up some services, but should be deprecated fairly soon…

Installing the OpenStack RC1 Environment

Now that you’ve got your devstack scripts cloned and your localrc in place, it’s time to run the main stack.sh script that will install OpenStack on your test machine. It’s as easy as running the script and being patient: on a first run it can take ten or more minutes to complete. You will see a bunch of output and then something like this:

$> ./stack.sh
<snip lots and lots of output...>
The default users are: admin and demo
The password: pass
This is your host ip: 192.168.1.98
stack.sh completed in 517 seconds.

At this point, feel free to run the info.sh script to verify all went well:

jpipes@librebox:~/repos/devstack$ ./tools/info.sh 
git|glance|milestone-proposed[f4a7035]
git|horizon|milestone-proposed[97fc4f8]
git|keystone|milestone-proposed[f3ce326]
git|nova|milestone-proposed[4e02ba1]
git|noVNC|master[22b9a75]
git|python-keystoneclient|milestone-proposed[bf13df1]
git|python-novaclient|milestone-proposed[aa0e87f]
os|vendor=Ubuntu
os|release=11.10
pkg|pep8|0.6.1-2ubuntu1
pkg|pylint|0.23.0-1
pkg|python-pip|1.0-1
<snip a whole bunch of package versions...>
pip|pika|0.9.5
localrc|API_RATE_LIMIT=False
localrc|HOST_IP=192.168.1.98
localrc|SERVICE_TOKEN=servicetoken

At this point, you have a fully functioning OpenStack RC1 environment. You can do the following to check the logs (actually just the daemon output in a screen window):

$> screen -x

Switch screen windows using the <Ctrl>-a NUM key combination, where NUM is the number of the screen window you see at the bottom of your console. Type <Ctrl>-a d to detach from your screen session. The screenshot below shows what your screen session may look like. In the screenshot, I’ve hit <Ctrl>-a 4 to switch to the n-api screen window which is showing the Nova API server daemon output.

screen session showing the Nova API server window...
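If you prefer explicit commands for poking around the screen session, here is a small sketch; the session name “stack” is an assumption about Devstack's default, so confirm it with screen -ls:

screen -ls        # list running screen sessions
screen -x stack   # attach to Devstack's session (assumed to be named "stack")
# Inside screen: Ctrl-a NUM switches windows, Ctrl-a d detaches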

Testing the OpenStack Essex RC1 Environment with Tempest

The Tempest project is an integration test suite for the OpenStack projects. Personally, in my testing setup at home, I run Tempest from a different machine on my local network than the machine that I run devstack on. However, you are free to run Tempest on the same machine you just installed Devstack on.

Grab Tempest by cloning the canonical repo:

$> git clone git://github.com/openstack/tempest

Once cloned, change directory into tempest.

Creating Your Tempest Configuration File

Tempest needs some information about your OpenStack environment to run properly. Because Tempest executes a series of API commands against the OpenStack environment, it needs to know where to find the main Compute API endpoint or where it can find the Keystone server that can return a service catalog. In addition, Tempest needs to know the UUID of the base image(s) that Devstack downloaded and installed in the Glance server.

Create the Tempest configuration file by copying the sample config file included in Tempest at $tempest_dir/etc/tempest.conf.sample:

$> cp etc/tempest.conf.sample etc/tempest.conf

Next, you will want to query the Glance API server to get the UUID of the base AMI image used in testing. To do this, issue a call like so:

jpipes@uberbox:~/repos/tempest$ glance -I admin -K pass -T admin -N http://192.168.1.98:5000/v2.0 -S keystone index | grep ami | cut -f1 | awk '{print $1}'
99a48bc4-d356-4b4d-95d4-650f707699c2

Of course, you will want to replace the appropriate parts of the call above with your own environment. In my case above, my devstack environment is running on a host 192.168.1.98 and I’m accessing Glance with an “admin” user in an “admin” tenant with a password of “pass”. Copy the UUID identifier of the image that is returned from the command above (in my case, that UUID is 99a48bc4-d356-4b4d-95d4-650f707699c2).
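If you would rather script this step than copy the UUID by hand, here is a rough sketch that substitutes the value into the config file created earlier. Adjust the credentials and endpoint to your environment; the sed patterns assume the image_ref/image_ref_alt option names shown in the sample config below:

# Grab the UUID of the base AMI image from Glance and plug it into
# the image_ref and image_ref_alt options in etc/tempest.conf
IMAGE_UUID=$(glance -I admin -K pass -T admin \
    -N http://192.168.1.98:5000/v2.0 -S keystone index \
    | grep ami | awk '{print $1}')
sed -i "s/^image_ref=.*/image_ref=${IMAGE_UUID}/" etc/tempest.conf
sed -i "s/^image_ref_alt=.*/image_ref_alt=${IMAGE_UUID}/" etc/tempest.conf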

Now go ahead and open up the configuration file you just created by copying the tempest.conf.sample file. You will see something like this:

[identity]
use_ssl=False
host=127.0.0.1
port=5000
api_version=v2.0
path=tokens
user=admin
password=admin-password
tenant_name=admin-project
strategy=keystone

[compute]
# Reference data for tests. The ref and ref_alt should be
# distinct images/flavors.
image_ref=e7ddc02e-92fa-4f82-b36f-59b39bf66a67
image_ref_alt=346f4039-a81e-44e0-9223-4a3d13c92a07
flavor_ref=1
flavor_ref_alt=2
ssh_timeout=300
build_interval=10
build_timeout=600
catalog_type=compute
create_image_enabled=true
resize_available=true

[image]
username=admin
password=********
tenant=admin
auth_url=http://localhost:5000/v2.0

You will want to replace the various configuration option values with ones that correspond to your environment. For the image_ref and image_ref_alt values in the [compute] section of the config file, use the UUID you copied from above.

Here is what my fully-replaced config file looks like. Keep in mind, my Devstack environment is running on 192.168.1.98. The values that differ from the sample config are the identity host, user, password and tenant, the image refs, and the credentials and auth_url in the [image] section.

[identity]
use_ssl=False
host=192.168.1.98
port=5000
api_version=v2.0
path=tokens
user=demo
password=pass
tenant_name=demo
strategy=keystone

[compute]
# Reference data for tests. The ref and ref_alt should be
# distinct images/flavors.
image_ref=99a48bc4-d356-4b4d-95d4-650f707699c2
image_ref_alt=99a48bc4-d356-4b4d-95d4-650f707699c2
flavor_ref=1
flavor_ref_alt=2
ssh_timeout=300
build_interval=10
build_timeout=600
catalog_type=compute
create_image_enabled=true
resize_available=true

[image]
username=demo
password=pass
tenant=demo
auth_url=http://192.168.1.98:5000/v2.0

Fire Away

The only thing left to do is fire Tempest at your OpenStack environment. Below, I’m executing Tempest in verbose mode. Nosetests is our standard test runner.

jpipes@uberbox:~/repos/tempest$ nosetests -v tempest
All public and private addresses for ... ok
Providing a network type should filter ... ok
<snip a whole bunch of tests>
An access IPv6 address must match a valid address pattern ... ok
Use an unencoded file when creating a server with personality ... ok
Create a server with name parameter empty ... ok

----------------------------------------------------------------------
Ran 131 tests in 798.125s

OK (SKIP=5)

After you’re done running Tempest — and hopefully everything runs OK — feel free to hit your Devstack Horizon dashboard and log in as your demo user. Unless you made some changes when installing Devstack above, your Horizon dashboard will be available at http://$DEVSTACK_HOST_IP.

If you encounter any test failures or issues, please be sure to log bugs for the appropriate project!

Known Issues

You can try running Tempest with the --processes=N option, which uses the nose multiprocess plugin. You might get a successful test run … but probably not :)

Likely, you will hit two issues. The first is that you will hit the quota limits for your demo user, because multiple processes will be creating instances and volumes. You can remedy this by altering the quotas for the tenant you are running the compute tests with.

Secondly, you may run into error output that looks like this:

======================================================================
ERROR: An image for the provided server should be created
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/jpipes/repos/tempest/tempest/tests/test_images.py", line 38, in test_create_delete_image
    self.servers_client.wait_for_server_status(server['id'], 'ACTIVE')
  File "/home/jpipes/repos/tempest/tempest/services/nova/json/servers_client.py", line 147, in wait_for_server_status
    raise exceptions.BuildErrorException(server_id=server_id)
BuildErrorException: Server 2e0b78fc-cc98-485b-8471-753778bee472 failed to build and is in ERROR status

And in your nova-compute log (or screen output) you might notice something like this:

libvirtError: operation failed: cannot restore domain 'instance-0000001f' uuid 2453b24b-87e1-4f85-9c25-ce3706a8c1d1 from a file which belongs to domain 'instance-0000001f' uuid deb8e941-4693-4768-90cc-03ad98444c85

I’ve not quite gotten to the bottom of this, but there seems to be a race condition that gets triggered in the Compute API when similar requests are received nearly simultaneously. I can reliably reproduce the above error by simply adding --processes=2 to my invocation of Tempest. I believe there is an issue with the seeding of identifiers, but it’s just a guess. I still have to figure it out. In the meantime, be aware of the issue.
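While digging into a failure like this, it can also help to re-run just the affected test module rather than the whole suite. A quick sketch, using the module path from the traceback above:

# Re-run only the image tests (module path taken from the traceback above)
nosetests -v tempest.tests.test_images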

1. You can certainly use devstack to install a multi-node OpenStack environment, but this tutorial sticks to a single-node environment for simplicity reasons.

Looking for a Few Good Engineers

Do you know Python? Do you get a thrill breaking other people’s code? Do you have experience with Chef, Puppet, Cobbler, Orchestra, or Jenkins? Have you ever deployed or worked on highly distributed systems? Do you understand virtualization technologies like KVM, Xen, ESX or Hyper-V?

If you answered “Yes!” to any of the questions above and are interested in working in a distributed, high-energy engineering team on solving complex problems with cloud infrastructure software, I want to hear from you. Experience with OpenStack is a huge plus.

I’m looking for QA software engineers, software deployment and/or automation engineers and software developers that can hit the ground running and make a big impact from Day One. Feel free to email me at REVERSE('moc.liamg@sepipyaj').

Diagnose and fix PEP8 issues during code review

I figured I’d write a quick post about how to deal with “pep8 issues” that come up during code reviews on OpenStack core projects. These issues come up often for new contributors, and it can be a source of frustration until the contributor understands how to diagnose and fix the issues that come up.

PEP8 is the Python PEP that deals with a recommended code style. All core (and peripheral Python) OpenStack projects validate that new code pushed to the source tree is “pep8-compliant”. When a new patchset is pushed from code review to Jenkins for the set of automated pre-merge tests, the pep8 command-line tool is run against the new source tree to ensure it meets PEP8 code style standards.

If this PEP8 Jenkins job fails, the code submitter will see a notification that the job failed, and the contributor must fix up any pep8 issues and push those fixes up for review again. Typically, this notification looks something like this:

Change subject: Added Keypair extension (os-keypairs) client and tests LP#900139
......................................................................


Patch Set 2: I would prefer that you didn't submit this

Build Unstable

https://jenkins.openstack.org/job/gate-tempest-pep8/38/ : UNSTABLE
https://jenkins.openstack.org/job/gate-tempest-merge/78/ : SUCCESS

--
To view, visit https://review.openstack.org/3179
To unsubscribe, visit https://review.openstack.org/settings

Gerrit-MessageType: comment
Gerrit-Change-Id: I34c7e9aa6a1796b8d4c3ac9b3b69438796752866
Gerrit-PatchSet: 2
Gerrit-Project: openstack/tempest
Gerrit-Branch: master
Gerrit-Owner: kavan-patil
Gerrit-Reviewer: Brian Waldon 
Gerrit-Reviewer: Jay Pipes 
Gerrit-Reviewer: Jenkins
Gerrit-Reviewer: kavan-patil

There are a couple ways you can diagnose what style points your code violated. Probably the easiest and fastest is to just follow the link in the notification email to the Jenkins job. Clicking the link above, I get to the Jenkins job page, which looks like this:

Clicking on the graph, I get to a details screen showing the source files and lines of code that violated pep8 rules:

I can then go to line 86 of tempest/openstack.py and investigate the code style violation.

Alternatively, I could run the pep8 CLI tool on my local branch, which will tell me the pep8 violations and what to fix, as shown here:

jpipes@uberbox:~/repos/tempest$ pep8 --repeat tempest
tempest/openstack.py:86:73: W292 no newline at end of file

There we are… the tempest/openstack.py file doesn’t end with a newline. An easy fixup. :)
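Once the fix is made, pushing the corrected patchset back up is just a matter of amending the commit and re-running git review. A quick sketch of the cycle, assuming you have the git-review tool installed:

echo "" >> tempest/openstack.py   # add the missing trailing newline
pep8 --repeat tempest             # should now report no violations
git commit -a --amend             # fold the fix into the existing patchset
git review                        # push the updated patchset to Gerrit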

Presentation: OpenStack QA – Walkthrough of Processes, Tools and Code

Last night I gave a short webinar to some folks about the basics of contributing to the Tempest project, which is the OpenStack integration test suite. It was the first time I’d used Google Docs to create and give a presentation and I must say I was really impressed with the ease-of-use of Google Docs Presentation. Well done, Google.

Anyway, I’ve uploaded a PDF of the presentation to this website and provided a link to the Google Docs presentation along with a brief overview of the topics covered in the slides below. As always, I love to get feedback on slides. Feel free to leave a comment here, email me or find me on IRC. Enjoy!

Google Presentation (HTML)
PDF slides


Topics included in the slides:

  • OpenStack Contribution Process
  • Running Devstack Locally
  • Running Tempest against an Environment
  • Walkthrough the Tempest Source Code
  • Progressively improving a test case
  • Common Scenarios in Code Review and Submission

OpenStack Dev Tip — Easily Pull a Review Branch

Just a quick tip for developers working on OpenStack projects that work on multiple development machines or want to pull a colleague’s code from the Gerrit review system and test it locally.

If you have followed the instructions about setting up a development environment successfully, you will have installed the git-review tool that Jim Blair and Monty Taylor maintain. The git-review tool has a nice little feature that enables you to easily pull any branch that anyone has pushed up to code review:

$> git review -d $REVIEW_NUM

The $REVIEW_NUM variable should be replaced with the identifier of the review branch in Gerrit.

For example, I developed some code on my laptop that I now want to pull to my beefier work machine. The original branch is failing a few tests in Jenkins and I want to diagnose what’s going on. The review branch is here: https://review.openstack.org/#change,1656. The review number (ID) is 1656.

To grab that branch into my local environment and check it out, I do:

jpipes@uberbox:~/repos/glance$ git review -d 1656
Downloading refs/changes/56/1656/2 from gerrit into review/jay_pipes/bug/850377

Doing a git status, you’ll note that I am now in the local branch called review/jay_pipes/bug/850377:

jpipes@uberbox:~/repos/glance$ git status
# On branch review/jay_pipes/bug/850377
# Your branch and 'gerrit/master' have diverged,
# and have 1 and 2 different commit(s) each, respectively.
#
nothing to commit (working directory clean)

I can now run tests, diagnose the issue(s), fix code up and do a:

$> git commit -a --amend
$> git review

And my changes will be pushed up to the original review in Gerrit for others to look at.

Essex Design Summit — QA Sessions to Note

Essex Design Summit

There are quite a few folks interested in QA coming to the OpenStack Essex Design Summit next week. I wanted to give you all a heads-up on the sessions that may be of interest to you.

Here they are:

Monday, Oct 3rd:

09:30-10:25 – Essex Release Cycle

Thierry Carrez, our illustrious release manager, will do a post-mortem on the Diablo release cycle and discuss potential changes for the Essex release cycle. I know almost all QAers have expressed desires to have maintenance branches managed by the QA team and I’ve heard suggestions about various QA-centric freeze points. Those interested in advocating for these things should plan to attend this session.

14:00-14:45 – Stable Release Updates

Dave Walker from Canonical plans to outline some possibilities for how to maintain and update stable releases of OpenStack projects.

15:00-15:45 – Separating API from Implementation of the API

Total self-promotion of a session I’ve proposed… I think anyone interested in stabilizing the OpenStack APIs and having OpenStack APIs become the open standards for the cloud computing industry should attend.

16:30-17:15 – OpenStack Compute API 2.0

Glen Campbell will be leading a discussion about how to improve the Compute (Nova) API for a 2.0 API series. I think it’s important that a number of folks on the QA team attend this session and get an idea of the things that we will be looking at in the future regarding the Compute API. Personally, I’m definitely planning on attending this one.

17:30 – 17:55 – NetStack Continuous Integration Planning

Personally, I will not be at this session as I have another session to lead. However, I think it is important that a number of people from the QA team attend this session, listen to the needs of the NetStack contributors, voice our support for their projects, explain what the goals of our team are, and enable some cross-team collaborative efforts around CI and QA.

Tuesday, Oct 4th:

09:30 – 09:55 – Documentation Strategies for OpenStack

Anne Gentle will be leading a discussion about documentation of OpenStack projects. One of the deliverables of the OpenStack QA team is clearly to identify areas where specifications don’t match behaviour, so I think it’s pretty critical that the Doc Team and the QA team be on the same page when it comes to how to coordinate communication of documentation discrepancies.

09:30 – 09:55 – VM Disk Management in Nova

At the same time as the documentation session, Paul Voccio is leading a discussion about VM disk management in Nova. Those QAers focusing on disk/volume management may want to attend this session to ensure the QA team has a good grasp of changes coming in this arena.

10:00 – 10:25 – OpenStack Common

Brian Lamar will be leading a discussion on getting serious about the potential of an openstack-common Python library of common code shared amongst many OpenStack projects. Hey, it’s a heck of a lot easier to QA code that’s in one location than the same code, written with slight differences, spread across many projects… seems like a no-brainer for the QA team to attend and support this idea. :)

11:00 – 11:25 – Monitoring in Swift

John Dickinson will be leading a session to discuss what things should be monitored across a Swift cluster, and what tools are available for monitoring. I think this discussion will be valuable for those of us interested in long-running production integration tests where Swift is one of the components of a full OpenStack test cluster.

12:00 – 12:25 – Integration Test Suites and Gating Trunk

A no brainer… in this session we will talk about the various integration test suites for Nova/Glance/Keystone and discuss the effort already underway to combine them. In addition, we will talk about what policies to recommend for OpenStack projects regarding what level of passing integration tests should hold up a gated trunk.

15:30 – 15:55 – Making VM State Handling More Robust

Phil Day is leading a discussion about ways in which the handling of VM state transitions can be inconsistent and confusing. Since the QA team is responsible for documenting just such inconsistencies and building test cases for such inconsistent behaviour, I think this session would be good to hang around in and listen/take notes.

16:30 – 17:25 – OpenStack Faithful Implementation Test Suites (FITS)

Josh McKenty will be talking about certain proposals regarding a FITS for OpenStack APIs. Should be an interesting session :)

Wednesday, Oct 5th:

09:30 – 10:25 – XenServer/KVM Feature Parity Plan

This session should be good for those QAers interested in identifying areas where feature parity between hypervisors is lacking, and discussing ways in which the QA team can document these disparities and produce tests for identifying future disparity among hypervisors.

11:00 – 11:45 – Glance Throughput Improvements

This session is being led by Tim Reddins, from HP, who (along with his team) has done some analysis on ways to improve Glance’s throughput. QAers interested in stress, capacity, and parallelism testing should definitely attend!

11:30 – 11:55 – Nova Upgrades

Ray Hookway will talk about ways that Nova’s update process can be made more robust. I imagine that the talk’s recommendations will be generally applicable to many OpenStack projects, not just Nova. I also think that some members of the QA team should attend — we should be able to create functional tests for upgrade processes for all OpenStack projects…

14:30 – 14:55 – Git/Gerrit Best Practices

Monty Taylor is leading this session on Gerrit/Git best practices. I recommend everyone go, if only to see the fireworks.

15:30 – 15:55 – Quality Assurance in OpenStack

Uhm, duh, you should all be at this one. :) We’ll discuss how to divide the voluminous amount of work among our members, talk about which projects (and components within certain projects) are high-priority items, the ways we should communicate and track progress, etc.

17:00 – 17:25 – Internal Service Communication

Brian Waldon is leading a session on internal service communication that should be quite interesting. The integration testing coverage of major internal service components of Nova is currently light, and is one of those areas I think should be carefully picked over by our QA team.

OK, that’s the recommendations from me, but of course, feel free to attend whatever sessions are of most interest to you. I’m very much looking forward to meeting all of you (we’re up to 28 members as of this writing).

Cheers, and see you tomorrow!
-jay