Setting Up an External OpenStack Testing System – Part 2

In this third article in the series, we discuss adding one or more Jenkins slave nodes to the external OpenStack testing platform that you (hopefully) set up in the second article in the series. The Jenkins slave nodes we create today will run Devstack and execute a set of Tempest integration tests against that Devstack environment.

Add a Credentials Record on the Jenkins Master

Before we can add a new slave node record on the Jenkins master, we need to create a set of credentials for the master to use when communicating with the slave nodes. Head over to the Jenkins web UI, which by default will be located at http://$MASTER_IP:8080/. Once there, follow these steps:

  1. Click the Credentials link on the left side panel
  2. Click the link for the Global domain
  3. Click the Add credentials link
  4. Select SSH username with private key from the dropdown labeled “Kind”
  5. Enter “jenkins” in the Username textbox
  6. Select the “From a file on Jenkins master” radio button and enter /var/lib/jenkins/.ssh/id_rsa in the File textbox (a note on this key follows the list)
  7. Click the OK button
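
Note that step 6 assumes a key pair already exists at /var/lib/jenkins/.ssh/id_rsa on the master; the Part 1 setup should have placed one there. If it does not, a minimal sketch to create one (mirroring the key parameters used elsewhere in this series):

# Run on the Jenkins master; only needed if no key exists at this path yet
sudo -u jenkins mkdir -p /var/lib/jenkins/.ssh
sudo -u jenkins ssh-keygen -t rsa -b 1024 -N '' -f /var/lib/jenkins/.ssh/id_rsa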

Construct a Jenkins Slave Node

We will now install Puppet and the software necessary for running Devstack and Jenkins slave agents on a node.

Slave Node Requirements

On the host or virtual machine that you have selected to use as your Jenkins slave node, you will need to ensure, just as with the Jenkins master node, that the following requirements are met:

  • These basic packages are installed:
    • wget
    • openssl
    • ssl-cert
    • ca-certificates
  • Have the SSH keys you use with GitHub in ~/.ssh/. It also helps to bring over your ~/.ssh/known_hosts and ~/.ssh/config files.
  • Have at least 40G of available disk space (a quick sanity check is sketched after this list)

IMPORTANT NOTE: If you were considering using LXC containers for your Jenkins slave nodes (as I originally struggled to do), don’t. Bugs like the inability to run open-iscsi in an LXC container make it impossible to run Devstack inside an LXC container, so use KVM or another non-shared-kernel virtual machine for the Devstack-running Jenkins slaves.

Download Your Config Data Repository

In the second article in this series, we went over the need for a data repository and, if you followed along in that article, you created a Git repository and stored an SSH key pair in that repository for Jenkins to use. Let’s get that data repository onto the slave node:

git clone $YOUR_DATA_REPO data

Install the Jenkins Software and Pre-cache OpenStack/Devstack Git Repos

And now, we install Puppet and have Puppet set up the slave software:

wget https://raw.github.com/jaypipes/os-ext-testing/master/puppet/install_slave.sh
bash install_slave.sh
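
The script produces a lot of output. If you want to keep a record of the run for later troubleshooting, one simple approach is to tee it to a log file:

bash install_slave.sh 2>&1 | tee install_slave.log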

Puppet will run for some time, installing the Jenkins slave agent software and necessary dependencies for running Devstack. Then you will see output like this:

Running: ['git', 'clone', 'https://git.openstack.org/openstack-dev/cookiecutter', 'openstack-dev/cookiecutter']
Cloning into 'openstack-dev/cookiecutter'...
...

This output indicates that Puppet is done and that a set of Nodepool scripts is running to cache upstream OpenStack Git repositories on the node and prepare Devstack. Part of preparing Devstack involves downloading the images Devstack uses for testing. Note that this step takes a long time! Go have a beer or other beverage and work on something else for a couple of hours.
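
There is no progress indicator for this caching step, but you can get a rough sense that it is still working by watching disk usage grow as the Git repositories and images are downloaded (a crude but path-agnostic check):

watch df -h /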

Adding a Slave Node on the Jenkins Master

In order to “register” our slave node with the Jenkins master, we need to create a new node record on the master. First, go to the Jenkins web UI, and then follow these steps:

  1. Click the Manage Jenkins link on the left
  2. Scroll down and click the Manage Nodes link
  3. Click the New Node link on the left
  4. Enter “devstack_slave1” in the Node name textbox
  5. Select the Dumb Slave radio button
  6. Click the OK button
  7. Enter 2 in the Executors textbox
  8. Enter “/home/jenkins/workspaces” in the Remote FS root textbox
  9. Enter “devstack_slave” in the Labels textbox
  10. Enter the IP Address of your slave host or VM in the Host textbox
  11. Select jenkins from the Credentials dropdown
  12. Click the Save button
  13. Click the Log link on the left. The log should show the master connecting to the slave, and at the end of the log should be: “Slave successfully connected and online” (if not, see the check sketched just below)
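
If the log shows connection failures instead, you can verify the credentials by hand from the master before digging further. A sketch, assuming the jenkins user and the key path from the credentials record created earlier:

# Run on the Jenkins master; $SLAVE_IP is your slave's address
sudo -u jenkins ssh -i /var/lib/jenkins/.ssh/id_rsa jenkins@$SLAVE_IP hostname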

Test the dsvm-tempest-full Jenkins Job

Now we are ready to have our Jenkins slave execute the long-running Jenkins job that uses Devstack to install an OpenStack environment on the Jenkins slave node, and run a set of Tempest tests against that environment. We want to test that the master can successfully run this long-running job before we set the job to be triggered by the upstream Gerrit event stream.

Go to the Jenkins web UI, click on the dsvm-tempest-full link in the jobs listing, and then click the Build Now link. You will notice an executor start up and a link to a newly-running job will appear in the Build History box on the left:

Build History panel in Jenkins

Click on the link to the new job, then click Console Output in the left panel. You should see the job executing, with Bash output showing up on the right:

Manually running the dsvm-tempest-full Jenkins job

Troubleshooting

If you see errors pop up, you will need to address those issues. In my testing, issues generally were around:

  • Firewall/networking issues: Make sure that the Jenkins master node can properly communicate over SSH port 22 to the slave nodes. If you are using virtual machines to run the master or slave nodes, make sure you don’t have any iptables rules that are preventing traffic from master to slave (a quick check is sketched after this list).
  • Missing files like “No file found: /opt/nodepool-scripts/…”: Make sure that the install_slave.sh Bash script completed successfully. This script takes a long time to execute, as it pulls down a bunch of images for Devstack caching.
  • LXC: See above about why you cannot currently use LXC containers for Jenkins slaves that run Devstack
  • Zuul processes borked: In order to have jobs triggered from upstream, both the zuul-server and zuul-merger processes need to be running, connecting to Gearman, and firing job events properly. First, make sure the right processes are running:
    # First, make sure there are **2** zuul-server processes and
    # **1** zuul-merger process when you run this:
    ps aux | grep zuul
    # If there aren't, do this:
    sudo rm -rf /var/run/zuul/*
    sudo service zuul start
    sudo service zuul-merger start
    

    Next, make sure that the Gearman service has registered queues for all the Jenkins jobs. You can do this using telnet (4730 is the default port for Gearman):

    ubuntu@master:~$ telnet 127.0.0.1 4730
    Trying 127.0.0.1...
    Connected to 127.0.0.1.
    Escape character is '^]'.
    status
    build:noop-check-communication:master	0	0	2
    build:dsvm-tempest-full	0	0	1
    build:dsvm-tempest-full:devstack_slave	0	0	1
    merger:merge	0	0	1
    zuul:enqueue	0	0	1
    merger:update	0	0	1
    zuul:promote	0	0	1
    set_description:master	0	0	1
    build:noop-check-communication	0	0	2
    stop:master	0	0	1
    .
    ^]
    
    telnet> quit 
    Connection closed.
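
For the firewall/networking issues mentioned in the first bullet above, a quick connectivity check from the master will often spot the problem (assuming netcat is available):

# Run on the Jenkins master; $SLAVE_IP is your slave's address
nc -zv $SLAVE_IP 22    # is SSH reachable from master to slave?
sudo iptables -L -n    # look for rules dropping master-to-slave traffic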
    

Enabling the dsvm-tempest-full Job in the Zuul Pipelines

Once you’ve successfully run the dsvm-tempest-full job manually, you should enable this job in the appropriate Zuul pipelines. To do so, on the Jenkins master node, edit the etc/zuul/layout.yaml file in your data repository (don’t forget to git commit your changes after you’ve made them and push them to your data repository’s canonical location).

If you used the example layout.yaml from my data repository and you’ve been following along this tutorial series, the projects section of your file will look like this:

projects:
  - name: openstack-dev/sandbox
    check:
      # Remove this after successfully verifying communication with upstream
      # and seeing a posted successful review.
      - noop-check-communication
      # Uncomment this job when you have a jenkins slave running and want to
      # test a full Tempest run within devstack.
      #- dsvm-tempest-full
    gate:
      # Remove this after successfully verifying communication with upstream
      # and seeing a posted successful review.
      - noop-check-communication
      # Uncomment this job when you have a jenkins slave running and want to
      # test a full Tempest run within devstack.
      #- dsvm-tempest-full

To enable the dsvm-tempest-full Jenkins job to run in the check pipeline when a patch is received (or recheck comment added) to the openstack-dev/sandbox project, simply uncomment the line:

      #- dsvm-tempest-full

And then reload Zuul and Zuul-merger:

sudo service zuul reload
sudo service zuul-merger reload
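
After reloading, you can confirm that the job is registered with Gearman, just as in the Troubleshooting section above. If you have the gearman-tools package installed, gearadmin saves you the telnet dance:

# Assumes the gearman-tools package; otherwise use telnet 127.0.0.1 4730
gearadmin --status | grep dsvm-tempest-full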

From now on, new patches and recheck comments on the openstack-dev/sandbox project will fire the dsvm-tempest-full Jenkins job on your devstack slave node. :) If your test run was successful, you will see something like this in your Jenkins console for the job run:

\o/ Steve Holt!

And you will note that the patch that triggered your Jenkins job now shows a successful comment and a +1 Verified vote:

A comment showing external job successful runs

What Next?

From here, the changes you make to your Jenkins Job configuration files are up to you. The first place to look for ideas is the devstack-vm-gate.sh script. Look near the bottom of that script for a number of environment variables that you can set in order to tinker with what the script will execute.

If you are a Cinder storage vendor looking to test your hardware and associated Cinder driver against OpenStack, you will want to either modify the example dsvm-tempest-full job or create a copy of that example job definition and customize it to your needs. Make sure that Cinder is configured to use your storage driver in the cinder.conf file. You may want to create a script that copies most of what the devstack-vm-gate.sh script does, calls the devstack iniset function to configure your storage driver, and then runs devstack and Tempest.
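
As a rough illustration (every name below is a placeholder, not a real driver or path), such a script might use devstack’s iniset helper to point Cinder at your backend before the tests run:

# Hypothetical sketch: configure a vendor Cinder driver with iniset.
# Adjust the devstack checkout path and driver class to your environment.
source /opt/stack/devstack/functions
iniset /etc/cinder/cinder.conf DEFAULT volume_driver \
    cinder.volume.drivers.myvendor.MyVendorISCSIDriver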

Publishing Console and Devstack Logs

Finally, you will want to publish the log files collected by both Jenkins and the Devstack run to some external site. Folks at Arista have used dropbox.com to do this. I’ll leave setting this up as an exercise for the reader. Hint: you will want to set the PUBLISH_HOST variable in your data repository’s vars.sh to a host that you have SCP rights to, and uncomment the publishers section in the example dsvm-tempest-full job:

#    publishers:
#      - devstack-logs  # In macros.yaml from os-ext-testing
#      - console-log  # In macros.yaml from os-ext-testing
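
For example, assuming your vars.sh follows the same export style as the other settings in the data repository (the hostname below is a placeholder):

export PUBLISH_HOST=logs.example.com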

Final Thoughts

I hope this three-part article series has been helpful for you to understand the upstream OpenStack continuous integration platform, and instructional in helping you set up your own external testing platform using Jenkins, Zuul, Jenkins Job Builder, and Devstack-Gate. Please do let me know if you run into issues. I will post updates to the Troubleshooting section above when I hear from you and (hopefully) help you resolve any problems.

Setting Up an External OpenStack Testing System – Part 1

This post is intended to walk someone through the process of establishing an external testing platform that is linked with the upstream OpenStack continuous integration platform. If you haven’t already, please do read the first article in this series that discusses the upstream OpenStack CI platform in detail. At the end of that article, you should have all the background information on the tools needed to establish your own linked external testing platform.

EXTREMELY IMPORTANT NOTE: The upstream Puppet modules used in this article have changed dramatically since writing this. I am in the process of updating this blog entry, but at this time, some important steps do not work properly!

What Does an External Test Platform Do?

In short, an external testing platform enables third parties to run tests — ostensibly against an OpenStack environment that is configured with that third party’s drivers or hardware — and report the results of those tests on the code review of a proposed patch. It’s easy to see the benefit of this real-time feedback by taking a look at a code review that shows a number of these external platforms providing feedback. In this screenshot, you can see a number of Verified +1 labels and one Verified -1 label added by external Neutron vendor test platforms on a proposed patch to Neutron:

Verified +1 and -1 labels added by external testing systems on a Neutron patch

Each of these systems, when adding a Verified label to a review, does so by adding a comment to the review. These comments contain links to artifacts from the external testing system’s test run for the proposed patch, as shown here:

Comments added to a review by the vendor testing platforms

The developer submitting a patch can use those links to investigate why their patch has caused test failures to occur for that external test platform.

Why Set Up an External Test Platform?

The benefits of external testing integration with upstream code review are numerous:

A tight feedback loop
The third party gets quick notifications that a proposed patch to the upstream code has caused a failure in their driver or configuration. The tighter the “feedback loop”, the faster fixes can be identified.
Better code coverage
Drivers and plugins that may not be used in the default configuration for a project can be tested with the same rigor and frequency as drivers that are enabled in the upstream devstack VM gate tests. This prevents bitrot and encourages developers to maintain code that is housed in the main source trees.
Increased consistency and standards
Determining a standard set of tests that prove a driver implements the full or partial API of a project means that drivers can be verified to work with a particular release of OpenStack. If you’ve ever had a conversation with a potential deployer of OpenStack who wonders how they know that their choice of storage or networking vendor, or underlying hypervisor, actually works with the version of OpenStack they plan to deploy, then you know why this is a critical thing!

Why might you be thinking about how to set up an external testing platform? Well, a number of OpenStack projects have had discussions already about requirements for vendors to complete integration of their testing platforms with the upstream OpenStack CI platform. The Neutron developer community is ahead of the game, with more than half a dozen vendors already providing linked testing that appears on Neutron code reviews.

The Cinder project also has had discussions around enforcing a policy that any driver that is in the Cinder source tree have tests run on each commit to validate the driver is working properly. Similarly, the Nova community has discussed the same policy for hypervisor drivers in that project’s source tree. So, while this may be old news for some teams, hopefully this post will help vendors that are new to the OpenStack contribution world get integrated quickly and smoothly.

The Tools You Will Need

The components involved in building a simple linked external testing system that can listen to and notify the upstream OpenStack continuous integration platform are as follows:

Jenkins CI
The server that is responsible for executing jobs that run tests for a project
Zuul
A system that configures and manages event pipelines that launch Jenkins jobs
Jenkins Job Builder (JJB)
Makes construction/maintenance of Jenkins job config XML files a breeze
Devstack-Gate and Nodepool Scripts
A collection of scripts that constructs an OpenStack environment from source checkouts

I’ll be covering how to install and configure the above components to build your own testing platform using a set of scripts and Puppet modules. Of course, there are a number of ways that you can install and configure any of these components. You could manually install each one somewhere by following the install instructions in its documentation. However, I do not recommend that. The problem with manual installation and configuration is two-fold:

  1. If something goes wrong, you have to re-install everything from scratch. If you haven’t backed up your configuration somewhere, you will have to re-configure everything from memory.
  2. You cannot launch a new configuration or instance of your testing platform easily, since you have to manually set everything up again.

A better solution is to use a configuration management system, such as Puppet, Chef, Ansible or SaltStack to manage the deployment of these components, along with a Git repository to store configuration data. In this article, I will show you how to install an external testing system on multiple hosts or virtual machines using a set of Bash scripts and Puppet modules I have collected into a source repository on GitHub. If you don’t like Puppet or would just prefer to use a different configuration management tool, that’s totally fine. You can look at the Puppet modules in this repo for inspiration (and eventually I will write some Ansible scripts in the OpenStack External Testing project, too).

Preparation

Before I go into the installation instructions, you will need to take care of a few things. Follow these detailed steps and you should be all good.

Getting an Upstream Service Account

In order for your testing platform to post review comments to Gerrit code reviews on openstack.org, you will need to have a service account registered with the OpenStack Infra team. See this link for instructions on getting this account.

Don’t have an SSH key pair for your Gerrit service account? You can create one like so:

ssh-keygen -t rsa -b 1024 -N '' -f gerrit_key

The above will produce a key pair: two files called gerrit_key and gerrit_key.pub. Copy the text of gerrit_key.pub into the email you send to the OpenStack Infra mailing list. Keep both files handy for use in the next step.

Create a Git Repository to Store Configuration Data

When we install our external testing platform, the Puppet modules are fed a set of configuration options and files that are specific to your environment, including the SSH private key for the Gerrit service account. You will need a place to store this private configuration data, and the ideal place is a Git repository, since additions and changes to this data will be tracked just like changes to source code.

I created a source repository on GitHub that you can use as an example. Instead of forking the repository, as you might normally do, I recommend just git clone’ing the repository to some local directory and making it your own data repository:

git clone git@github.com:jaypipes/os-ext-testing-data ~/mydatarepo
cd mydatarepo
rm -rf .git
git init .
git add .
git commit -a -m "My new data repository"

Now you’ve got your own data repository to store your private configuration data and you can put it up in some private location somewhere — perhaps in a private organization in GitHub, perhaps on a Git server you have somewhere.
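
For example, to push it to a private remote (the URL below is a placeholder for wherever you host it):

git remote add origin git@github.com:myorg/my-testing-data.git
git push -u origin master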

Put Your Gerrit Service Account Private Key Into the Data Repository

The next thing you will want to do is add to this data repository the SSH key pair from the step above, in which you registered an upstream Gerrit service account.

If you created a new key pair using the ssh-keygen command above, copy the gerrit_key file into your data repository.

If you did not create a new key pair (you used an existing key pair) or you created a key pair that wasn’t called gerrit_key, simply copy that key pair into the data repository, then open up the file called vars.sh, and change the following line in it:

export UPSTREAM_GERRIT_SSH_KEY_PATH=gerrit_key

And change gerrit_key to the name of your SSH private key.

Set Your Gerrit Account Username

Next, open up the file vars.sh in your data repository (if you haven’t already), and change the following line in it:

export UPSTREAM_GERRIT_USER=jaypipes-testing

And replace jaypipes-testing with your Gerrit service account username.

Set Your Vendor Name in the Test Jenkins Job

Next, open up the file etc/jenkins_jobs/config/projects.yaml in your data repository. Change the following line in it:

  vendor: myvendor

Change myvendor to your organization’s name.

(Optional) Create Your Own Jenkins SSH Key Pair

I have a private/public SSH key pair (named jenkins_key[.pub]) in the example data repository. Because I’ve put the private key in there, it’s no longer useful as anything other than an example, so you may want to recreate your own. You can do so like this:

cd $DATA_DIRECTORY
ssh-keygen -t rsa -b 1024 -N '' -f jenkins_key
git commit -a -m "Changed jenkins key to a new private one"

Save Changes in Your Data Repository

OK, we’re done with the first changes to your data repository and we’re ready to install a Jenkins master node. But first, save your changes and push your commit to wherever you are storing your data repository privately:

git add .
git commit -a -m "Added Gerrit SSH key and username"
git push

Requirements for Nodes

On the nodes (hosts, virtual machines, or LXC containers) that you are going to install Jenkins master and slaves into, you will want to ensure the following:

  • These basic packages are installed:
    • wget
    • openssl
    • ssl-cert
    • ca-certificates
  • Have the SSH keys you use with GitHub in ~/.ssh/. It also helps to bring over your ~/.ssh/known_hosts and ~/.ssh/config files.

Setting up Your Jenkins Master Node

On the host or virtual machine (or LXC container) you wish to run the Jenkins Master node on, run the following:

git clone $YOUR_DATA_REPO data
wget https://raw.github.com/jaypipes/os-ext-testing/master/puppet/install_master.sh
bash install_master.sh

The above should create an SSL self-signed certificate for Apache to run Jenkins UI with, and then install Jenkins, Jenkins Job Builder, Zuul, Nodepool Scripts, and a bunch of support packages.
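
When the script finishes, a couple of quick checks will confirm that Jenkins came up (a sketch; Zuul is not started until later in this article):

ps aux | grep jenkins | grep -v grep          # the Jenkins service should be running
curl -sI http://localhost:8080/ | head -n 1   # the Jenkins UI should answer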

Important Note: Since publishing this article, the upstream Zuul system underwent a bit of a refactoring, with the Zuul git-related activities being executed by a separate Zuul worker process called zuul-merger. I’ve updated the Puppet modules in the os-ext-testing repository accordingly, but if you had installed the Jenkins master with Zuul from the Puppet modules before Tuesday, February 18th, 2014, you will need to do the following on your master node to get everything reconfigured properly:

# NOTE: This is only necessary if you installed a Jenkins master from the
# os-ext-testing repository before Tuesday, February 18th, 2014!
sudo service zuul stop
sudo rm -rf /var/log/zuul/* /var/run/zuul/*
sudo -i
# As root...
cd /root/config; git pull
exit
cd os-ext-testing; git pull; cd ../
cp os-ext-testing/puppet/install_master.sh .
bash install_master.sh

Troubleshooting note: There is a bug in the upstream openstack-infra/config project (with a patch submitted) that may cause Puppet to error with the following:

Duplicate declaration: A2mod[rewrite] is already declared in file /home/vagrant/os-ext-testing/puppet/modules/os_ext_testing/manifests/master.pp at line 34; cannot redeclare at /root/config/modules/zuul/manifests/init.pp:236 on node master

To fix this issue, open the /root/config/modules/zuul/manifests/init.pp file and comment out these lines:

  a2mod { 'rewrite':
    ensure => present,
  }
  a2mod { 'proxy':
    ensure => present,
  }
  a2mod { 'proxy_http':
    ensure => present,
  }

When Puppet completes, go ahead and open up the Jenkins web UI, which by default will be at http://$HOST_IP:8080. You will need to enable the Gearman workers that Zuul and Jenkins use to interact. To do this:

  1. Click the `Manage Jenkins` link on the left
  2. Click the `Configure System` link
  3. Scroll down until you see “Gearman Plugin Config”. Check the “Enable Gearman” checkbox.
  4. Click the “Test Connection” button and verify Jenkins connects to Gearman.
  5. Scroll down to the bottom of the page and click `Save`
  6. Note: Darragh O’Reilly noticed when he first did this on his machine, that the Gearman plugin was not actually enabled (though it was installed). He mentioned that simply restarting the Jenkins service fixed this problem, and the Gearman Plugin Config section then appeared on the Manage Jenkins -> Configure System page.
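
If you hit the situation Darragh describes, where the plugin is installed but its configuration section does not appear, the fix is simply:

sudo service jenkins restart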

Once you are done with that, it’s time to load up your Jenkins jobs and start Zuul:

sudo jenkins-jobs --flush-cache update /etc/jenkins_jobs/config/
sudo service zuul start
sudo service zuul-merger start

If you refresh the main Jenkins web UI front page, you should now see two jobs show up:

Jenkins Master Web UI Showing Sandbox Jenkins Jobs Created by JJB

Testing Communication Between Upstream and Your Master

Congratulations. You’ve successfully set up your Jenkins master. Let’s now test connectivity between upstream and our external testing platform using the simple sandbox-noop-check-communication job. By default, I set this Jenkins job to execute on the master node for the openstack-dev/sandbox project [1]. Here is the project configuration in the example data repository’s etc/jenkins_jobs/config/projects.yaml file:

- project:
    name: sandbox
    github-org: openstack-dev
    node: master

    jobs:
        - noop-check-communication
        - dsvm-tempest-full:
            node: devstack_slave

Note that the node is master by default. The sandbox-dsvm-tempest-full Jenkins job is configured to run on a node labeled devstack_slave, but we will cover that later when we bring up our Jenkins slave.

In our Zuul configuration, we have two pipelines: check and gate. There is only a single project listed in the layout.yaml Zuul project configuration file, the openstack-dev/sandbox project:

projects:
    - name: openstack-dev/sandbox
      check:
        - sandbox-noop-check-communication

By default, the only job that is enabled is the sandbox-noop-check-communication Jenkins job, and it will get run whenever a patchset is created in the upstream openstack-dev/sandbox project, as well as any time someone adds a comment with the words “recheck no bug” or “recheck bug XXXXX”. So, let us create a sample patch to that project and check to see if the sandbox-noop-check-communication job fires properly.

Before we do that, let’s go ahead and tail the Zuul debug log, grepping for the term “sandbox”. This will show messages if communication is working properly.

sudo tail -f /var/log/zuul/debug.log | grep sandbox

OK, now create a simple test patch in sandbox. Do this on your development workstation, not your Jenkins master:

git clone git@github.com:openstack-dev/sandbox /tmp/sandbox
cd /tmp/sandbox
git checkout -b testing-ext
touch mytest
git add mytest
git commit -a -m "Testing comms"
git review

Output should look like so:

jaypipes@cranky:~$ git clone git@github.com:openstack-dev/sandbox /tmp/sandbox
Cloning into '/tmp/sandbox'...
remote: Reusing existing pack: 13, done.
remote: Total 13 (delta 0), reused 0 (delta 0)
Receiving objects: 100% (13/13), done.
Resolving deltas: 100% (4/4), done.
Checking connectivity... done
jaypipes@cranky:~$ cd /tmp/sandbox
jaypipes@cranky:/tmp/sandbox$ git checkout -b testing-ext
Switched to a new branch 'testing-ext'
jaypipes@cranky:/tmp/sandbox$ touch mytest
jaypipes@cranky:/tmp/sandbox$ git add mytest
jaypipes@cranky:/tmp/sandbox$ git commit -a -m "Testing comms"
[testing-ext 51f90e3] Testing comms
 1 file changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 mytest
jaypipes@cranky:/tmp/sandbox$ git review
Creating a git remote called "gerrit" that maps to:
	ssh://jaypipes@review.openstack.org:29418/openstack-dev/sandbox.git
Your change was committed before the commit hook was installed.
Amending the commit to add a gerrit change id.
remote: Processing changes: new: 1, done
remote:
remote: New Changes:
remote:   https://review.openstack.org/73631
remote:
To ssh://jaypipes@review.openstack.org:29418/openstack-dev/sandbox.git
 * [new branch]      HEAD -> refs/publish/master/testing-ext

Keep an eye on your tail’d Zuul debug log file. If all is working, you should see something like this:

2014-02-14 16:08:51,437 INFO zuul.Gerrit: Updating information for 73631,1
2014-02-14 16:08:51,629 DEBUG zuul.Gerrit: Change  status: NEW
2014-02-14 16:08:51,630 DEBUG zuul.Scheduler: Adding trigger event:
2014-02-14 16:08:51,630 DEBUG zuul.Scheduler: Done adding trigger event:
2014-02-14 16:08:51,630 DEBUG zuul.Scheduler: Run handler awake
2014-02-14 16:08:51,631 DEBUG zuul.Scheduler: Fetching trigger event
2014-02-14 16:08:51,631 DEBUG zuul.Scheduler: Processing trigger event
2014-02-14 16:08:51,631 DEBUG zuul.IndependentPipelineManager: Starting queue processor: check
2014-02-14 16:08:51,631 DEBUG zuul.IndependentPipelineManager: Finished queue processor: check (changed: False)
2014-02-14 16:08:51,631 DEBUG zuul.DependentPipelineManager: Starting queue processor: gate
2014-02-14 16:08:51,631 DEBUG zuul.DependentPipelineManager: Finished queue processor: gate (changed: False)

If you go to the link to the code review in Gerrit (the link output after you ran git review), you will see your Gerrit testing account has added a +1 Verified vote in the code review:

Successful communication between upstream and our external system

Congratulations. You now have an external testing platform that is receiving events from the upstream Gerrit system, triggering Jenkins jobs on your master Jenkins server, and writing reviews back to the upstream Gerrit system. The next article goes over adding a Jenkins slave to your system, which is necessary to run real Jenkins jobs that run devstack-based gate tests. Please do let me know what you think of both this article and the source repository of scripts to set things up. I’m eager for feedback and critique. :)

[1] The OpenStack Sandbox project is a project that can be used for testing the integration of external testing systems with upstream. By creating a patch against this project, you can trigger the Jenkins jobs that are created during this tutorial.