So, What is the Core of OpenStack?

In a conversation on Twitter, Lydia Leong stated something that I’ve heard from a number of industry folks and OpenStack insiders alike:

The core needs to be small, rock-solid stable, and readily extensible.

I responded:

Depends on what you mean by “core” :) I think that term has been abused.

It’s probably worth writing down a response that spans more than 140 characters, so I decided to write a post about why the term “core” is, indeed, abused, and some of my thoughts about Lydia’s pronouncement.

First, on specificity

In my years working in the cloud space, it has dawned on me that there really is no single way of looking at the cloud development and deployment space. As soon as one person (myself included) tries to describe with any detail what cloud systems are, invariably someone else will say “no, it’s not (just) that… it’s this as well.”

For instance, if I say that cloud is on-demand computing that gives application developers tools to drive their own deployment, someone will correctly point out that cloud is also hardware that has been virtualized to fit budgetary and technological needs of an IT department.

Similarly, if I said that cloud was all about treating VMs as cattle, someone will rightly come along and say that legacy applications and “pet VMs” have as much of a right to benefit from virtualized infrastructure as those hipster-scale folks.

One man’s young dame is another’s old woman. (http://bit.ly/to-each-his-own)

And, then John Dickinson will appropriately say, “hey, Jay, cloud isn’t all about compute, you know.” And of course, he’d be totally correct.

My point is that, well, cloud means something different to everyone. And there’s really nothing wrong with that. It just means that when you express an idea about the cloud space, you should qualify exactly what it is you are applying that idea to, and conversely, what you are not applying your idea to.

And, of course, Twitter, being limited in its conversational envelope size, is hardly an ideal medium to express grand ideas about a space such as “the cloud” that already suffers from a dearth of crisp definition.

On golf balls, layercake, and taxonomy

Forrester, Gartner and other companies obsessively attempt to categorize and rank companies and products in ways that they feel are helpful to their CIO/tech buyer audience. And that’s fine; everybody’s got to make a living out here.

But lately, it seems the OpenStack developer community has gotten gung-ho about categorizing various OpenStack projects. I want to summarize here some of the existing thoughts on the matter, before delving into my own personal opinions.

Late last year, Dean Troyer originally posted his ideas about categorizing projects within the OpenStack arena using a set of layers. His ideas were centered around finding a way to describe new projects in a technical (as opposed to trademark or political) sense, in order to more easily identify where the project fit in relation to other projects. The impetus for Dean’s “OpenStack Layers” approach was his work in DevStack, in trying to detect the boundaries and dependencies between components that DevStack configures.

Sean Dague more recently expanded on Dean’s ideas, attempting to further clarify where newer (since Dean’s post) incubated and integrated projects lie in the OpenStack layers. Monty Taylor and Robert Collins followed up Sean’s post with posts of their own, each attempting to further provide ways in which OpenStack projects may be grouped together.

Dean’s model had five simple layers:

  • Layer 0: Operating Systems and Libraries — OS, database, message queue, Oslo, other libraries
  • Layer 1: The Basics — Keystone, Glance, Nova
  • Layer 2: Extending the Base — Cinder, Neutron, Swift, Ironic
  • Layer 3: All the options — Ceilometer, Horizon
  • Layer 4: Turtles all the way up — Heat, Trove, Zaqar, Designate

Sean’s model extended Dean’s, adding in a couple of projects, like Barbican, that had not really been around at the time of Dean’s post, and calling the turtle layer “Consumption Services”.

Monty’s model grouped OpenStack projects in yet a different manner:

  • Layer #1 The Only Layer — Any project that is needed to spin up a VM running WordPress on it. His list here is Keystone, Glance, Nova, Cinder, Neutron and Designate. Monty believes this is where the layer model falls apart, and all other terms should be simple “tags” that describe some aspect of a project
  • Tag: Cloud Native — Any project “that provide(s) features for end user applications which could be provided by services run within VMs instead.” Examples here are Trove and Swift. Trove provides managed databases in VMs instead of databases running on bare metal. Swift provides object storage instead of the user having to run their own bare-metal machines with lots of disk space.
  • Tag: Operations — Any project that bridges functional gaps between various components for operators of OpenStack clouds. Examples here include Ceilometer and Ironic.
  • Tag: User Interface — Any project that enhances the user interface to other OpenStack services. Examples here are Horizon, Heat, and the openstack-sdk

http://bit.ly/flaming-golf-ball

Thierry Carrez redefined Monty’s “Layer #1” as “Ring 0”. His depiction of OpenStack is like the construction of a golf ball, with a small rubber core and a larger plastic covering[1], rather than the layercake of Sean and Dean. In Thierry’s model, the OpenStack projects that would live in “Ring 0” would be the projects that need to be “tightly integrated” and “limited-by-design”. All other projects would live outside this Ring 0, but still be “in OpenStack”. Thierry also brings up the question of what to do about the concept of Programs, which, to date, are the official way in which the OpenStack community blesses teams of people working towards common missions. At least, that’s what Programs are supposed to be. In practice, they tend to be viewed as equal to the main project that the Program began life as.

Finally, Robert Collins’ take on the layers discussion brought up a couple of interesting points about the importance of APIs vs. implementation (which I will cover shortly) and argued that the “core” of OpenStack really is smaller than Monty envisioned. Robert ended up with a taxonomy of OpenStack projects broken up by function: a set of teams would select which projects implement an API belonging to one of the following functional categories:

  • IaaS product: selects components from the tent to make OpenStack/IaaS
  • PaaS product: selects components from the tent to make OpenStack/PaaS
  • CaaS product: (containers)
  • SaaS product: (storage)
  • NaaS product: (networking – but things like NFV, not the basic Neutron we love today). Things where the thing you get is useful in its own right, not just as plumbing for a VM.

So why do we insist on categorizing these projects?

All of the above OpenStack leaders try to get at the fundamental question of what is the core of OpenStack? But, really, what is this obsession we have with shoving OpenStack projects into these broad categories? Why do we need to constantly define what this core thing is? And what does Lydia refer to when she says that the “core needs to be small, rock-solid stable, and readily extensible”?

I am going to posit that labeling some set of projects “the OpenStack Core” is actually not useful to any of our users[2], and that replacing overloaded terms such as “core” or “integrated” or “incubated” with a set of informative tags for each project will lead to a more enlightened OpenStack user community.

Monty started his blog post with a discussion about the different types of users that we serve in the OpenStack community. I think when we answer questions about OpenStack projects, we always need to keep in mind what information is actually useful to these different user groups, and why that information is useful. When we understand the characteristics that make a certain piece of information useful to a group of users, we can emphasize those characteristics in the language we use to describe OpenStack.

Operators

Let’s take the group of OpenStack users that Monty calls “deployers”, that I like to call “operators”. These are folks that have deployed OpenStack and are running an OpenStack cloud for one or more users. What kinds of information do these users crave? I can think of a number of questions that this user group frequently asks:

  • Should I deploy OpenStack using an OpenStack distribution like RDO, or should I deploy OpenStack myself using something like DevStack or maybe the Chef cookbooks on Stackforge?
  • Is the Icehouse version of Nova more stable than Havana?
  • Does Ceilometer’s SQL driver handle 2000 or more VMs?
  • What notification queues can my Nagios NRPE plugin monitor to get an early warning sign that something is degraded?
  • Can my old Grizzly nova-network deployment be upgraded to Icehouse Neutron with no downtime to VM networking?
  • What is the best way to diagnose data plane network connectivity issues with Havana Neutron deployments that use OpenVSwitch 1.11 and the older OpenVSwitch agent with ML2?
  • Is Heat capable of supporting 100 concurrent users?

For operators, questions about OpenStack projects generally revolve around stability, performance & scalability and deployment & diagnostics. They want to know practical information on their options with regards to how they can deploy OpenStack, keep it up and running smoothly, and maintain it over time.

What does defining a set of “core OpenStack projects” give the operator user group? Nothing.

What might we be able to tag OpenStack projects with that would be of use to operators? Well, I think the answer to this question comes in the form of answers to the questions that these operators frequently ask. How’s this for a set of tags that might be useful to operators?

  • included-in-$distribution-$version: Indicates a project has been packaged for inclusion in some OpenStack distribution. Examples: included-in-rdo-icehouse, included-in-uca-trusty, included-in-mos-5.1
  • stability-$rating: Indicates the operator community’s viewpoint on the stability of a project. Examples: stability-experimental, stability-improved, stability-mature
  • driver-$driver-experimental: Indicates the developers of driver $driver consider the code to be experimental. Examples: driver-sql-experimental, driver-docker-experimental
  • puppet-tested: Indicates that Puppet modules exist in the openstack/ code namespace that are functionally tested to install and configure the service. Similar tags could exist for chef-tested or ansible-tested, etc
  • operator-docs[-$topic]: Indicates that there is Operator-specific documentation for the project, optionally with a $topic suffix. Examples: operator-docs-nagios, operator-docs-monitoring
  • rally-verified-sla: Indicates the project has one or more Rally SLA definitions included in its gate testing platform
  • upgrade-from-$version-(in-place|downtime-needed): Indicates a project can be upgraded with or without downtime from some previous $version. Examples: upgrade-from-icehouse-in-place, upgrade-from-juno-downtime-needed

I personally feel all of the above tags contain more useful information for operators than having a set of “core OpenStack projects”.
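
As a purely illustrative sketch (the values below are hypothetical, not real assessments of Nova), this kind of operator-facing tag data could be published as simple structured metadata per project:

# Hypothetical operator-facing tag metadata for one project; tag names follow
# the proposals above, and the specific values are examples only.
nova:
  tags:
    - included-in-rdo-icehouse
    - stability-mature
    - puppet-tested
    - operator-docs-monitoring
    - upgrade-from-icehouse-in-place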

Application developers (end users and DevOps folks)

Monty calls this group of users “end users”. The end users of clouds are application developers and DevOps people who support the operation of an application on a cloud environment. Again, let’s take a look at what this group of users typically cares about, in the form of frequent questions that this user group poses:

  • Does the Fog Ruby library support connecting to HP Helion?
  • Can I use python-neutronclient to connect to RAX Cloud and if so, does it support all of Neutron’s Firewall API extensions?
  • Can I use Zaqar to implement a simple RPC mechanism for my cloud application? Do any public clouds expose Zaqar?
  • Can I develop my cloud application against a private OpenStack cloud created with DevStack and deploy the application on Amazon EC2 and S3?
  • How can I deploy a WordPress blog for my pointy-haired boss in my company’s OpenStack private cloud in less than 15 minutes to get him off my back?
  • Can I use Nagios to monitor my cloud application, or should I use Ceilometer for that?

What does defining a set of “core OpenStack projects” give the end user group? Nothing.

How about a set of informative tags instead?

  • supported-on-$publiccloud: Indicates that $publiccloud supports the service. Examples: supported-on-rax, supported-on-hp
  • cloud-native: Indicates the project implements a service designed for applications built for the cloud
  • user-docs[-$topic]: Indicates there is documentation specifically for application developers around this project, optionally for a smaller $topic. Examples: user-docs-nagios, user-docs-fog, user-docs-wordpress
  • compare-to-$subject: Indicates that the service provides similar functionality to $subject. Examples: compare-to-s3, compare-to-cloud-foundry
  • compat-with-$api: Indicates the project exposes functionality that allows $api to be used to perform some native action. Examples: compat-with-ec2-2013.10.9, compat-with-glance-v2. Optionally consider a -with-docs suffix to indicate documentation on the compatibility exists

You will probably note that many of the tags for application developers involve the existence of documentation about compatibility with, or comparison to, other APIs or services. This is because docs really matter to application developers. API docs, tutorials, usage examples, SDK documentation. All of this stuff is critical to driving strong adoption of OpenStack with cloud app developers. The move to a documentation focus can and should start with a simple set of user-focused tags that inform our application developer users about the existence of official documentation about the things they care about.
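
As another hypothetical sketch, the same sort of metadata could carry the application-developer-facing tags (the entries below simply reuse the example tags from the list above and are illustrative only):

# Hypothetical end-user-facing tag metadata; values are illustrative only.
swift:
  tags:
    - cloud-native
    - compare-to-s3
    - user-docs-fog
    - supported-on-examplecloud   # placeholder; substitute real provider names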

Packagers

The next group of OpenStack users are the packagers of the OpenStack projects. Monty calls this group of users the “distributors”. Folks who work on operating system packages of OpenStack projects and libraries, folks who work on the OpenStack distributions themselves, and folks who work on configuration management tool modules that install and configure an OpenStack project are all members of this user group. This group of users care about a number of things, such as:

  • Does the Juno Nova release have support for the version of libvirt on Ubuntu Trusty Tahr?
  • Does version X of the python-novaclient support version Y of the Nova REST API?
  • Can Havana Nova speak with an Icehouse Glance server?
  • Which Keystone token driver should I enable by default?
  • Have database migrations for Icehouse Neutron been tested with PostgreSQL 9.3?
  • Has the qpid message queue driver been tested with Juno Ceilometer?

What does defining a set of “core OpenStack projects” give the packager user group? Nothing.

You’ll notice that many of the concerns that packagers have revolve around two things: documentation around version dependencies and testing of various optional drivers or settings. What does a designation that a project is “in core OpenStack” answer for the packager? Really, nothing at all. A finer-grained source of information is what is desired. A set of tags would be much more useful:

  • gated-with-$thing[-$version]: Indicates that patches to the project are gated on successful functional integration testing with $thing at an optional $version. Examples: gated-with-neutron, gated-with-postgresql, gated-with-glanceclient-1.1
  • tested-with-$thing[-$version]: Indicates that functional integration tests are run post-merge against the project. Examples: tested-with-ceph, tested-with-postgresql-9.3
  • driver-$driver-(recommended|default|gated|tested): Indicates a $driver is recommended for use, is the default, or is tested with some gate or post-merge integration tests. Examples: driver-sql-recommended, driver-mongodb-gated
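
As with the other user groups, a hypothetical metadata sketch (reusing the example tags above; not a statement about Ceilometer’s actual test coverage) might look like:

# Hypothetical packager-facing tag metadata; illustrative only.
ceilometer:
  tags:
    - tested-with-postgresql-9.3
    - driver-mongodb-gated
    - driver-sql-recommended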

OpenStack Developers

OK, so the final group of OpenStack users is the developers of the OpenStack projects themselves. This is the group that does the most squealing about categorizing projects in and out of the OpenStack tent. We’ve created a governance structure and organizational model that, as Zane Bitter said, is like the reverse of Conway’s law. We tend to box ourselves in because we focus so intently on creating these categories by which we group the OpenStack projects.

We created the terms “incubated” and “integrated” to ostensibly inform ourselves of which projects have aligned with the OpenStack governance model and infrastructure tooling processes. And then we created the gate as a model of those categories, without first asking ourselves whether the terms “incubated” and “integrated” were really serving a technically sound purpose. We group all the integrated and incubated projects together, saying that all these projects must integrate with each other in one giant, complex gate testing platform.

Zane thinks that is madness, and I tend to agree.

Personally, I feel we need to stop thinking in terms of “core” or “incubated” or “integrated” and instead stick to thinking about the projects that live in the OpenStack tent[3] in terms of the soft and hard dependencies that each project has to another. In graphical representation, the current set of OpenStack projects (that aren’t software libraries) might look something like this:

OpenStack Non-library Project Dependency Graph

Converting that graphical representation to something machine-readable might produce a YAML snippet like this:

projects:
  - keystone
  - tempest
  - swift:
      soft:
        - keystone
  - glance:
      hard:
        - keystone
      soft:
        - swift
  # ...
  - nova:
      hard:
        - keystone
        - glance
        - cinder
      soft:
        - neutron
        - ironic
        - barbican

The YAML above is nothing more than a set of tags that could be applied to OpenStack projects to inform developers about their relative dependency on other projects. This dependency graph information can then be used to construct a testing platform that should be more efficient than the current system that looks the way it does because we’ve insisted on using this single “integrated” category to describe the relationship of an OpenStack project with another.
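
To make that concrete, here is a rough, hypothetical sketch (not actual Zuul configuration) of what a tool could generate from those dependency tags: each project co-gates only with its hard dependencies, while jobs involving soft dependencies report results without blocking the merge:

# Hypothetical gate definitions derived from the dependency tags above;
# the structure and job grouping are illustrative only.
gate:
  nova:
    co-gate-with: [keystone, glance, cinder]    # hard dependencies: failures block the merge
    non-blocking: [neutron, ironic, barbican]   # soft dependencies: report, but do not block
  glance:
    co-gate-with: [keystone]
    non-blocking: [swift]
  swift:
    co-gate-with: []
    non-blocking: [keystone]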

So, my answer to Lydia

Seems I’ve yet again exceeded my 140 character limit. And I haven’t addressed Lydia’s point about the “core needs to be small, rock-solid stable, and readily extensible.” I think you can probably tell by now that I don’t think there is a useful way of denoting “a core OpenStack”. At least, not useful to our users. Does this mean that I disagree with Lydia’s sentiment about what is important for the OpenStack contributor community to focus on? Absolutely not.

So, in the spirit of specificity and useful taxonomic tagging, I will go further and make the following statements about the OpenStack community and code using Lydia’s words.

Our public REST APIs need to be designed to solve small problems

Right now, the public REST APIs of a number of OpenStack projects suffer from what I like to call “extension-itis”. Some of the REST APIs honestly feel like they were stitched together by a group of kindergartners, each trying to purposefully do something different than the last kid who attached their little piece of the quilt. APIs need to be small, simple, and clear. Why? Because they are our first impression, and OpenStack will live and die by the grace of the application developers that use our APIs.

We should have a working group composed of members of the Technical Committee, interested operators and end users, and API standards experts that control the public face of OpenStack: its HTTP REST APIs. This working group should have teeth, and be able to enforce a set of rules across OpenStack projects about API consistency, resource naming, clarity, discoverability, and documentation.

Our deployment tools need to be as rock-solid stable as any other part of our code

I’m not just talking about the official OpenStack Deployment Program here (Triple-O). I’m talking about the combination of Chef cookbooks, Puppet modules, Ansible playbooks, Salt state files, DevStack scripts, Mirantis FUEL deployment tooling, and the myriad other things that deploy and configure OpenStack services. They need to be tested in our continuous integration platform with the same zeal as everything else. As a community, we need to dedicate resources to make this a reality.

Our governance model needs to be readily extensible

Unfortunately, I believe our governance model and organizational structure has a tendency to reinforce the status quo and not adapt to changing times as quickly as it should. Examples of this rigidity are plentiful. See, for example, the inability of the Technical Committee to come up with a single set of criteria for Zaqar’s graduation from incubation status. As a member of the Technical Committee, I can say I was pretty embarrassed at the way we treated the Zaqar contributor community. I’d love for the role of the Technical Committee to change from one of Judge to one of Advisor. The current Supreme OpenStack Court vision just reinforces the impression that our community cannot find a meaningful balance between progress and stability, and I for one refuse to admit that such a balance cannot be reached.

The current organization of official OpenStack Programs is something I also believe is no longer useful, and discourages our community from extending itself in the ways we should be extending. For example, we should be encouraging the sort of competition and pluralism that having multiple overlapping services like Ceilometer, Monasca, and Stacktach all under the OpenStack tent would bring.

There is no core

there is no spoon

Finally, unless it’s not obvious, I don’t believe there is any OpenStack core. Or at least, that there’s any use to spending the effort required to concoct what it might be.

OK, I think that’s just about enough for one night of writing. Thanks very much if you made it all the way through. I look forward to your comments.

[1]OK, yes, golf balls have more than just a core and a larger plastic covering…just go with me on this and stop being so argumentative.

[2]Denoting some subset of OpenStack projects as “core” may be useful to industry pundits and marketing folks, but those aren’t OpenStack users.

Answering the existential question in OpenStack

There have been a slew of threads on the OpenStack Developer mailing list in the last few months that touch on overall governance and community issues. Thierry Carrez’s “the future of the integrated release” thread sparked discussions around how we keep up with the ever-increasing number of projects in the OpenStack ecosystem while maintaining quality and stability. Discussions around where to place the Rally program, suggestions of creating a Neutron Incubator program, and the debate around graduating Zaqar from incubation all bring up fundamental questions about how the OpenStack community is going to cope with the incredible growth it is experiencing.

Underneath these discussions, however, is an even more pivotal question that remains to be answered. Chris Dent put it well in his recent response on the Zaqar thread:

http://bit.ly/big-top-tent

Which leads inevitably to the existential questions of “What is OpenStack in the business of?” and “How do we stay sane while being in that business?”

I have a few thoughts on these questions. A definitive opinion, if you will. Please permit me to wax philosophical for a bit. To be fair, my thoughts on this topic have changed pretty dramatically over the last few years, as my role in the OpenStack world has evolved from developer to operator/deployer and back to developer, and from being on the Project Policy Board (remember that?) to being on the Technical Committee, off the TC, and then on it again.

What is OpenStack in the business of?

I believe OpenStack is in the business of providing an extensible cloud platform built on open standards and collaboration.

In other words, we should be a Big Tent. Our policies and our governing structure should all work to support people that collaborate with each other to enhance the experience of users of this extensible cloud platform.

What would being a Big Tent mean for the OpenStack community?

To me, being a Big Tent means welcoming projects and communities that wish to:

  • “speak” the OpenStack APIs
  • extend OpenStack’s reach, breadth, and interplay with other cloud systems
  • enhance the OpenStack user experience

Note that I am deliberately using the generic term “user” in the last bullet point above. Users include application developers writing against one or more public OpenStack APIs, developers of the OpenStack projects, deployers of OpenStack, operators of OpenStack, and anyone else who interfaces with OpenStack in any way.

What would being under the OpenStack tent mean for a project?

What would “being under the tent” mean for a project and its community? I think there’s a pretty simple answer to this as well.

A project included in the OpenStack Big Tent would mean that:

  1. The project is recognized as being a piece of code that enhances the experience of OpenStack users
  2. The project’s source code lives in the openstack/ code namespace

Clearly, I’m talking only about projects that wish to be included in the OpenStack tent. I’m not suggesting that the OpenStack community go out and start annexing parts of other communities and bringing their source code into the OpenStack tent willy-nilly!

…and what would being under the tent not mean for a project?

Likewise, being under the OpenStack Big Tent should not mean any of the following:

  • That the project is the only way to provide a specific function to OpenStack users
  • That the project is the best way to provide a specific function to OpenStack users
  • That the project receives some allotted portion of shared technical or management resources

Note that the first two bullet points above go towards my opinion that the OpenStack community should embrace competition both within its own ecosystem and in the external community, by identifying ways to increase interoperability. Specifically, I don’t have a problem with having projects under the OpenStack tent that share some common functionality or purpose.

What requirements must projects meet?

Finally, there is the question of what requirements a project that wishes to be under the OpenStack tent must meet. I think a minimal set is better here, as it lowers the barriers to entry for interested parties. I think the items below keep the bar high enough to ensure that applicants aren’t just trying to self-promote without contributing to the common good of OpenStack.

  • Source code is licensed under the Apache License or a compatible FLOSS license and code is submitted according to a Developer Certificate of Origin
  • There should be liaisons for release management and cross-project coordination
  • There should be documentation that clearly describes the function of the project, its intended audience, any public API it exposes, and how contribution to the project is handled

And that’s it. There would be no further requirements for a project to exist under the OpenStack tent.

Predicting a few questions

I’m going to go out on a limb and try to predict some questions that might be thrown my way about my answer to the existential question of OpenStack. Not only do I believe this is a good exercise for anyone who plans to defend their thoughts in the court of public opinion, but I think that having concrete answers to these questions will help some folks recognize how this change in overall OpenStack direction would affect specific decisions and policies that have come to light in recent months.

What about the gate? How will the gate possibly keep up with all these projects?

The gate is the test platform that guards the branches of the OpenStack projects that are in the Integrated OpenStack release against bugs that result from merging flawed code in a dependent project. This test platform consists of hundreds of virtual machines that run a set of integration tests against OpenStack environments created using the Devstack tool.

http://bit.ly/pug-on-gate

The gate is also composed of a group of talented humans who have each demonstrated heroic characteristics over the past four years during periods in which the gate or its underlying components get “wedged” and need to be “unstuck.”

These talented engineers, known collectively as the OpenStack Infra team, are a resource that is shared among projects in the OpenStack community. Currently, “in the OpenStack community” means the set of code that lives in the openstack/ namespace AND the stackforge/ namespace. However, while the OpenStack Infra team is shared among all projects in the OpenStack community, the gate platform is NOT shared by all projects in the community. Instead, the gate platform (the integrated queue) is only available to the projects in the OpenStack community that have incubated or integrated status.

Now, it is 100% true that given the current structure of our gate test platform, for each additional project that is included in the current OpenStack Integrated Release, there is an exponential impact on the number of test machines and the amount of test code that will be run in the gate platform. I will be the first to admit that.

However, I believe the gate test platform does not need to impose this limitation on our development and test processes. I believe the gate, in its current form, produces the kind of frustration that it does for the following reasons:

  • We don’t use fail-fast policies in the gate
  • We don’t use hierarchical test construction properly in the gate
  • We assume bi-directional test dependence when all dependencies are uni-directional
  • We run tests against code that a patch does not touch
  • We only have one gate
  • Expectations of developers trying to merge code into a tree are improperly set

By rethinking what the gate test platform is doing, and especially rethinking the policies that the gate enforces, I think we can test more projects, more effectively. I’m not going to go into minute detail for each of the above items, as I will do that in a followup post. The point here is to step back a bit and see if the policies we’ve given ourselves might just be the cause of our woes, and not the solution we think they might be.
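
Still, as one hypothetical illustration of the fail-fast and hierarchical points above (a sketch, not a proposal for actual Zuul syntax), a check pipeline could be staged so that cheap tests guard expensive ones and the integration stage only involves a project’s hard dependencies:

# Hypothetical staged pipeline; illustrative only.
check:
  stage-1-fast: [pep8, unit-tests]                 # minutes; a failure stops the run here
  stage-2-functional: [functional-tests]           # runs only if stage 1 passes
  stage-3-integration: [dsvm-with-hard-deps-only]  # devstack job limited to hard dependencies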

Would there continue to be an incubation process or status?

Short answer: No.

http://bit.ly/incubate-my-chicken

The status of incubation within OpenStack has become the end goal for many projects over the last 3 years. The reason? Like it or not, the reason is because being incubated means the project gets to live in the openstack/ code namespace. And being in the openstack/ code namespace brings an air of official-ness to a project. It’s not rational, and it’s not sensible, but the reality is that many companies will simply not dedicate resources (money and people, which cost money) to the development of a project that does not live in the openstack/ code namespace. If our goal, as a community, is to increase the adoption of projects that enhance the user experience of OpenStack, then the incubation status, and the barrier to inclusion that comes with it, is not helping that goal.

The original aims of the incubation process and status were to push projects that wished to be called “OpenStack” to adopt a consistent release cadence, development workflow and code review system. It was assumed that by aligning these things with other projects under the OpenStack tent, that these projects would become better integrated with each other, by nature of being under the same communal constraints and having to coordinate their development through a shared release manager. These are laudable goals, but in practice, incubation has become more of a status symbol and political campaign process than something that leads to increased integration with other OpenStack projects.

Do we still want to encourage projects that live under the OpenStack tent to work with each other where appropriate? Absolutely yes. However, I don’t think the existing incubation process, which features a graduation review (or three, or four) in front of the Technical Committee, is something that is useful to OpenStack users.

Application developers want to work with solid OpenStack APIs and easy-to-use command line and library interfaces. The incubation process doesn’t address these things specifically.

Deployers of an OpenStack cloud platform want to deploy various OpenStack projects using packages or source repositories (and sometimes both). The process of incubation doesn’t address the stability or cohesiveness of operating system packages for the OpenStack project. This is something that would be better handled with a working group comprised of folks from the OpenStack and operating system distributions, not by a graduation review before the Technical Committee.

Operators of OpenStack cloud platforms want the ability to easily operate, maintain, and upgrade the OpenStack projects that they choose to utilize. Much of the operator tooling and utilities live outside of the openstack/ code namespace, either in the stackforge/ code namespace or in random repositories on GitHub. By not having OpenStack-specific operator tooling in the openstack/ code namespace, we de-legitimize these tools and make the business decision to use them harder. The incubation process, along with the Technical Committee “come before the court” bottleneck, doesn’t enable these worthy tools and utilities to be used as effectively as they could be, which ultimately means less value for our operator users.

Won’t the quality of OpenStack suffer if we let too many projects in?

No, I don’t believe it will. Or at least, I don’t believe that the quality of OpenStack as a whole is a function of how many projects we let live in the openstack/ code namespace. The quality of OpenStack projects should be judged separately, not as a whole. It’s neither fair to older, mature projects to group them with newcomers, nor is it reasonable to hold new projects to the standards of stability and quality that 5+ year old software has evolved to.

I think instead of having an OpenStack Integrated Release, we should instead have tags that describe the stability and quality of a particular OpenStack project, along with much more granular integration information.

In other words, as a deployer, I couldn’t care less whether Ceilometer is “in the integrated release.”

What I do care about, however, is whether the package of Neutron I’m installing emits notifications in a format that the package of Ceilometer I’m installing understands and tracks.

As a deployer, I don’t care at all that Swift is “in the integrated release.”

What I do care about is whether my installed Glance package is able to store images in and stream images from a Swift deployment I installed last May.

As a cloud user, I don’t care at all that Nova is “in the integrated release.”

What I do care about is whether the version of Nova installed at my cloud provider will work with the version of python-novaclient I happened to install on my laptop.

So, instead of going through the arduous and less-than-useful process of graduation and incubation, I would prefer we spend our limited resources working on very specific documentation that clearly tells OpenStack users about the tested integration points between our OpenStack client, server and library/dependency projects.

Would Marconi/Zaqar be in the integrated OpenStack release under this scheme?

No. I don’t believe we need an integrated release. Yes, Zaqar would be under the OpenStack tent, but no, there would be no integrated release.

Would Stacktach, Monasca and Ceilometer all live in the openstack/ code namespace?

If that is something the Stacktach and Monasca communities would like, and once they meet the requirements for inclusion under the OpenStack tent, then yes. And I think that’s perfectly fine. Having Stacktach under the same tent does not mean Ceilometer isn’t a viable option for OpenStack users. Nor does it mean that Stacktach is the best solution in the telemetry space for OpenStack users.

Our decisions should be based on what is best for our OpenStack users. And I think giving our users choices in the telemetry space is what is best for our users. What works for one user may not be the best choice for another user. But, if we focus on the documentation of our projects in the OpenStack tent (see the requirements for projects to be under the tent, above), we can let these competing visions co-exist peacefully while having our user’s best interests in our hearts.

Would Stacktach and Ceilometer both be called “OpenStack Telemetry” then?

No. I don’t believe the concept of Programs is useful. I’d like to get rid of them altogether and replace the current programs taxonomy with a looser tag-based system. Monty Taylor, in this blog post, has some good ideas on that particular topic, including using a specific tag for “production readiness”.

Would the Technical Committee still decide which projects are in OpenStack?

No. I believe the Technical Committee should no longer be this weird court before which prospective projects must plead their case for inclusion into OpenStack. The requirements for a project to be included in the OpenStack tent should be objective and finite, and anyone on the new project-config-core team should be able to vet applicants based on the simple list of requirements.

How would this affect whether some commercial entity can call its product OpenStack?

This would not affect trademark, copyright, or branding policies in any way. The DefCore effort, which I have pretty much steered as clear of as possible, is free to concoct whatever legal and foundation policies it sees fit with regards to what combinations of code and implementation can be called “powered by OpenStack”, “runs on OpenStack”, or whatever else business folks think gives them a competitive edge. I’ll follow up on the business/trademark side of OpenStack in a separate post, but I want to make it clear that my proposed broadening of the OpenStack tent has no relation to the business side of OpenStack. This is strictly a community approach.

What would happen to the stackforge/ code namespace?

Nothing. Projects that wish to continue to be in the OpenStack ecosystem but do not meet the requirements for being under the OpenStack tent could still live in the stackforge/ code namespace, same as they do today.

What benefits does this approach bring?

The benefits of this approach are as follows:

  1. Clarity: due to the simple and objective requirements for inclusion into the OpenStack tent, there will be a clear way to make decisions on what should “be OpenStack”. No more ongoing questions about whether a project is “infrastructure-y enough” or aggravation about projects’ missions or functionality overlapping in certain areas.
  2. Specificity: no more vague reference to an “integrated release” that frankly doesn’t convey the information that OpenStack users are really after. Use quality and integration tags to specify which projects integrate with which versions of other projects, and focus on quality, accurate documentation about integration points instead of the incubated and integrated project status symbols and graduation process.
  3. Remove the issue of finite shared resources from the decision-making process: Being “under the OpenStack tent” does not mean that projects under the tent have any special access to or allocation of shared resources such as the OpenStack infrastructure team. Decisions about whether a project should be “in OpenStack” therefore need not be made based on resource constraints or conditions at a specific point in time. This means fewer decisions that look arbitrary to onlookers.
  4. Convey an inclusive attitude: Clearer, simpler, more objective rules for inclusion into the OpenStack tent mean that the OpenStack community will present an inclusive attitude. This is the opposite of our community’s reputation today, which is one where we are viewed as having meritocracy at a micro-level (within the contributor communities in a project), but a hegemony at a macro-level, with a Cabal of insiders judging whether programs meet subjective determinations of what is “cloudy enough” or “OpenStack enough”. The more inclusive we are as a community, the more we will attract application developers to the OpenStack community, and it’s on the backs of application developers that OpenStack will live or die in the long run.

Thanks for reading. I look forward to your comments.

Setting Up an External OpenStack Testing System – Part 2

In this third article in the series, we discuss adding one or more Jenkins slave nodes to the external OpenStack testing platform that you (hopefully) set up in the second article in the series. The Jenkins slave nodes we create today will run Devstack and execute a set of Tempest integration tests against that Devstack environment.

Add a Credentials Record on the Jenkins Master

Before we can add a new slave node record on the Jenkins master, we need to create a set of credentials for the master to use when communicating with the slave nodes. Head over to the Jenkins web UI, which by default will be located at http://$MASTER_IP:8080/. Once there, follow these steps:

  1. Click the Credentials link on the left side panel
  2. Click the link for the Global domain:
    credentials-list
  3. Click the Add credentials link
  4. Select SSH username with private key from the dropdown labeled “Kind”
  5. Enter “jenkins” in the Username textbox
  6. Select the “From a file on Jenkins master” radio button and enter /var/lib/jenkins/.ssh/id_rsa in the File textbox:
    add-credentials
  7. Click the OK button
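
Before filling in that form, it can help to confirm that the key file actually exists on the master and is readable by the jenkins user. This is a quick sanity check that assumes the master setup from the previous article created the key pair at the default path; if it did not, generate one as the jenkins user with ssh-keygen first:

# Run on the Jenkins master; paths assume the defaults used in this series
sudo ls -l /var/lib/jenkins/.ssh/id_rsa /var/lib/jenkins/.ssh/id_rsa.pub
sudo -u jenkins ssh-keygen -lf /var/lib/jenkins/.ssh/id_rsa.pub   # prints the key fingerprint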

Construct a Jenkins Slave Node

We will now install Puppet and the software necessary for running Devstack and Jenkins slave agents on a node.

Slave Node Requirements

On the host or virtual machine that you have selected to use as your Jenkins slave node, you will need to ensure, like the Jenkins master node, that the node has the following:

  • These basic packages are installed:
    • wget
    • openssl
    • ssl-cert
    • ca-certificates
  • Have the SSH keys you use with GitHub in ~/.ssh/. It also helps to bring over your ~/.ssh/known_hosts and ~/.ssh/config files as well.
  • Have at least 40G of available disk space

IMPORTANT NOTE: If you were considering using LXC containers for your Jenkins slave nodes (as I originally struggled to do): don’t. Use a KVM or other non-shared-kernel virtual machine for the devstack-running Jenkins slaves. Bugs like the inability to run open-iscsi in an LXC container make it impossible to run devstack inside an LXC container.

Download Your Config Data Repository

In the second article in this series, we went over the need for a data repository and, if you followed along in that article, you created a Git repository and stored an SSH key pair in that repository for Jenkins to use. Let’s get that data repository onto the slave node:

git clone $YOUR_DATA_REPO data

Install the Jenkins software and pre-cache OpenStack/Devstack Git Repos

And now, we install Puppet and have Puppet set up the slave software:

wget https://raw.github.com/jaypipes/os-ext-testing/master/puppet/install_slave.sh
bash install_slave.sh

Puppet will run for some time, installing the Jenkins slave agent software and necessary dependencies for running Devstack. Then you will see output like this:

Running: ['git', 'clone', 'https://git.openstack.org/openstack-dev/cookiecutter', 'openstack-dev/cookiecutter']
Cloning into 'openstack-dev/cookiecutter'...
...

This indicates Puppet is done and a set of Nodepool scripts is running to cache upstream OpenStack Git repositories on the node and prepare Devstack. Part of the process of preparing Devstack involves downloading images that are used by Devstack for testing. Note that this step takes a long time! Go have a beer or other beverage and work on something else for a couple of hours.
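
Before registering the node in the next section, it is worth a quick check from the master that the jenkins credentials we created earlier can actually reach the new slave. This assumes install_slave.sh created a jenkins account on the slave that authorizes the master’s public key; substitute your slave’s IP address for $SLAVE_IP:

# Run from the Jenkins master; a hostname printed back means SSH connectivity is good
sudo -u jenkins ssh -i /var/lib/jenkins/.ssh/id_rsa jenkins@$SLAVE_IP hostname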

Adding a Slave Node on the Jenkins Master

In order to “register” our slave node with the Jenkins master, we need to create a new node record on the master. First, go to the Jenkins web UI, and then follow these steps:

  1. Click the Manage Jenkins link on the left
  2. Scroll down and click the Manage Nodes link
  3. Click the New Node link on the left:
    manage-nodes
  4. Enter “devstack_slave1” in the Node name textbox
  5. Select the Dumb Slave radio button:
    add-node
  6. Click the OK button
  7. Enter 2 in the Executors textbox
  8. Enter “/home/jenkins/workspaces” in the Remote FS root textbox
  9. Enter “devstack_slave” in the Labels textbox
  10. Enter the IP Address of your slave host or VM in the Host textbox
  11. Select jenkins from the Credentials dropdown:
    new-node-screen
  12. Click the Save button
  13. Click the Log link on the left. The log should show the master connecting to the slave, and at the end of the log should be: “Slave successfully connected and online”:
    slave-log

Test the dsvm-tempest-full Jenkins job

Now we are ready to have our Jenkins slave execute the long-running Jenkins job that uses Devstack to install an OpenStack environment on the Jenkins slave node, and run a set of Tempest tests against that environment. We want to test that the master can successfully run this long-running job before we set the job to be triggered by the upstream Gerrit event stream.

Go to the Jenkins web UI, click on the dsvm-tempest-full link in the jobs listing, and then click the Build Now link. You will notice an executor start up and a link to a newly-running job will appear in the Build History box on the left:

Build History panel in Jenkins

Click on the link to the new job, then click Console Output in the left panel. You should see the job executing, with Bash output showing up on the right:

Manually running the dsvm-tempest-full Jenkins job

Troubleshooting

If you see errors pop up, you will need to address those issues. In my testing, issues generally were around:

  • Firewall/networking issues: Make sure that the Jenkins master node can properly communicate over SSH port 22 to the slave nodes. If you are using virtual machines to run the master or slave nodes, make sure you don’t have any iptables rules that are preventing traffic from master to slave.
  • Missing files like “No file found: /opt/nodepool-scripts/…”: Make sure that the install_slave.sh Bash script completed successfully. This script takes a long time to execute, as it pulls down a bunch of images for Devstack caching.
  • LXC: See above about why you cannot currently use LXC containers for Jenkins slaves that run Devstack
  • Zuul processes borked: In order to have jobs triggered from upstream, both the zuul-server and zuul-merger processes need to be running, connecting to Gearman, and firing job events properly. First, make sure the right processes are running:
    # First, make sure there are **2** zuul-server processes and
    # **1** zuul-merger process when you run this:
    ps aux | grep zuul
    # If there aren't, do this:
    sudo rm -rf /var/run/zuul/*
    sudo service zuul start
    sudo service zuul-merger start
    

    Next, make sure that the Gearman service has registered queues for all the Jenkins jobs. You can do this using telnet (4730 is the default port for Gearman):

    ubuntu@master:~$ telnet 127.0.0.1 4730
    Trying 127.0.0.1...
    Connected to 127.0.0.1.
    Escape character is '^]'.
    status
    build:noop-check-communication:master	0	0	2
    build:dsvm-tempest-full	0	0	1
    build:dsvm-tempest-full:devstack_slave	0	0	1
    merger:merge	0	0	1
    zuul:enqueue	0	0	1
    merger:update	0	0	1
    zuul:promote	0	0	1
    set_description:master	0	0	1
    build:noop-check-communication	0	0	2
    stop:master	0	0	1
    .
    ^]
    
    telnet> quit 
    Connection closed.
    

Enabling the dsvm-tempest-full Job in the Zuul Pipelines

Once you’ve successfully run the dsvm-tempest-full job manually, you should now enable this job in the appropriate Zuul pipelines. To do so, on the Jenkins master node, you will want to edit the etc/zuul/layout.yaml file in your data repository (don’t forget to git commit your changes after you’ve made them and push them to your data repository’s canonical location).

If you used the example layout.yaml from my data repository and you’ve been following along this tutorial series, the projects section of your file will look like this:

projects:
  - name: openstack-dev/sandbox
    check:
      # Remove this after successfully verifying communication with upstream
      # and seeing a posted successful review.
      - noop-check-communication
      # Uncomment this job when you have a jenkins slave running and want to
      # test a full Tempest run within devstack.
      #- dsvm-tempest-full
    gate:
      # Remove this after successfully verifying communication with upstream
      # and seeing a posted successful review.
      - noop-check-communication
      # Uncomment this job when you have a jenkins slave running and want to
      # test a full Tempest run within devstack.
      #- dsvm-tempest-full

To enable the dsvm-tempest-full Jenkins job to run in the check pipeline when a patch is received (or recheck comment added) to the openstack-dev/sandbox project, simply uncomment the line:

      #- dsvm-tempest-full

And then reload Zuul and Zuul-merger:

sudo service zuul reload
sudo service zuul-merger reload

From now on, new patches and recheck comments on the openstack-dev/sandbox project will fire the dsvm-tempest-full Jenkins job on your devstack slave node. :) If your test run was successful, you will see something like this in your Jenkins console for the job run:

\o/ Steve Holt!

And you will note that the patch that triggered your Jenkins job will show a successful comment and a +1 Verified vote:

A comment showing external job successful runs

What Next?

From here, the changes you make to your Jenkins Job configuration files are up to you. The first place to look for ideas is the devstack-vm-gate.sh script. Look near the bottom of that script for a number of environment variables that you can set in order to tinker with what the script will execute.

If you are a Cinder storage vendor looking to test your hardware and associated Cinder driver against OpenStack, you will want to either make changes to the example dsvm-tempest-full job or create a copy of that example job definition and customize it to your needs. You will want to make sure that Cinder is configured to use your storage driver in the cinder.conf file. You may want to create a script that copies most of what the devstack-vm-gate.sh script does, calls the devstack iniset function to configure your storage driver, and then runs devstack and tempest.
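
As a rough sketch of that kind of customization (hypothetical throughout: the backend name, driver class, and IP address below are placeholders, and the real option names come from your vendor’s driver documentation), such a script might use iniset like so:

# Hypothetical sketch: point Cinder at a vendor backend before devstack/tempest run.
CINDER_CONF=/etc/cinder/cinder.conf
source /opt/stack/new/devstack/functions   # provides iniset; path assumes the devstack-gate layout
iniset $CINDER_CONF DEFAULT enabled_backends myvendor
iniset $CINDER_CONF myvendor volume_driver cinder.volume.drivers.myvendor.MyVendorISCSIDriver
iniset $CINDER_CONF myvendor san_ip 192.0.2.10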

Publishing Console and Devstack Logs

Finally, you will want to get the log files that are collected by both Jenkins and the devstack run published to some external site. Folks at Arista have used dropbox.com to do this. I’ll leave it as an exercise for the reader to set this up. Hint: you will want to set the PUBLISH_HOST variable in your data repository’s vars.sh to a host that you have SCP rights to, and uncomment the publishers section in the example dsvm-tempest-full job:

#    publishers:
#      - devstack-logs  # In macros.yaml from os-ext-testing
#      - console-log  # In macros.yaml from os-ext-testing

Final Thoughts

I hope this three-part article series has been helpful for you to understand the upstream OpenStack continuous integration platform, and instructional in helping you set up your own external testing platform using Jenkins, Zuul, Jenkins Job Builder, and Devstack-Gate. Please do let me know if you run into issues. I will post updates to the Troubleshooting section above when I hear from you and (hopefully) help you resolve any problems.

Setting Up an External OpenStack Testing System – Part 1

This post is intended to walk someone through the process of establishing an external testing platform that is linked with the upstream OpenStack continuous integration platform. If you haven’t already, please do read the first article in this series, which discusses the upstream OpenStack CI platform in detail. At the end of that article, you should have all the background information on the tools needed to establish your own linked external testing platform.

What Does an External Test Platform Do?

In short, an external testing platform enables third parties to run tests — ostensibly against an OpenStack environment that is configured with that third party’s drivers or hardware — and report the results of those tests on the code review of a proposed patch. It’s easy to see the benefit of this real-time feedback by taking a look at a code review that shows a number of these external platforms providing feedback. In this screenshot, you can see a number of Verified +1 labels and one Verified -1 label added by external Neutron vendor test platforms on a proposed patch to Neutron:

Verified +1 and -1 labels added by external testing systems on a Neutron patch

Each of these systems, when adding a Verified label to a review, does so by adding a comment to the review. These comments contain links to artifacts from the external testing system’s test run for this proposed patch, as shown here:

Comments added to a review by the vendor testing platforms

The developer submitting a patch can use those links to investigate why their patch has caused test failures to occur for that external test platform.

Why Set Up an External Test Platform?

The benefits of external testing integration with upstream code review are numerous:

A tight feedback loop
The third party gets quick notifications that a proposed patch to the upstream code has caused a failure in their driver or configuration. The tighter the “feedback loop”, the faster fixes can be identified.
Better code coverage
Drivers and plugins that may not be used in the default configuration for a project can be tested with the same rigor and frequency as drivers that are enabled in the upstream devstack VM gate tests. This prevents bitrot and encourages developers to maintain code that is housed in the main source trees.
Increased consistency and standards
Determining a standard set of tests that prove a driver implements the full or partial API of a project means that drivers can be verified to work with a particular release of OpenStack. If you’ve ever had a conversation with a potential deployer of OpenStack who wonders how they know that their choice of storage or networking vendor, or underlying hypervisor, actually works with the version of OpenStack they plan to deploy, then you know why this is a critical thing!

Why might you be thinking about how to set up an external testing platform? Well, a number of OpenStack projects have had discussions already about requirements for vendors to complete integration of their testing platforms with the upstream OpenStack CI platform. The Neutron developer community is ahead of the game, with more than half a dozen vendors already providing linked testing that appears on Neutron code reviews.

The Cinder project also has had discussions around enforcing a policy that any driver that is in the Cinder source tree have tests run on each commit to validate the driver is working properly. Similarly, the Nova community has discussed the same policy for hypervisor drivers in that project’s source tree. So, while this may be old news for some teams, hopefully this post will help vendors that are new to the OpenStack contribution world get integrated quickly and smoothly.

The Tools You Will Need

The components involved in building a simple linked external testing system that can listen to and notify the upstream OpenStack continuous integration platform are as follows:

Jenkins CI
The server that is responsible for executing jobs that run tests for a project
Zuul
A system that configures and manages event pipelines that launch Jenkins jobs
Jenkins Job Builder (JJB)
Makes construction/maintenance of Jenkins job config XML files a breeze
Devstack-Gate and Nodepool Scripts
A collection of scripts that constructs an OpenStack environment from source checkouts

I’ll be covering how to install and configure the above components to build your own testing platform using a set of scripts and Puppet modules. Of course, there are a number of ways that you can install and configure any of these components. You could manually install each component by following the install instructions in its documentation. However, I do not recommend that. The problem with manual installation and configuration is two-fold:

  1. If something goes wrong, you have to re-install everything from scratch. If you haven’t backed up your configuration somewhere, you will have to re-configure everything from memory.
  2. You cannot launch a new configuration or instance of your testing platform easily, since you have to manually set everything up again.

A better solution is to use a configuration management system, such as Puppet, Chef, Ansible or SaltStack to manage the deployment of these components, along with a Git repository to store configuration data. In this article, I will show you how to install an external testing system on multiple hosts or virtual machines using a set of Bash scripts and Puppet modules I have collected into a source repository on GitHub. If you don’t like Puppet or would just prefer to use a different configuration management tool, that’s totally fine. You can look at the Puppet modules in this repo for inspiration (and eventually I will write some Ansible scripts in the OpenStack External Testing project, too).

Preparation

Before I go into the installation instructions, you will need to take care of a few things. Follow these detailed steps and you should be all good.

Getting an Upstream Service Account

In order for your testing platform to post review comments to Gerrit code reviews on openstack.org, you will need to have a service account registered with the OpenStack Infra team. See this link for instructions on getting this account.

In short, you will need to email the OpenStack Infra mailing list an email that includes:

  • The email address to use for the system account (must be different from any other Gerrit account)
  • A short account username that will appear on code reviews
  • (optional) A longer account name or description
  • (optional but encouraged) Include your contact information (IRC handle, your email address, and maybe an alternate contact’s email address) to assist the upstream infrastructure team
  • The public key for an SSH key pair that the service account will use for Gerrit access. Please note that there should be no newlines in the SSH key

Don’t have an SSH key pair for your Gerrit service account? You can create one like so:

ssh-keygen -t rsa -b 1024 -N '' -f gerrit_key

The above will produce the key pair: a pair of files called gerrit_key and gerrit_key.pub. Copy the text of the gerrit_key.pub into the email you send to the OpenStack Infra mailing list. Keep both the files handy for use in the next step.
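
If you want to double-check that the public key you are about to paste really is a single line with no embedded newlines, a quick sanity check along these lines can help:

# The public key should be exactly one line
wc -l gerrit_key.pub
# Print the key, ready to paste into the email
cat gerrit_key.pub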

Create a Git Repository to Store Configuration Data

When we install our external testing platform, the Puppet modules are fed a set of configuration options and files that are specific to your environment, including the SSH private key for the Gerrit service account. You will need a place to store this private configuration data, and the ideal place is a Git repository, since additions and changes to this data will be tracked just like changes to source code.

I created a source repository on GitHub that you can use as an example. Instead of forking the repository, as you might normally do, I recommend just git clone’ing the repository to some local directory and making it your own data repository:

git clone git@github.com:jaypipes/os-ext-testing-data ~/mydatarepo
cd mydatarepo
rm -rf .git
git init .
git add .
git commit -a -m "My new data repository"

Now you’ve got your own data repository to store your private configuration data and you can put it up in some private location somewhere — perhaps in a private organization in GitHub, perhaps on a Git server you have somewhere.
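
For example, assuming you have already created an empty private repository somewhere (the remote URL below is just a placeholder), pushing your data repository to it might look like this:

# Add your private remote (placeholder URL -- substitute your own)
git remote add origin git@github.com:myorg/my-ext-testing-data.git
# Push the initial commit to the private remote
git push -u origin master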

Put Your Gerrit Service Account Private Key Into the Data Repository

The next thing you will want to do is add to your data repository the SSH key pair that you registered with your upstream Gerrit service account in the step above.

If you created a new key pair using the ssh-keygen command above, copy the gerrit_key file into your data repository.

If you did not create a new key pair (you used an existing key pair) or you created a key pair that wasn’t called gerrit_key, simply copy that key pair into the data repository, then open up the file called vars.sh, and change the following line in it:

export UPSTREAM_GERRIT_SSH_KEY_PATH=gerrit_key

And change gerrit_key to the name of your SSH private key.
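
As a concrete sketch, if your existing key pair were called my_gerrit_id (a hypothetical name), the steps might look like this:

# Copy the existing key pair into the data repository (hypothetical file names)
cp ~/.ssh/my_gerrit_id ~/.ssh/my_gerrit_id.pub ~/mydatarepo/
cd ~/mydatarepo
# Point vars.sh at the private key file name
sed -i 's/^export UPSTREAM_GERRIT_SSH_KEY_PATH=.*/export UPSTREAM_GERRIT_SSH_KEY_PATH=my_gerrit_id/' vars.sh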

Set Your Gerrit Account Username

Next, open up the file vars.sh in your data repository (if you haven’t already), and change the following line in it:

export UPSTREAM_GERRIT_USER=jaypipes-testing

And replace jaypipes-testing with your Gerrit service account username.

Set Your Vendor Name in the Test Jenkins Job

Next, open up the file etc/jenkins_jobs/config/projects.yaml in your data repository. Change the following line in it:

  vendor: myvendor

Change myvendor to your organization’s name.

(Optional) Create Your Own Jenkins SSH Key Pair

I have a private/public SSH key pair (named jenkins_key and jenkins_key.pub) in the example data repository. Since I’ve published the private key there, it’s no longer useful as anything other than an example, so you may want to create your own pair:

cd $DATA_DIRECTORY
ssh-keygen -t rsa -b 1024 -N '' -f jenkins_key
git commit -a -m "Changed jenkins key to a new private one"

Save Changes in Your Data Repository

OK, we’re done with the first changes to your data repository and we’re ready to install a Jenkins master node. But first, save your changes and push your commit to wherever you are storing your data repository privately:

git add .
git commit -a -m "Added Gerrit SSH key and username"
git push

Requirements for Nodes

On the nodes (hosts, virtual machines, or LXC containers) that you are going to install Jenkins master and slaves into, you will want to ensure the following:

  • These basic packages are installed (a sample install command follows this list):
    • wget
    • openssl
    • ssl-cert
    • ca-certificates
  • Have the SSH keys you use with GitHub in ~/.ssh/. It also helps to bring over your ~/.ssh/known_hosts and ~/.ssh/config files as well.
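
On an Ubuntu or Debian node, for instance, you might install the basic packages listed above like so:

sudo apt-get update
sudo apt-get install -y wget openssl ssl-cert ca-certificates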

Setting up Your Jenkins Master Node

On the host or virtual machine (or LXC container) you wish to run the Jenkins Master node on, run the following:

git clone $YOUR_DATA_REPO data
wget https://raw.github.com/jaypipes/os-ext-testing/master/puppet/install_master.sh
bash install_master.sh

The above should create an SSL self-signed certificate for Apache to run Jenkins UI with, and then install Jenkins, Jenkins Job Builder, Zuul, Nodepool Scripts, and a bunch of support packages.

Important Note: Since publishing this article, the upstream Zuul system underwent a bit of a refactoring, with the Zuul git-related activities being executed by a separate Zuul worker process called zuul-merger. I’ve updated the Puppet modules in the os-ext-testing repository accordingly, but if you had installed the Jenkins master with Zuul from the Puppet modules before Tuesday, February 18th, 2014, you will need to do the following on your master node to get all reconfigured properly:

# NOTE: This is only necessary if you installed a Jenkins master from the
# os-ext-testing repository before Tuesday, February 18th, 2014!
sudo service zuul stop
sudo rm -rf /var/log/zuul/* /var/run/zuul/*
sudo -i
# As root...
cd /root/config; git pull /root/config
exit
cd os-ext-testing; git pull; cd ../
cp os-ext-testing/puppet/install_master.sh .
bash install_master.sh

Troubleshooting note: There is a bug in the upstream openstack-infra/config project (with a patch submitted) that may cause Puppet to error with the following:

Duplicate declaration: A2mod[rewrite] is already declared in file /home/vagrant/os-ext-testing/puppet/modules/os_ext_testing/manifests/master.pp at line 34; cannot redeclare at /root/config/modules/zuul/manifests/init.pp:236 on node master

To fix this issue, open the /root/config/modules/zuul/manifests/init.pp file and comment out these lines:

  a2mod { 'rewrite':
    ensure => present,
  }
  a2mod { 'proxy':
    ensure => present,
  }
  a2mod { 'proxy_http':
    ensure => present,
  }

When Puppet completes, go ahead and open up the Jenkins web UI, which by default will be at http://$HOST_IP:8080. You will need to enable the Gearman workers that Zuul and Jenkins use to interact. To do this:

  1. Click the `Manage Jenkins` link on the left
  2. Click the `Configure System` link
  3. Scroll down until you see “Gearman Plugin Config”. Check the “Enable Gearman” checkbox.
  4. Click the “Test Connection” button and verify Jenkins connects to Gearman.
  5. Scroll down to the bottom of the page and click `Save`
  6. Note: Darragh O’Reilly noticed that, when he first did this on his machine, the Gearman plugin was not actually enabled (though it was installed). He mentioned that simply restarting the Jenkins service fixed this problem, and the Gearman Plugin Config section then appeared on the Manage Jenkins -> Configure System page.

Once you are done with that, it’s time to load up your Jenkins jobs and start Zuul:

sudo jenkins-jobs --flush-cache update /etc/jenkins_jobs/config/
sudo service zuul start
sudo service zuul-merger start

If you refresh the main Jenkins web UI front page, you should now see two jobs show up:

Jenkins Master Web UI Showing Sandbox Jenkins Jobs Created by JJB

Testing Communication Between Upstream and Your Master

Congratulations. You’ve successfully set up your Jenkins master. Let’s now test connectivity between upstream and our external testing platform using the simple sandbox-noop-check-communication job. By default, I set this Jenkins job to execute on the master node for the openstack-dev/sandbox project [1]. Here is the project configuration in the example data repository’s etc/jenkins_jobs/config/projects.yaml file:

- project:
    name: sandbox
    github-org: openstack-dev
    node: master

    jobs:
        - noop-check-communication
        - dsvm-tempest-full:
            node: devstack_slave

Note that the node is master by default. The sandbox-dsvm-tempest-full Jenkins job is configured to run on a node labeled devstack_slave, but we will cover that later when we bring up our Jenkins slave.

In our Zuul configuration, we have two pipelines: check and gate. There is only a single project listed in the layout.yaml Zuul project configuration file, the openstack-dev/sandbox project:

projects:
    - name: openstack-dev/sandbox
      check:
        - sandbox-noop-check-communication

By default, the only job that is enabled is the sandbox-noop-check-communication Jenkins job. It will run whenever a patchset is created in the upstream openstack-dev/sandbox project, as well as any time someone adds a comment with the words “recheck no bug” or “recheck bug XXXXX”. So, let us create a sample patch to that project and check whether the sandbox-noop-check-communication job fires properly.

Before we do that, let’s tail the Zuul debug log, grepping for the term “sandbox”. This will show messages if communication is working properly.

sudo tail -f /var/log/zuul/debug.log | grep sandbox

OK, now create a simple test patch in sandbox. Do this on your development workstation, not your Jenkins master:

git clone git@github.com:openstack-dev/sandbox /tmp/sandbox
cd /tmp/sandbox
git checkout -b testing-ext
touch mytest
git add mytest
git commit -a -m "Testing comms"
git review

Output should look like so:

jaypipes@cranky:~$ git clone git@github.com:openstack-dev/sandbox /tmp/sandbox
Cloning into '/tmp/sandbox'...
remote: Reusing existing pack: 13, done.
remote: Total 13 (delta 0), reused 0 (delta 0)
Receiving objects: 100% (13/13), done.
Resolving deltas: 100% (4/4), done.
Checking connectivity... done
jaypipes@cranky:~$ cd /tmp/sandbox
jaypipes@cranky:/tmp/sandbox$ git checkout -b testing-ext
Switched to a new branch 'testing-ext'
jaypipes@cranky:/tmp/sandbox$ touch mytest
jaypipes@cranky:/tmp/sandbox$ git add mytest
jaypipes@cranky:/tmp/sandbox$ git commit -a -m "Testing comms"
[testing-ext 51f90e3] Testing comms
 1 file changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 mytest
jaypipes@cranky:/tmp/sandbox$ git review
Creating a git remote called "gerrit" that maps to:
	ssh://jaypipes@review.openstack.org:29418/openstack-dev/sandbox.git
Your change was committed before the commit hook was installed.
Amending the commit to add a gerrit change id.
remote: Processing changes: new: 1, done
remote:
remote: New Changes:
remote:   https://review.openstack.org/73631
remote:
To ssh://jaypipes@review.openstack.org:29418/openstack-dev/sandbox.git
 * [new branch]      HEAD -> refs/publish/master/testing-ext

Keep an eye on your tail’d Zuul debug log file. If all is working, you should see something like this:

2014-02-14 16:08:51,437 INFO zuul.Gerrit: Updating information for 73631,1
2014-02-14 16:08:51,629 DEBUG zuul.Gerrit: Change  status: NEW
2014-02-14 16:08:51,630 DEBUG zuul.Scheduler: Adding trigger event:
2014-02-14 16:08:51,630 DEBUG zuul.Scheduler: Done adding trigger event:
2014-02-14 16:08:51,630 DEBUG zuul.Scheduler: Run handler awake
2014-02-14 16:08:51,631 DEBUG zuul.Scheduler: Fetching trigger event
2014-02-14 16:08:51,631 DEBUG zuul.Scheduler: Processing trigger event
2014-02-14 16:08:51,631 DEBUG zuul.IndependentPipelineManager: Starting queue processor: check
2014-02-14 16:08:51,631 DEBUG zuul.IndependentPipelineManager: Finished queue processor: check (changed: False)
2014-02-14 16:08:51,631 DEBUG zuul.DependentPipelineManager: Starting queue processor: gate
2014-02-14 16:08:51,631 DEBUG zuul.DependentPipelineManager: Finished queue processor: gate (changed: False)

If you go to the code review in Gerrit (the link that was output after you ran git review), you will see that your Gerrit testing account has added a +1 Verified vote to the code review:

Successful communication between upstream and our external system

Congratulations. You now have an external testing platform that is receiving events from the upstream Gerrit system, triggering Jenkins jobs on your master Jenkins server, and writing reviews back to the upstream Gerrit system. The next article goes over adding a Jenkins slave to your system, which is necessary to run real Jenkins jobs that run devstack-based gate tests. Please do let me know what you think of both this article and the source repository of scripts to set things up. I’m eager for feedback and critique. :)

[1]— The OpenStack Sandbox project is a project that can be used for testing the integration of external testing systems with upstream. By creating a patch against this project, you can trigger the Jenkins jobs that are created during this tutorial.

Understanding the OpenStack CI System

This post describes in detail the upstream OpenStack continuous integration platform. In the process, I’ll be describing the code flow in the upstream system — from the time the contributor submits a patch to Gerrit, all the way through the creation of a devstack environment in a virtual machine, the running of the Tempest test suite against the devstack installation, and finally the reporting of test results and archival of test artifacts. Hopefully, with a good understanding of how the upstream tooling works, setting up your own linked external testing platform will be easier.

Some History and Concepts

Over the past four years, there has been a steady evolution in the way that the source code of OpenStack projects is tested and reviewed. I remember when we used Bazaar for source control and Launchpad merge proposals for code review. There was no automated or continuous testing to speak of in those early days, which put pressure on core reviewers to do testing of proposed patches locally. There was also no standardized integration test suite, so often a change in one project would inadvertently break another project.

Thanks to the work of many contributors, particularly those patient souls in the OpenStack Infrastructure team, today there is a robust platform supporting continuous integration testing for OpenStack and Stackforge projects. At the center of this platform are the Jenkins CI servers, the Gerrit git and patch review server, and the Zuul gating system.

The Code Review System

When a contributor submits a patch to one of the OpenStack projects, they push their code to the git server managed by Gerrit running on review.openstack.org. Typically, contributors use the git-review Git plugin, which simplifies submitting to a git server managed by Gerrit. Gerrit controls which users or groups are allowed to propose code, merge code, and administer code repositories under its management. When a contributor pushes code to review.openstack.org, Gerrit creates a Changeset representing the proposed code. The original submitter and any other contributors can push additional amendments to that Changeset, and Gerrit collects all of the changes into the Changeset record. Here is a screenshot of a Changeset under review. You can see a number of patches (changes) listed in the review screen. Each of those patches was an amendment to the original commit.

Individual patches amend the changeset

For each patch in Gerrit, there are three sets of “labels” that may be applied to the patch. Anyone can comment on a Changeset and/or review the code. A review is shown on the patch in the Code-Review column in the patch “labels matrix”:

The “label matrix” on a Gerrit patch

Non-core team members may give the patch a Code-Review label of +1 (Looks good to me), 0 (No strong opinion), or -1 (I would prefer you didn’t merge this). Core team members can give any of those values, plus +2 (Looks good to me, approved) and -2 (Do not submit).

The other columns in the label matrix are Verified and Approved. Only non-interactive users of Gerrit, such as Jenkins, are allowed to add a Verified label to a patch. The external testing platform you will set up is one of these non-interactive users. The value of the Verified label will be +1 (check pipeline tests passed), -1 (check pipeline tests failed), +2 (gate pipeline tests passed), or -2 (gate pipeline tests failed).

Only members of the OpenStack project’s core team can add an Approved label to a patch. It is either a +1 (Approved) value or not, appearing as a check mark in the Approved column of the label matrix:

An approved patch.

Continuous Integration Testing

Continuous integration (CI) testing is the act of running tests that validate a full application environment on a continual basis — i.e. when any change is proposed to the application. Typically, when talking about CI, we are referring to tests that are run against a full, real-world installation of the project. This type of testing, called integration testing, ensures that proposed changes to one component do not cause failures in other components. This is especially important for complex multi-project systems like OpenStack, with non-trivial dependencies between subsystems.

When code is pushed to Gerrit, a series of jobs are triggered that run a series of tests against the proposed code. Jenkins is the server that executes and manages these jobs. It is a Java application with an extensible architecture that supports plugins that add functionality to the base server.

Each job in Jenkins is configured separately. Behind the scenes, Jenkins stores this configuration information in an XML file in its data directory. You may manually edit a Jenkins job as an administrator in Jenkins. However, in a testing platform as large as the upstream OpenStack CI system, doing so manually would be virtually impossible and fraught with errors. Luckily, there is a helper tool called Jenkins Job Builder (JJB) that constructs these XML configuration files after reading a set of YAML files and job templating rules. We will describe JJB later in the article.
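
If you are curious what the generated XML looks like, JJB can render job configurations locally without touching a Jenkins server. A quick sketch, assuming JJB is installed and your job definitions live in /etc/jenkins_jobs/config/:

# Render all job definitions to XML files in /tmp/jjb-out for inspection
jenkins-jobs test /etc/jenkins_jobs/config/ -o /tmp/jjb-out
ls /tmp/jjb-out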

The “Gate”

When we talk about “the gate”, we are talking about the process by which code is kept out of a set of source code branches if certain conditions are not met.

OpenStack projects use a method of controlling merges into certain branches of their source trees called the Non-Human Gatekeeper model [1]. Gerrit (the non-human) is configured to allow merges by users in a group called “Non-Interactive Users” to the master and stable branches of git repositories under its control. The upstream main Jenkins CI server, as well as Jenkins CI systems running at third party locations, are the users in this group.

So, how do these non-interactive users actually decide whether to merge a proposed patch into the target branch? Well, there is a set of tests (different for each project) — unit, functional, integration, upgrade, style/linting — that is marked as “gating” that particular project’s source trees. For most of the OpenStack projects, there are unit tests (run in a variety of different supported versions of Python) and style checker tests for HACKING and PEP8 compliance. These unit and style tests are run in Python virtualenvs managed by the tox testing utility.
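
To get a feel for what the unit and style gate jobs actually do, you can run the same tox environments locally from a project source checkout. A rough sketch (the exact environment names can vary slightly from project to project):

# Run the PEP8/HACKING style checks locally
tox -e pep8
# Run the Python 2.7 unit tests locally
tox -e py27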

In addition to the Python unit and style tests, there are a number of integration tests that are executed against full installations of OpenStack. The integration tests are simply subsets of the Tempest integration test suite. Finally, many projects also include upgrade and schema migration tests in their gate tests.

How Upstream Testing Works

Graphically, the upstream continuous integration gate testing system works like this:

gerrit-zuul-jenkins-flow

We step through this event flow in detail below, referencing the numbered steps in bold.

The Gerrit Event Stream and Zuul

After a contributor has pushed (1a) a new patch to a changeset or a core team member has reviewed the patch and added an Approved +1 label (1b), Gerrit pushes out a notification event to its event stream (2). This event stream can have a number of subscribers, including the Gerrit Jenkins plugin and Zuul. Zuul was developed to manage the many complex graphs of interdependent branch merge proposals in the upstream system. It monitors in-progress jobs for a set of related patches and will pre-emptively cancel any dependent test jobs that would not succeed due to a failure in a dependent patch [2].

In addition to this dependency monitoring, Zuul is responsible for constructing the pipelines of jobs that should be executed on various events. One of these pipelines is called the “gate” pipeline, appropriately named for the set of jobs that must succeed in order for a proposed patch to be merged into a target branch.

Zuul’s pipelines are configured in a single file called layout.yaml in the OpenStack-Infra config project. Here’s a snippet from that file that constructs the gate pipeline:

  - name: gate
    description: Changes that have been approved by core developers...
    failure-message: Build failed. For information on how to proceed...
    manager: DependentPipelineManager
    precedence: low
    trigger:
      gerrit:
        - event: comment-added
          approval:
            - approved: 1
        - event: comment-added
          comment_filter: (?i)^\s*reverify( (?:bug|lp)[\s#:]*(\d+))\s*$
    start:
      gerrit:
        verified: 0
    success:
      gerrit:
        verified: 2
        submit: true
    failure:
      gerrit:
        verified: -2

Zuul listens to the Gerrit event stream (3), and matches the type of event to one or more pipelines (4). The matching conditions for the gate pipeline are configured in the trigger:gerrit: section of the YAML snippet above:

    trigger:
      gerrit:
        - event: comment-added
          approval:
            - approved: 1
        - event: comment-added
          comment_filter: (?i)^\s*reverify( (?:bug|lp)[\s#:]*(\d+))\s*$

The above indicates that Zuul should fire the gate pipeline when it sees reviews with an Approved +1 label, and any comment to the review that contains “reverify” with or without a bug identifier. Note that there is a similar pipeline that is fired when a new patchset is created or when a review comment is made with the word “recheck”. This pipeline is called the check pipeline. Look in the layout.yaml file for the configuration of the check pipeline.

Once the appropriate pipeline is matched, Zuul executes (5) that particular pipeline for the project that had a patch proposed.

“But wait, hold up…”, you may be asking yourself, “how does Zuul know which Jenkins jobs to execute for a particular project and pipeline?” Great question! :)

Also in the layout.yaml file, there is a section that configures which Jenkins jobs should be run for each project. Let’s take a look at the configuration of the gate pipeline for the Cinder project:

  - name: openstack/cinder
    template:
      - name: python-jobs
...snip...
    gate:
      - gate-cinder-requirements
      - gate-tempest-dsvm-full
      - gate-tempest-dsvm-postgres-full
      - gate-tempest-dsvm-neutron
      - gate-tempest-dsvm-large-ops
      - gate-tempest-dsvm-neutron-large-ops
      - gate-grenade-dsvm

Each of the lines in the gate: section indicates a specific Jenkins job that should be run in the gate pipeline for Cinder. In addition, there is the python-jobs item in the template: section. Project templates are a way that Zuul consolidates configuration of many similar jobs into a simple template configuration. The project template definition for python-jobs looks like this (still in layout.yaml):

project-templates:
  - name: python-jobs
...snip...
    gate:
      - 'gate-{name}-docs'
      - 'gate-{name}-pep8'
      - 'gate-{name}-python26'
      - 'gate-{name}-python27'

So, when determining which Jenkins jobs should be executed for a particular pipeline, Zuul sees the python-jobs project template in the Cinder configuration and expands it to execute the following Jenkins jobs:

  • gate-cinder-docs
  • gate-cinder-pep8
  • gate-cinder-python26
  • gate-cinder-python27

Jenkins Job Creation and Configuration

I previously mentioned that the configuration of an individual Jenkins job is stored in a config.xml file in the Jenkins data directory. Now, at last count, the upstream OpenStack Jenkins CI system has just shy of 2,000 jobs. It would be virtually impossible to manage the configuration of so many jobs using human-based processes. To solve this dilemma, the Jenkins Job Builder (JJB) python tool was created. JJB consumes YAML files that describe both individual Jenkins jobs as well as templates for parameterized Jenkins jobs, and writes the config.xml files for all Jenkins jobs that are produced from those templates. Important: Note that Zuul does not construct Jenkins jobs. JJB does that. Zuul simply configures which Jenkins jobs should run for a project and a pipeline.

There is a master projects.yaml file in the same directory that lists the “top-level” definitions of jobs for all projects, and it is in this file that many of the variables used in job template instantiation are defined (including the {name} variable, which corresponds to the name of the project).

When JJB constructs the set of all Jenkins jobs, it reads the projects.yaml file, and for each project, it sees the “name” attribute of the project, and substitutes that name attribute value wherever it sees {name} in any of the jobs that are defined for that project. Let’s take a look at the Cinder project’s definition in the projects.yaml file here:

- project:
    name: cinder
    github-org: openstack
    node: bare-precise
    tarball-site: tarballs.openstack.org
    doc-publisher-site: docs.openstack.org

    jobs:
      - python-jobs
      - python-grizzly-bitrot-jobs
      - python-havana-bitrot-jobs
      - openstack-publish-jobs
      - gate-{name}-pylint
      - translation-jobs

You will note one of the items in the jobs section is called python-jobs. This is not a single Jenkins job, but rather a job group. A job group definition is merely a list of jobs or job templates. Let’s take a look at the definition of the python-jobs job group:

- job-group:
    name: python-jobs
    jobs:
      - '{name}-coverage'
      - 'gate-{name}-pep8'
      - 'gate-{name}-python26'
      - 'gate-{name}-python27'
      - 'gate-{name}-python33'
      - 'gate-{name}-pypy'
      - 'gate-{name}-docs'
      - 'gate-{name}-requirements'
      - '{name}-tarball'
      - '{name}-branch-tarball'

Each of the items listed in the jobs section of the python-jobs job group definition above is a job template. Job templates are expanded in the same way that Zuul project templates and JJB job groups are. Let’s take a look at one such job template in the list above, called gate-{name}-python27.

(Hint: all Jenkins jobs for any OpenStack or Stackforge project are described in the OpenStack-Infra Config project’s modules/openstack_project/files/jenkins_job_builder/config/ directory).

The python-jobs.yaml file in the modules/openstack_project/files/jenkins_job_builder/config directory contains the definition of common Python project Jenkins job templates. One of those job templates is gate-{name}-python27:

- job-template:
    name: 'gate-{name}-python27'
... snip ...
    builders:
      - gerrit-git-prep
      - python27:
          github-org: '{github-org}'
          project: '{name}'
      - assert-no-extra-files

    publishers:
      - test-results
      - console-log

    node: '{node}'

Looking through the above job template definition, you will see a section called “builders“. The builders section of a job template lists (in sequential order of expected execution) the executable sections or scripts of the Jenkins job. The first executable section in the gate-{name}-python27 job template is called “gerrit-git-prep“. This executable section is defined in macros.yaml, which contains a number of commonly-run scriptlets. Here’s the entire gerrit-git-prep macro definition:

- builder:
    name: gerrit-git-prep
    builders:
      - shell: "/usr/local/jenkins/slave_scripts/gerrit-git-prep.sh https://review.openstack.org http://zuul.openstack.org git://git.openstack.org"

So, gerrit-git-prep is simply executing a Bash script called “gerrit-git-prep.sh” that is stored in the /usr/local/jenkins/slave_scripts/ directory. Let’s take a look at that file. You can find it in the /modules/jenkins/files/slave_scripts/ directory [3] in the same OpenStack Infra Config project:

#!/bin/bash -e
 
GERRIT_SITE=$1
ZUUL_SITE=$2
GIT_ORIGIN=$3
 
# ... snip ...
 
set -x
if [[ ! -e .git ]]
then
    ls -a
    rm -fr .[^.]* *
    if [ -d /opt/git/$ZUUL_PROJECT/.git ]
    then
        git clone file:///opt/git/$ZUUL_PROJECT .
    else
        git clone $GIT_ORIGIN/$ZUUL_PROJECT .
    fi
fi
git remote set-url origin $GIT_ORIGIN/$ZUUL_PROJECT
 
# attempt to work around bugs 925790 and 1229352
if ! git remote update
then
    echo "The remote update failed, so garbage collecting before trying again."
    git gc
    git remote update
fi
 
git reset --hard
if ! git clean -x -f -d -q ; then
    sleep 1
    git clean -x -f -d -q
fi
 
if [ -z "$ZUUL_NEWREV" ]
then
    git fetch $ZUUL_SITE/p/$ZUUL_PROJECT $ZUUL_REF
    git checkout FETCH_HEAD
    git reset --hard FETCH_HEAD
    if ! git clean -x -f -d -q ; then
        sleep 1
        git clean -x -f -d -q
    fi
else
    git checkout $ZUUL_NEWREV
    git reset --hard $ZUUL_NEWREV
    if ! git clean -x -f -d -q ; then
        sleep 1
        git clean -x -f -d -q
    fi
fi
 
if [ -f .gitmodules ]
then
    git submodule init
    git submodule sync
    git submodule update --init
fi

The purpose of the script above is simple: Check out the source code of the proposed Gerrit changeset and ensure that the source tree is clean of any cruft from a previous run of a Jenkins job that may have run in the same Jenkins workspace. The concept of a workspace is important. When Jenkins runs a job, it must execute that job from within a workspace. The workspace is really just an isolated shell environment and filesystem directory that has a set of shell variables export’d inside it that indicate a variety of important identifiers, such as the Jenkins job ID, the name of the source code project that has triggered a job, the SHA1 git commit ID of the particular proposed changeset that is being tested, etc [4].
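
As an illustration, a builder scriptlet can inspect a few of these exported variables. The sketch below simply echoes some of the common ones (exactly which variables are present depends on how the job was triggered):

#!/bin/bash -xe
# Print a few of the identifiers exported into the Jenkins workspace
echo "Project under test:   $ZUUL_PROJECT"
echo "Zuul ref to test:     $ZUUL_REF"
echo "Change and patchset:  $ZUUL_CHANGE / $ZUUL_PATCHSET"
echo "Jenkins build number: $BUILD_NUMBER"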

The next builder in the job template is the “python27” builder, which has two variables injected into it:

      - python27:
          github-org: '{github-org}'
          project: '{name}'

The github-org variable is populated with the value of the existing {github-org} variable, and the project variable is populated with the value of the {name} variable. Here’s how the python27 builder is defined (in macros.yaml):

- builder:
    name: python27
    builders:
      - shell: "/usr/local/jenkins/slave_scripts/run-unittests.sh 27 {github-org} {project}"

Again, just a wrapper around another Bash script, called run-unittests.sh in the /usr/local/jenkins/slave_scripts directory. Here’s what that script looks like:

version=$1
org=$2
project=$3
 
# ... snip ...
 
venv=py$version
 
# ... snip ...
 
source /usr/local/jenkins/slave_scripts/select-mirror.sh $org $project
 
tox -e$venv
result=$?
 
echo "Begin pip freeze output from test virtualenv:"
echo "======================================================================"
.tox/$venv/bin/pip freeze
echo "======================================================================"
 
if [ -d ".testrepository" ] ; then
# ... snip ...
    .tox/$venv/bin/python /usr/local/jenkins/slave_scripts/subunit2html.py ./subunit_log.txt testr_results.html
    gzip -9 ./subunit_log.txt
    gzip -9 ./testr_results.html
# ... snip ...
fi
 
# ... snip ...

In short, for the Python 2.7 builder, the above runs the command tox -epy27 and then runs a prettifying script and gzips up the results of the unit test run. And that’s really the meat of the Jenkins job. We will discuss the publishing of the job artifacts a little later in this article, but if you’ve gotten this far, you have delved deep into the mines of the OpenStack CI system. Congratulations!

Devstack-Gate and Running Tempest Against a Real Environment

OK, so unit tests running in a simple Jenkins slave workspace are one thing. But what about Jenkins jobs that run integration tests against a full set of OpenStack endpoints, interacting with real database and message queue services? For these types of Jenkins jobs, things are more complicated. Yes, I know. You probably think things have been complicated up until this point, and you’re right! But the simple unit test jobs above are just the tip of the proverbial iceberg when it comes to the OpenStack CI platform.

For these complex Jenkins jobs, an additional set of tools are added to the mix:

  • Nodepool — Provides virtual machine instances to Jenkins masters for running complex, isolation-sensitive Jenkins jobs
  • Devstack-Gate — Scripts that create an OpenStack environment with Devstack, run tests against that environment, and archive logs and results

Assignment of a Node to Run a Job

Different Jenkins jobs require different workspaces, or environments, in which to run. For basic unit or style-checking test jobs, like the gate-{name}-python27 job template we dug into above, not much more is needed than a tox-managed virtualenv running in a source checkout of the project with a proposed change. However, for Jenkins jobs that run a series of integration tests against a full OpenStack installation, a workspace with significantly more resources and isolation is necessary. For these latter types of jobs, the upstream CI platform uses a pool of virtual machine instances. This pool of virtual machine instances is managed by a tool called nodepool. The virtual machines run in both HP Cloud and Rackspace Cloud, who graciously donate these instances for the upstream CI system to use. You can see the configuration of the Nodepool-managed set of instances here.

Instances that are created by Nodepool run Jenkins slave software, so that they can communicate with the upstream Jenkins CI master servers. A script called prepare_node.sh runs on each Nodepool instance. This script just git clones the OpenStack Infra config project to the node, installs Puppet, and runs a Puppet manifest that sets up the node based on the type of node it is. There are bare nodes, nodes that are meant to run Devstack to install OpenStack, and nodes specific to the Triple-O project. The node type that we will focus on here is the node that is meant to run Devstack. The script that runs to prepare one of these nodes is prepare_devstack_node.sh, which in turn calls prepare_devstack.sh. This script caches all of the repositories needed by Devstack, along with Devstack itself, in a workspace cache on the node. This workspace cache is used to enable fast reset of the workspace that is used during the running of a Jenkins job that uses Devstack to construct an OpenStack environment.

Devstack-Gate

The Devstack-Gate project is a set of scripts that are executed by certain Jenkins jobs that need to run integration or upgrade tests against a realistic OpenStack environment. Going back to the Cinder project configuration in the Zuul layout.yaml file:

  - name: openstack/cinder
    template:
      - name: python-jobs
... snip ...
    gate:
      - gate-cinder-requirements
      - gate-tempest-dsvm-full
      - gate-tempest-dsvm-postgres-full
      - gate-tempest-dsvm-neutron
      - gate-tempest-dsvm-large-ops
      - gate-tempest-dsvm-neutron-large-ops
      - gate-grenade-dsvm
... snip ...

Note the gate-tempest-dsvm-full line. That Jenkins job template is one such job that needs an isolated workspace with a full OpenStack environment running in it. Note that “dsvm” stands for “Devstack virtual machine”.

Let’s take a look at the JJB configuration of the gate-tempest-dsvm-full job:

- job-template:
    name: '{pipeline}-tempest-dsvm-full{branch-designator}'
    node: '{node}'
... snip ...
    builders:
      - devstack-checkout
      - shell: |
          #!/bin/bash -xe
          export PYTHONUNBUFFERED=true
          export DEVSTACK_GATE_TIMEOUT=180
          export DEVSTACK_GATE_TEMPEST=1
          export DEVSTACK_GATE_TEMPEST_FULL=1
          export BRANCH_OVERRIDE={branch-override}
          if [ "$BRANCH_OVERRIDE" != "default" ] ; then
              export OVERRIDE_ZUUL_BRANCH=$BRANCH_OVERRIDE
          fi
          cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh
          ./safe-devstack-vm-gate-wrap.sh
      - link-logs

    publishers:
      - devstack-logs
      - console-log

The devstack-checkout builder is simply a Bash script macro that looks like this:

- builder:
    name: devstack-checkout
    builders:
      - shell: |
          #!/bin/bash -xe
          if [[ ! -e devstack-gate ]]; then
              git clone git://git.openstack.org/openstack-infra/devstack-gate
          else
              cd devstack-gate
              git remote set-url origin git://git.openstack.org/openstack-infra/devstack-gate
              git remote update
              git reset --hard
              if ! git clean -x -f ; then
                  sleep 1
                  git clean -x -f
              fi
              git checkout master
              git reset --hard remotes/origin/master
              if ! git clean -x -f ; then
                  sleep 1
                  git clean -x -f
              fi
              cd ..
          fi

All the above does is git clone the devstack-gate repository into the Jenkins workspace or, if the repository already exists, check out the latest from the master branch.

Returning to our gate-tempest-dsvm-full JJB job template, we see the remaining part of the builder is a Bash scriptlet like so:

          #!/bin/bash -xe
          export PYTHONUNBUFFERED=true
          export DEVSTACK_GATE_TIMEOUT=180
          export DEVSTACK_GATE_TEMPEST=1
          export DEVSTACK_GATE_TEMPEST_FULL=1
          export BRANCH_OVERRIDE={branch-override}
          if [ "$BRANCH_OVERRIDE" != "default" ] ; then
              export OVERRIDE_ZUUL_BRANCH=$BRANCH_OVERRIDE
          fi
          cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh
          ./safe-devstack-vm-gate-wrap.sh

Not all that complicated. It exports some environment variables and copies the devstack-vm-gate-wrap.sh script out of the devstack-gate repo that was clone’d in the devstack-checkout macro to the work directory and then runs that script.

The devstack-vm-gate-wrap.sh script is responsible for setting even more environment variables and then calling the devstack-vm-gate.sh script, which is where the real magic happens.

Construction of OpenStack Environment with Devstack

The devstack-vm-gate.sh script is responsible for constructing a full OpenStack environment and running integration tests against that environment. To construct this OpenStack environment, it uses the excellent Devstack project. Devstack is an elaborate series of Bash scripts and functions that clones each OpenStack project’s source code into /opt/stack/new/$project [5], runs python setup.py install in each project checkout, and starts each relevant OpenStack service (e.g. nova-compute, nova-scheduler, etc.) in a separate Linux screen session.

Devstack’s creation script (stack.sh) is called from the script after creating the localrc file that stack.sh uses when constructing the Devstack environment.
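
If you have never looked at a localrc before, it is just a Bash file of variable assignments that stack.sh sources. A minimal hand-written sketch (not the far more elaborate localrc that the gate scripts generate) might look like this, with placeholder passwords:

# Minimal localrc sketch -- all values below are placeholders
ADMIN_PASSWORD=supersecret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=some-service-token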

Execution of Integration Tests Against an OpenStack Environment

Once the OpenStack environment is constructed, the devstack-vm-gate.sh script continues on to run a series of integration tests:

    cd $BASE/new/tempest
    if [[ "$DEVSTACK_GATE_TEMPEST_ALL" -eq "1" ]]; then
        echo "Running tempest all test suite"
        sudo -H -u tempest tox -eall -- --concurrency=$TEMPEST_CONCURRENCY
        res=$?
    elif [[ "$DEVSTACK_GATE_TEMPEST_FULL" -eq "1" ]]; then
        echo "Running tempest full test suite"
        sudo -H -u tempest tox -efull -- --concurrency=$TEMPEST_CONCURRENCY
        res=$?

You will note that the $DEVSTACK_GATE_TEMPEST_FULL Bash environment variable was set to “1” in the gate-tempest-dsvm-full Jenkins job builder scriptlet.

sudo -H -u tempest tox -efull triggers the execution of Tempest’s integration test suite. Tempest is the collection of canonical OpenStack integration tests that are used to validate that OpenStack APIs work according to spec and that patches to one OpenStack service do not inadvertently cause failures in another service.
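
If you want to run just a slice of Tempest against your own Devstack environment, the all tox environment shown in the scriptlet above passes its positional arguments through as a test selection, so something along these lines should work from your Tempest checkout:

# Run only the Tempest compute API tests (the selection regex is passed to the test runner)
tox -eall -- tempest.api.compute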

If you are curious what actual commands are run, you can check out the tox.ini file in Tempest:

[testenv:full]
# The regex below is used to select which tests to run and exclude the slow tag:
# See the testrepostiory bug: https://bugs.launchpad.net/testrepository/+bug/1208610
commands =
  bash tools/pretty_tox.sh '(?!.*\[.*\bslow\b.*\])(^tempest\.(api|scenario|thirdparty|cli)) {posargs}'

In short, the above runs the Tempest API, scenario, CLI, and thirdparty tests.

Archival of Test Artifacts

The final piece of the puzzle is archiving all of the artifacts from the Jenkins job execution. These artifacts include log files from each individual OpenStack service running in Devstack’s screen sessions, the results of the Tempest test suite runs, as well as echo’d output from the devstack-vm-gate* scripts themselves.

These artifacts are gathered together by the devstack-logs and console-log JJB publisher macros:

- publisher:
    name: console-log
    publishers:
      - scp:
          site: 'static.openstack.org'
          files:
            - target: 'logs/$LOG_PATH'
              copy-console: true
              copy-after-failure: true


- publisher:
    name: devstack-logs
    publishers:
      - scp:
          site: 'static.openstack.org'
          files:
            - target: 'logs/$LOG_PATH'
              source: 'logs/**'
              keep-hierarchy: true
              copy-after-failure: true

Conclusion

I hope this article has helped you understand a bit more about how the OpenStack continuous integration platform works. We’ve stepped through the flow of events through the various components of the platform, including which events trigger which actions in each component. You should now have a good idea of how the various parts of the upstream CI infrastructure are configured and where to look in the source code for more information.

The next article in this series discusses how to construct your own external testing platform that is linked with the upstream OpenStack CI platform. Hopefully, this article will provide you most of the background information you need to understand the steps and tools involved in that external testing platform construction.


[1]— The link describes and illustrates the non-human gatekeeper model with Bazaar, but the same concept is applicable to Git. See the OpenStack GitWorkflow pages for an illustration of the OpenStack specific model.
[2]— Zuul really is a pretty awesome bit of code kit. Jim Blair, the author, does an excellent job of explaining the merge proposal dependency graph and how Zuul can “trim” dead-end branches of the dependency graph in the Zuul documentation.
[3]— Looking for where a lot of the “magic” in the upstream gate happens? Take an afternoon to investigate the scripts in this directory. :)
[4]— Gerrit Jenkins plugin and Zuul export a variety of workspace environment variables into the Jenkins jobs that they trigger. If you are curious what these variables are, check out the Zuul documentation on parameters.
[5]— The reason the projects are installed into /opt/stack/new/$project is because the current HEAD of the target git branch for the project is installed into /opt/stack/old/$project. This is to allow an upgrade test tool called Grenade to test upgrade paths.

Pushing revisions to a Gerrit code review

Martin Stoyanov recently asked an excellent question about the proper way to push revisions to a changeset in Gerrit. This is a very common question that folks new to Gerrit or Git have, and I think it deserves its own post.

When you do the first push of a local working branch to Gerrit, the act of pushing your code creates a Gerrit changeset. The changeset can be reviewed, and in the process of doing that review, it’s common for reviewers to request that the submitter make some changes to the code. Sometimes these changes are stylistic or cosmetic. Other times, the requested modifications can be extensive.

How you handle making the requested modifications and submitting those changes back to Gerrit depends on a few things:

  1. Are the changes requested mostly stylistic or cosmetic?
  2. Are the changes requested going to provide additional functionality that is dependent on the existing changeset?
  3. Are the changes requested going to provide additional functionality that is independent of the existing changeset?

Depending on the answers to the above questions, you should either amend the existing changeset commit, push a new commit to Gerrit from the same local branch, or push a new commit from a new local branch. Here are some quick guidelines to help you decide:

The Changes Requested Are Cosmetic or Style-related

When a reviewer is providing some stylistic advice or offering suggestions for cosmetic changes or cleanups, you should amend the original commit. Do so like this:

# After making the requested changes...
git commit -a --amend
# Modify the commit message, if necessary, and save your editor
git review

While this looks fairly simple (and it is…), many folks make a fatal mistake when they modify the commit message and add sections that describe the “sausage-making” involved in the cleanups. DO NOT do this. It’s not necessary. Avoid adding any lines to the commit message that look like this:

  • “Cleaned up whitespace”
  • “DRY’d up some stuff based on review comments”
  • “Fixed typos found during reviews”

If all you did was correct typos and whitespace, simply leave the commit message as it was originally. After the call to git review, you will see a new patchset appear in the original code review. This is expected. The changeset is still viewed by Gerrit and reviewers as a single changeset, and reviewers may even select “Patchset 1” from the “Old Version History” dropdown instead of “Base” in order to see only the changes made in this last amended commit.

The Changes Requested Are Extensive and Depend on Original Commit

If a reviewer has asked for modifications to your original code, and the requested modifications are fairly extensive and depend on the code in your original commit, you have four choices:

  • Amend the original commit to include all new changes
  • Amend the original commit for some things, push those changes to the original commit, make additional changes in the same local branch, git commit those additional changes and git review
  • Make additional changes, git commit those additional changes and git review
  • Lobby for your original commit to be accepted as-is, then when your change is accepted and merged into master, then create a new branch from master and push the additional changes in a new changeset

Whenever you do not amend a commit and instead issue a new call to git review, you will create a dependent changeset. Gerrit will assign a new Change-Id to the new changeset, but understands that the commit logically follows your original changeset’s code. If you go to the code review screen of your newly-created changeset, you will see your original changeset referenced in the “Dependencies” section. Below, you can see a screenshot of a changeset that is part of a “dependency chain”. Another patchset is dependent on this patchset, and this patchset is dependent on yet another patchset.
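
For clarity, here is roughly what creating such a dependent changeset looks like from the same local topic branch (the file name and commit message below are just placeholders):

# Still on the same local topic branch, with the original commit underneath
vim some_new_module.py    # placeholder edit that builds on the original change
git add some_new_module.py
git commit -m "Add follow-on functionality (placeholder message)"
git review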

Changeset showing a dependent changeset and a changeset dependent on this one (chained dependent patchsets)

It’s best to avoid long chains of dependent patchsets. The reason is that if a reviewer requests changes to one of the changesets at the “bottom” of the dependency chain, the entire chain of changesets (even changesets that are already approved, like the one shown above) will be held up from going through the gate tests.

The Changes Requested Are Extensive but are Independent of the Original Commit

If a reviewer has requested extensive changes, but points out that the changes they want made are actually independent of the changes in your original commit, the reviewers will generally ask the original committer to wait until this changeset is merged and create a new branch for the additional work. Normally, depending on the extent of the requested changes, reviewers will insist that the submitter create a new bug or blueprint on Launchpad to keep track of the additional work they feel is needed.

Conclusion

It’s up to the discretion of core reviewers and the original submitter to work out which of the above solutions works best for the particular changeset. Each changeset introduces code and functionality that must be treated differently, and changesets from one submitter may be dealt with differently than others. However, to keep things simple for yourself and upstream reviewers, it’s best to follow this simple advice:

  1. Prefer to amend the original commit. In most cases, this is the appropriate solution to push revisions to Gerrit.
  2. Don’t include sausage-making comments in the commit message.
  3. Prefer free-standing changesets to long chains of dependent patches.
  4. Ask reviewers what their preferences are.

Follow those guidelines and you’ll keep yourself out of the weeds. For more detailed information, including strategies for handling updates to your code that is dependent on another branch of code that gets updated, see the excellent OpenStack GerritWorkflow documentation.

New Launchpad project for tracking bugs on Chef cookbooks

Yesterday, I created a Launchpad project to be used for tracking bugs and feature requests for the new OpenStack Chef cookbooks housed on Stackforge. Please do file bugs as you encounter issues with any of the cookbooks or the example OpenStack Chef Repository.

You can check out the current list of open bugs and, if you’re in a contributory mood, give a shot at fixing one of them. :)

Keep in mind that when you push code to Gerrit for a review, you can let Gerrit know that your patch fixes a bug by putting in a simple tag in the commit message:

fixes: lp XXXXXX

where XXXXXX is simply the bug number on Launchpad.

When Gerrit sees the above tag pattern, it will automatically mark the bug as In Progress and assign the bug to whoever submitted the patch to Gerrit.

When your patch lands on the targeted branch, Gerrit will automatically mark the bug as Fix Released.
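
For example, a commit message that uses this tag might look like the following, where the bug number is a made-up placeholder:

git commit -m "Fix package idempotence in the common cookbook

fixes: lp 1234567"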

Working with the OpenStack Code Review and CI system – Chef Edition

For too long, the state of the OpenStack Chef world had been one of duplicative effort, endless forks of Chef cookbooks, and little integration with how many of the OpenStack projects choose to control source code and integration testing. Recently, however, the Chef + OpenStack community has been getting its proverbial act together. Folks from lots of companies have come together and pushed to align efforts to produce a set of well-documented, flexible, but focused Chef cookbooks that install and configure OpenStack services.

My sincere thanks go out to the individuals who have helped to make progress in the last couple weeks, the individuals on the upstream openstack.org continuous integration team, and of course, the many authors of cookbooks whose code and work is being merged together.

OK, so what’s happened?

StackForge now hosting a set of Chef cookbooks for OpenStack

Individual cookbooks for each integrated OpenStack project have been created in the StackForge GitHub organization. Each cookbook name is prefixed with cookbook-openstack- followed by the OpenStack service name (not the project code name):

Note that we have not yet created the cookbook for Heat, but that will be coming in the Havana timeframe, for sure. Also note that the Ceilometer (metering) cookbook is empty right now. We’re in the process of pulling the ceilometer recipes out of the compute cookbook into a separate cookbook.

UPDATE: The Heat cookbook repository is now up on Stackforge.

In addition to the OpenStack project cookbooks listed above, there are three other related cookbooks:

Finally, there will be another repository called openstack-chef-repo that will contain example Chef roles, databags and documentation showing how all the OpenStack and supporting cookbooks are tied together to create an OpenStack deployment.

UPDATE: The OpenStack Chef Repository is now up on Stackforge.

Code in cookbooks gated by Gerrit like any other OpenStack project

The biggest advantage of hosting all these Chef cookbooks on the StackForge GitHub repository is the easy integration with the upstream continuous integration system. The upstream CI team has built a metric crap-ton (technical term) of automation code that enabled us to quickly have Gerrit managing the patch queues and code reviews for all these cookbook repositories as well as have each repository guarded by a set of gate jobs that run linter and unit tests against the cookbooks.

The rest of this blog post explains how to use the development and continuous integration systems when working on the OpenStack Chef cookbooks housed in Stackforge.

Prepare to develop on a cookbook

OK, so you want to start working on one of the OpenStack Chef cookbooks? Great! You will need to install the git-review plugin. The easiest way to do so is to use pip:

sudo pip install git-review

With git-review installed, clone the appropriate Git repository containing the cookbook code and set up your Gerrit credentials. Here is the code to do that:

git clone git@github.com:stackforge/cookbook-openstack-$SERVICE
cd cookbook-openstack-$SERVICE
git review -s

Of course, replace $SERVICE above with one of common, compute, identity, image, block-storage, object-storage, network, metering, or dashboard. What that will do is clone the upstream Stackforge repository for the corresponding cookbook to your local machine, change directory into that cloned repository, and set up a git remote called "gerrit" pointing to the review.openstack.org Gerrit system.

If everything was successful, you should see something like this:

jpipes@uberbox:~/gerrit-tut$ git clone git@github.com:stackforge/cookbook-openstack-common
Cloning into 'cookbook-openstack-common'...
remote: Counting objects: 506, done.
remote: Compressing objects: 100% (168/168), done.
remote: Total 506 (delta 246), reused 503 (delta 243)
Receiving objects: 100% (506/506), 81.97 KiB, done.
Resolving deltas: 100% (246/246), done.
jpipes@uberbox:~/gerrit-tut$ cd cookbook-openstack-common/
jpipes@uberbox:~/gerrit-tut/cookbook-openstack-common$ git review -s
Creating a git remote called "gerrit" that maps to:
	ssh://jaypipes@review.openstack.org:29418/stackforge/cookbook-openstack-common.git

Repeat the above for each cookbook you wish to clone and work on locally, or simply execute this to clone them all:

for i in common compute identity image block-storage object-storage network metering dashboard; do
  git clone git@github.com:stackforge/cookbook-openstack-$i
  cd cookbook-openstack-$i
  git review -s
  cd ..
done

Start to develop on a cookbook

Now that you have cloned the upstream cookbook repository and set up your Gerrit remote properly, you can begin coding on the cookbook. Remember, however, that you should never make changes in your local "master" branch. Always work in a local topic branch. This allows you to work on a branch of code separately from the local master branch you will use to bring in changes from other developers.

Create a new topic branch like so:

git checkout -b <TOPIC_NAME>

Here is an example of what you can expect to see:

jpipes@uberbox:~/gerrit-tut/cookbook-openstack-common$ git checkout -b tut-example
Switched to a new branch 'tut-example'

Once you are checked out into your topic branch, you can add, edit, delete, and move files around as you wish. When you are done making changes, you need to commit them to source control.

IMPORTANT NOTE: If you created any new files while working in your branch, you will need to tell Git about those new files before you commit. An easy way to check whether there are new files that should be tracked is to call git status before committing. git status will tell you if there are any untracked files in your working tree that you may need to add to Git:

jpipes@uberbox:~/gerrit-tut/cookbook-openstack-common$ touch something_new.txt
jpipes@uberbox:~/gerrit-tut/cookbook-openstack-common$ git status
# On branch tut-example
# Untracked files:
#   (use "git add <file>..." to include in what will be committed)
#
#	something_new.txt
nothing added to commit but untracked files present (use "git add" to track)

As the note shows, you use the git add command to add the untracked file to source control. After running git add something_new.txt, git status shows the new file staged for commit:

jpipes@uberbox:~/gerrit-tut/cookbook-openstack-common$ git status
# On branch tut-example
# Changes to be committed:
#   (use "git reset HEAD <file>..." to unstage)
#
#	new file:   something_new.txt
#

If you make changes to files, they will show up in git status as changed files, as shown here:

jpipes@uberbox:~/gerrit-tut/cookbook-openstack-common$ vi README.md 
jpipes@uberbox:~/gerrit-tut/cookbook-openstack-common$ git status
# On branch tut-example
# Changes to be committed:
#   (use "git reset HEAD <file>..." to unstage)
#
#	new file:   something_new.txt
#
# Changes not staged for commit:
#   (use "git add <file>..." to update what will be committed)
#   (use "git checkout -- <file>..." to discard changes in working directory)
#
#	modified:   README.md
#

As you can see, I edited the README.md file, and the call to git status shows that file as modified. If you want to review the changes you made, use the git diff command:

jpipes@uberbox:~/gerrit-tut/cookbook-openstack-common$ git diff
diff --git a/README.md b/README.md
index 4fbbd57..e197d3b 100644
--- a/README.md
+++ b/README.md
@@ -24,6 +24,8 @@ of all the settable attributes for this cookbook.
 
 Note that all attributes are in the `default["openstack"]` "namespace"
 
+TODO(jaypipes): Should we list all the attributes in the README?
+
 Libraries
 =========

If you are happy with the changes, you’re now ready to commit those changes to source control. Call git commit, like so:

git commit -a

This will open up your text editor and present you with an area to write your commit message describing the contents of your patch. Commit messages should be properly formatted and abide by the upstream conventions. Feel free to read that link, but here is a brief rundown of things to keep in mind (a short example follows the list):

  • Make the first line of the commit message 50 chars or less
  • Separate the first line from the rest of the commit message with a blank newline
  • Make the commit message descriptive of what the patch is and what the motivation for the patch was
  • Do NOT make the commit message into a list of the things in the patch you changed in each revision — we can already see what is contained in the patch
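
Putting that together, a well-formed commit message might look something like this (the summary and body below are invented purely for illustration):

Add an attribute for the service bind port

Previously the recipe hard-coded the bind port for the service.
Expose it as an attribute so that deployers can override it.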

Save and close your editor to finalize the commit. Once successfully committed, you now need to push your changes to the Gerrit code review and patch management system on review.openstack.org. You do this using a call to git review.
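
From within your topic branch, that is simply:

git review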

When you issue a call to git review for any of the cookbooks, a patch review is created in Gerrit. Behind the scenes, the git review plugin is simply doing the call for you to git push to the Gerrit remote. You can always see what git review is doing by passing the -v flag, like so:

jpipes@uberbox:~/gerrit-tut/cookbook-openstack-common$ git commit -a
[tut-example 66f844a] Simple patch for tutorial -- please IGNORE
 1 file changed, 2 insertions(+)
 create mode 100644 something_new.txt
jpipes@uberbox:~/gerrit-tut/cookbook-openstack-common$ git review -v
2013-05-20 12:48:54.333823 Running: git log --color=never --oneline HEAD^1..HEAD
2013-05-20 12:48:54.337044 Running: git remote
2013-05-20 12:48:54.339673 Running: git branch -a --color=never
2013-05-20 12:48:54.342580 Running: git rev-parse --show-toplevel --git-dir
2013-05-20 12:48:54.345139 Running: git remote update gerrit
Fetching gerrit
2013-05-20 12:48:55.475616 Running: git rebase -i remotes/gerrit/master
2013-05-20 12:48:55.616412 Running: git reset --hard ORIG_HEAD
2013-05-20 12:48:55.620552 Running: git config --get color.ui
2013-05-20 12:48:55.623098 Running: git log --color=always --decorate --oneline HEAD --not remotes/gerrit/master --
2013-05-20 12:48:55.626745 Running: git branch --color=never
2013-05-20 12:48:55.629703 Running: git log HEAD^1..HEAD
Using local branch name "tut-example" for the topic of the change submitted
2013-05-20 12:48:55.634665 Running: git push gerrit HEAD:refs/publish/master/tut-example
remote: Resolving deltas: 100% (2/2)
remote: Processing changes: new: 1, done    
remote: 
remote: New Changes:
remote:   https://review.openstack.org/29797
remote: 
To ssh://jaypipes@review.openstack.org:29418/stackforge/cookbook-openstack-common.git
 * [new branch]      HEAD -> refs/publish/master/tut-example
2013-05-20 12:48:56.775939 Running: git rev-parse --show-toplevel --git-dir

As you can see in the above output, a new patch review was created at the location https://review.openstack.org/29797. You can go to that URL and see the patch review, shown here before any comments or reviews have been made on the patch.

[Screenshot: Gerrit patch review 29797, before any comments or reviews]

Reviewing patches with Gerrit

Gerrit has three separate levels of reviews: Verify, Code-Review, and Approve.

The Verify (V) review level

The Verify level is limited to the Jenkins Gerrit user, which runs the automated tests that protect each cookbook repository’s master branch. These automated tests are known as gate tests.

When you push code to Gerrit, a set of automated tests is run against your code by Jenkins. Jenkins is a continuous integration job system that the upstream OpenStack CI team has integrated with Gerrit so that projects managed by Gerrit can have a series of automated check jobs run against proposed patches. Below, you can see that the Jenkins user in Gerrit has already executed two jobs — gate-cookbook-openstack-common-chef-lint and gate-cookbook-openstack-common-chef-unit — against the proposed code changes. Both jobs pass, as expected, since I haven’t actually changed anything in the code, only added a blank file and a line to the README file.

[Screenshot: Jenkins gate job results on patch review 29797]

Curious about how those gate jobs are set up? Check out the github.com openstack-infra/config project. Hint: look at these two files.
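
If you want to run roughly the same checks locally before pushing, and assuming the cookbook provides a Gemfile with the usual Chef lint and unit test tooling (foodcritic for linting, ChefSpec/RSpec for unit tests), something along these lines should approximate what the gate jobs check:

# rough local approximation of the chef-lint and chef-unit gate jobs
# (assumes foodcritic and ChefSpec are provided by the cookbook's Gemfile)
bundle install
bundle exec foodcritic .
bundle exec rspec spec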

If the Jenkins jobs fail, you will see Jenkins issue a -1 in the V column in the patch review. Any -1 from Jenkins as a result of a failed gate test will prevent the patch from being merged into the target branch, regardless of the reviews of any human.

The Code-Review (R) review level

Anyone who is logged in to Gerrit can review any proposed patch in Gerrit. To log in to Gerrit, click the “Sign In” link in the top right corner and log in using the Launchpad Single-Signon service. Note: This requires you to have an account on Launchpad.

Once logged in, you will see a “Review” button on each patch in the patchset. You can see this Review button in the images above. If you were the one that pushed the commit, you will also see buttons for “Abandon”, “Work in Progress”, and “Rebase Change”. The “Abandon” button simply lets you mark the patchset as abandoned and gets the patch off of the review radar. “Work in Progress” lets you mark the patchset as not ready for reviews, and “Rebase Change” is generally best left alone unless you know what you’re doing. ;)

Each file (including the commit message itself) has a corresponding link from which you can view the diff of that file and add inline comments, similar to the inline commenting on GitHub pull requests. Simply double-click directly below the line you wish to comment on, and a box for your comments will appear, as shown below:

[Screenshot: adding an inline comment on patch review 29797]

IMPORTANT NOTE: Unlike GitHub inline commenting on pull requests, your inline comments on Gerrit reviews are NOT viewable by others until you finalize your review by clicking the “Review” button. Your comments will appear in red as “Draft” comments on the main page of the patch review, as shown below:

[Screenshot: draft inline comments on patch review 29797]

To put in a review for the patch, click the “Review” button. You will see options for:

  • +1 Looks good to me, but someone else must approve
  • +0 No score
  • -1 I would prefer you didn’t merge this

If you are a core reviewer, in addition to the above three options, you will also see:

  • +2 Looks good to me (core reviewer)
  • -2 Do Not Merge

There is also a comment area for you to enter your review comments, directly above an area that shows all the inline comments you have made:

[Screenshot: the review comment area on patch review 29797]

After selecting the review +/- that matches your overall thoughts on the patch, and entering any comment, click the “Publish Comments” button, and your review will show in the comments of the patch, as shown below:

[Screenshot: the published review on patch review 29797]

The Approve (A) review level

Members of the core review team also see a separate area in the review screen for the Approve (A) review level. This level tells Gerrit to either proceed with the merge of the patch into the target branch (+1 Approve) or to wait (0 No Score).

The general rule of thumb is that core reviewers should not hit +1 Approve until 2 or more core reviewers (can include the individual doing the +1 Approve) have added a +2 (R) to the patch in reviews. This rule is subject to the discretion of the core reviewer for trivial changes like typo fixes, etc.

Summary

I hope this tutorial has been a help for those new to the Gerrit and Jenkins integration used by the OpenStack upstream projects. Contributing to the Chef OpenStack cookbooks should be no different than contributing to the upstream OpenStack projects now, and additional gate tests — including full integration test runs using Vagrant or even a multi-node deployment — are on our TODO radar. Please sign up on the OpenStack Chef mailing list if you haven’t already. We look forward to your contributions!