Setting Up an External OpenStack Testing System – Part 1

This post walks you through the process of establishing an external testing platform that is linked with the upstream OpenStack continuous integration platform. If you haven’t already, please read the first article in this series, which discusses the upstream OpenStack CI platform in detail; it provides all the background information on the tools needed to establish your own linked external testing platform.

What Does an External Test Platform Do?

In short, an external testing platform enables third parties to run tests — ostensibly against an OpenStack environment that is configured with that third party’s drivers or hardware — and report the results of those tests on the code review of a proposed patch. It’s easy to see the benefit of this real-time feedback by taking a look at a code review that shows a number of these external platforms providing feedback. In this screenshot, you can see a number of Verified +1 labels and one Verified -1 label added by external Neutron vendor test platforms on a proposed patch to Neutron:

Verified +1 and -1 labels added by external testing systems on a Neutron patch

Each of these systems, when adding a Verified label to a review, does so by adding a comment to the review. These comments contain links to artifacts from the external testing system’s test run for the proposed patch, as shown here:

Comments added to a review by the vendor testing platforms

The developer submitting a patch can use those links to investigate why their patch has caused test failures to occur for that external test platform.

Why Set Up an External Test Platform?

The benefits of external testing integration with upstream code review are numerous:

A tight feedback loop
The third party gets quick notification that a proposed patch to the upstream code has caused a failure in their driver or configuration. The tighter the “feedback loop”, the faster fixes can be identified.
Better code coverage
Drivers and plugins that may not be used in the default configuration for a project can be tested with the same rigor and frequency as drivers that are enabled in the upstream devstack VM gate tests. This prevents bitrot and encourages developers to maintain code that is housed in the main source trees.
Increased consistency and standards
Determining a standard set of tests that prove a driver implements the full or partial API of a project means that drivers can be verified to work with a particular release of OpenStack. If you’ve ever had a conversation with a potential deployer of OpenStack who wonders how they know that their choice of storage or networking vendor, or underlying hypervisor, actually works with the version of OpenStack they plan to deploy, then you know why this is a critical thing!

Why might you be thinking about how to set up an external testing platform? Well, a number of OpenStack projects have had discussions already about requirements for vendors to complete integration of their testing platforms with the upstream OpenStack CI platform. The Neutron developer community is ahead of the game, with more than half a dozen vendors already providing linked testing that appears on Neutron code reviews.

The Cinder project also has had discussions around enforcing a policy that any driver that is in the Cinder source tree have tests run on each commit to validate the driver is working properly. Similarly, the Nova community has discussed the same policy for hypervisor drivers in that project’s source tree. So, while this may be old news for some teams, hopefully this post will help vendors that are new to the OpenStack contribution world get integrated quickly and smoothly.

The Tools You Will Need

The components involved in building a simple linked external testing system that can listen to and notify the upstream OpenStack continuous integration platform are as follows:

Jenkins CI
The server that is responsible for executing jobs that run tests for a project
Zuul
A system that configures and manages event pipelines that launch Jenkins jobs
Jenkins Job Builder (JJB)
Makes construction/maintenance of Jenkins job config XML files a breeze
Devstack-Gate and Nodepool Scripts
A collection of scripts that constructs an OpenStack environment from source checkouts

I’ll be covering how to install and configure the above components to build your own testing platform using a set of scripts and Puppet modules. Of course, there are a number of ways to install and configure any of these components. You could install each one manually by following the instructions in its documentation. However, I do not recommend that. The problem with manual installation and configuration is two-fold:

  1. If something goes wrong, you have to re-install everything from scratch. If you haven’t backed up your configuration somewhere, you will have to re-configure everything from memory.
  2. You cannot launch a new configuration or instance of your testing platform easily, since you have to manually set everything up again.

A better solution is to use a configuration management system, such as Puppet, Chef, Ansible or SaltStack to manage the deployment of these components, along with a Git repository to store configuration data. In this article, I will show you how to install an external testing system on multiple hosts or virtual machines using a set of Bash scripts and Puppet modules I have collected into a source repository on GitHub. If you don’t like Puppet or would just prefer to use a different configuration management tool, that’s totally fine. You can look at the Puppet modules in this repo for inspiration (and eventually I will write some Ansible scripts in the OpenStack External Testing project, too).

Preparation

Before I go into the installation instructions, you will need to take care of a few things. Follow these detailed steps and you should be all good.

Getting an Upstream Service Account

In order for your testing platform to post review comments to Gerrit code reviews on openstack.org, you will need to have a service account registered with the OpenStack Infra team. See this link for instructions on getting this account.

In short, you will need to send an email to the OpenStack Infra mailing list that includes:

  • The email address to use for the system account (must be different from any other Gerrit account)
  • A short account username that will appear on code reviews
  • (optional) A longer account name or description
  • (optional but encouraged) Include your contact information (IRC handle, your email address, and maybe an alternate contact’s email address) to assist the upstream infrastructure team
  • The public key for an SSH key pair that the service account will use for Gerrit access. Please note that there should be no newlines in the SSH key

Don’t have an SSH key pair for your Gerrit service account? You can create one like so:

ssh-keygen -t rsa -b 1024 -N '' -f gerrit_key

The above produces a key pair: two files called gerrit_key and gerrit_key.pub. Copy the text of gerrit_key.pub into the email you send to the OpenStack Infra mailing list, and keep both files handy for use in the next step.
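Since the Infra team requires the public key to be on a single line, it is worth a quick sanity check before pasting it into the email. Here is a minimal sketch; it generates a throwaway pair in a temporary directory rather than touching your real gerrit_key:

```shell
# Generate a throwaway key pair and confirm the public key file is a
# single line with no embedded newlines (what Gerrit expects).
tmpdir=$(mktemp -d)
ssh-keygen -t rsa -b 1024 -N '' -f "$tmpdir/gerrit_key" -q
pub_lines=$(wc -l < "$tmpdir/gerrit_key.pub")
echo "public key line count: $pub_lines"
rm -rf "$tmpdir"
```

If the count is anything other than 1, your editor or mail client has wrapped the key, and the Infra team will not be able to use it.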

Create a Git Repository to Store Configuration Data

When we install our external testing platform, the Puppet modules are fed a set of configuration options and files that are specific to your environment, including the SSH private key for the Gerrit service account. You will need a place to store this private configuration data, and the ideal place is a Git repository, since additions and changes to this data will be tracked just like changes to source code.

I created a source repository on GitHub that you can use as an example. Instead of forking the repository, as you might normally do, I recommend just git clone’ing the repository to some local directory and making it your own data repository:

git clone git@github.com:jaypipes/os-ext-testing-data ~/mydatarepo
cd mydatarepo
rm -rf .git
git init .
git add .
git commit -a -m "My new data repository"

Now you’ve got your own data repository to store your private configuration data and you can put it up in some private location somewhere — perhaps in a private organization in GitHub, perhaps on a Git server you have somewhere.

Put Your Gerrit Service Account Private Key Into the Data Repository

The next thing you will want to do is add to the data repository the SSH key pair from the step above, in which you registered an upstream Gerrit service account.

If you created a new key pair using the ssh-keygen command above, copy the gerrit_key file into your data repository.

If you did not create a new key pair (you used an existing key pair) or you created a key pair that wasn’t called gerrit_key, simply copy that key pair into the data repository, then open up the file called vars.sh, and change the following line in it:

export UPSTREAM_GERRIT_SSH_KEY_PATH=gerrit_key

And change gerrit_key to the name of your SSH private key.

Set Your Gerrit Account Username

Next, open up the file vars.sh in your data repository (if you haven’t already), and change the following line in it:

export UPSTREAM_GERRIT_USER=jaypipes-testing

And replace jaypipes-testing with your Gerrit service account username.
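If you prefer to script these two vars.sh edits rather than make them in an editor, sed works fine. A minimal sketch against a throwaway copy in /tmp (your real vars.sh lives in your data repository; my_gerrit_key and mycompany-ci are made-up example values):

```shell
# Throwaway copy of the two vars.sh lines this section edits; your real
# vars.sh lives in your data repository.
cat > /tmp/vars.sh <<'EOF'
export UPSTREAM_GERRIT_SSH_KEY_PATH=gerrit_key
export UPSTREAM_GERRIT_USER=jaypipes-testing
EOF

# Point the variables at your own key file name and service account
# username (both values here are made-up examples).
sed -i \
  -e 's|^export UPSTREAM_GERRIT_SSH_KEY_PATH=.*|export UPSTREAM_GERRIT_SSH_KEY_PATH=my_gerrit_key|' \
  -e 's|^export UPSTREAM_GERRIT_USER=.*|export UPSTREAM_GERRIT_USER=mycompany-ci|' \
  /tmp/vars.sh
cat /tmp/vars.sh
```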

Set Your Vendor Name in the Test Jenkins Job

Next, open up the file etc/jenkins_jobs/config/projects.yaml in your data repository. Change the following line in it:

  vendor: myvendor

Change myvendor to your organization’s name.

(Optional) Create Your Own Jenkins SSH Key Pair

I have included a private/public SSH key pair (named jenkins_key[.pub]) in the example data repository. Because the private key is published there, it is no longer useful as anything other than an example, so you will want to create your own:

cd $DATA_DIRECTORY
ssh-keygen -t rsa -b 1024 -N '' -f jenkins_key
git commit -a -m "Changed jenkins key to a new private one"

Save Changes in Your Data Repository

OK, we’re done with the first changes to your data repository and we’re ready to install a Jenkins master node. But first, save your changes and push your commit to wherever you are storing your data repository privately:

git add .
git commit -a -m "Added Gerrit SSH key and username"
git push

Requirements for Nodes

On the nodes (hosts, virtual machines, or LXC containers) that you are going to install Jenkins master and slaves into, you will want to ensure the following:

  • These basic packages are installed:
    • wget
    • openssl
    • ssl-cert
    • ca-certificates
  • Have the SSH keys you use with GitHub in ~/.ssh/. It also helps to bring over your ~/.ssh/known_hosts and ~/.ssh/config files as well.
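On a Debian/Ubuntu node (the platform the upstream Puppet modules target), a quick loop tells you which of these prerequisites are already present:

```shell
# Report which prerequisite packages are installed, and collect any
# that are missing so they can be installed in one apt-get call.
missing=""
for pkg in wget openssl ssl-cert ca-certificates; do
    if dpkg -s "$pkg" >/dev/null 2>&1; then
        echo "$pkg: installed"
    else
        echo "$pkg: missing"
        missing="$missing $pkg"
    fi
done
[ -z "$missing" ] || echo "to fix: sudo apt-get install -y$missing"
```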

Setting up Your Jenkins Master Node

On the host or virtual machine (or LXC container) you wish to run the Jenkins Master node on, run the following:

git clone $YOUR_DATA_REPO data
wget https://raw.github.com/jaypipes/os-ext-testing/master/puppet/install_master.sh
bash install_master.sh

The above should create an SSL self-signed certificate for Apache to run Jenkins UI with, and then install Jenkins, Jenkins Job Builder, Zuul, Nodepool Scripts, and a bunch of support packages.

Important Note: Since publishing this article, the upstream Zuul system underwent a bit of a refactoring, with the Zuul git-related activities being executed by a separate Zuul worker process called zuul-merger. I’ve updated the Puppet modules in the os-ext-testing repository accordingly, but if you had installed the Jenkins master with Zuul from the Puppet modules before Tuesday, February 18th, 2014, you will need to do the following on your master node to get all reconfigured properly:

# NOTE: This is only necessary if you installed a Jenkins master from the
# os-ext-testing repository before Tuesday, February 18th, 2014!
sudo service zuul stop
sudo rm -rf /var/log/zuul/* /var/run/zuul/*
sudo -i
# As root...
cd /root/config; git pull
exit
cd os-ext-testing; git pull; cd ../
cp os-ext-testing/puppet/install_master.sh .
bash install_master.sh

Troubleshooting note: There is a bug in the upstream openstack-infra/config project (with a patch submitted) that may cause Puppet to error with the following:

Duplicate declaration: A2mod[rewrite] is already declared in file /home/vagrant/os-ext-testing/puppet/modules/os_ext_testing/manifests/master.pp at line 34; cannot redeclare at /root/config/modules/zuul/manifests/init.pp:236 on node master

To fix this issue, open the /root/config/modules/zuul/manifests/init.pp file and comment out these lines:

  a2mod { 'rewrite':
    ensure => present,
  }
  a2mod { 'proxy':
    ensure => present,
  }
  a2mod { 'proxy_http':
    ensure => present,
  }

When Puppet completes, go ahead and open up the Jenkins web UI, which by default will be at http://$HOST_IP:8080. You will need to enable the Gearman workers that Zuul and Jenkins use to interact. To do this:

  1. Click the `Manage Jenkins` link on the left
  2. Click the `Configure System` link
  3. Scroll down until you see “Gearman Plugin Config”. Check the “Enable Gearman” checkbox.
  4. Click the “Test Connection” button and verify Jenkins connects to Gearman.
  5. Scroll down to the bottom of the page and click `Save`
  6. Note: Darragh O’Reilly noticed that when he first did this on his machine, the Gearman plugin was not actually enabled (though it was installed). Simply restarting the Jenkins service fixed the problem, and the Gearman Plugin Config section then appeared on the Manage Jenkins -> Configure System page.

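You can also double-check from the shell that the Gearman server embedded in the Jenkins plugin is listening (it binds to port 4730 by default) by sending it the “status” admin command with netcat. This is a sketch; the function list it prints depends on which Jenkins jobs are registered:

```shell
# Send the "status" admin command to the Gearman server on its default
# port. Each registered function (Jenkins job/node combination) appears
# as one line of output when the server is up.
if (echo status; sleep 1) | nc -w 2 127.0.0.1 4730 2>/dev/null; then
    gearman_state="reachable"
else
    gearman_state="not reachable (is the Gearman plugin enabled?)"
fi
echo "gearman: $gearman_state"
```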
Once you are done with that, it’s time to load up your Jenkins jobs and start Zuul:

sudo jenkins-jobs --flush-cache update /etc/jenkins_jobs/config/
sudo service zuul start
sudo service zuul-merger start

If you refresh the main Jenkins web UI front page, you should now see two jobs show up:

Jenkins Master Web UI Showing Sandbox Jenkins Jobs Created by JJB

Testing Communication Between Upstream and Your Master

Congratulations. You’ve successfully set up your Jenkins master. Let’s now test connectivity between upstream and our external testing platform using the simple sandbox-noop-check-communication job. By default, I set this Jenkins job to execute on the master node for the openstack-dev/sandbox project [1]. Here is the project configuration in the example data repository’s etc/jenkins_jobs/config/projects.yaml file:

- project:
    name: sandbox
    github-org: openstack-dev
    node: master

    jobs:
        - noop-check-communication
        - dsvm-tempest-full:
            node: devstack_slave

Note that the node is master by default. The sandbox-dsvm-tempest-full Jenkins job is configured to run on a node labeled devstack_slave, but we will cover that later when we bring up our Jenkins slave.

In our Zuul configuration, we have two pipelines: check and gate. There is only a single project listed in the layout.yaml Zuul project configuration file, the openstack-dev/sandbox project:

projects:
    - name: openstack-dev/sandbox
      check:
        - sandbox-noop-check-communication

By default, the only job that is enabled is the sandbox-noop-check-communication Jenkins job, and it will get run whenever a patchset is created in the upstream openstack-dev/sandbox project, as well as any time someone adds a comment with the words “recheck no bug” or “recheck bug XXXXX”. So, let us create a sample patch to that project and check to see if the sandbox-noop-check-communication job fires properly.
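That “recheck” behavior comes from the trigger section of the check pipeline in layout.yaml. As an illustrative sketch of what a Zuul (v2-era) check pipeline definition looks like — the comment_filter regex and exact field values here are assumptions for illustration, not copied from the repository:

```yaml
pipelines:
  - name: check
    manager: IndependentPipelineManager
    trigger:
      gerrit:
        # Fire on every new patchset, and on "recheck" review comments.
        - event: patchset-created
        - event: comment-added
          comment_filter: (?i)^\s*recheck( (?:bug \d+|no bug))?\s*$
    success:
      gerrit:
        verified: 1
    failure:
      gerrit:
        verified: -1
```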

Before we do that, let’s go ahead and tail the Zuul debug log, grepping for the term “sandbox”. This will show messages if communication is working properly.

sudo tail -f /var/log/zuul/debug.log | grep sandbox

OK, now create a simple test patch in sandbox. Do this on your development workstation, not your Jenkins master:

git clone git@github.com:openstack-dev/sandbox /tmp/sandbox
cd /tmp/sandbox
git checkout -b testing-ext
touch mytest
git add mytest
git commit -a -m "Testing comms"
git review

Output should look like so:

jaypipes@cranky:~$ git clone git@github.com:openstack-dev/sandbox /tmp/sandbox
Cloning into '/tmp/sandbox'...
remote: Reusing existing pack: 13, done.
remote: Total 13 (delta 0), reused 0 (delta 0)
Receiving objects: 100% (13/13), done.
Resolving deltas: 100% (4/4), done.
Checking connectivity... done
jaypipes@cranky:~$ cd /tmp/sandbox
jaypipes@cranky:/tmp/sandbox$ git checkout -b testing-ext
Switched to a new branch 'testing-ext'
jaypipes@cranky:/tmp/sandbox$ touch mytest
jaypipes@cranky:/tmp/sandbox$ git add mytest
jaypipes@cranky:/tmp/sandbox$ git commit -a -m "Testing comms"
[testing-ext 51f90e3] Testing comms
 1 file changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 mytest
jaypipes@cranky:/tmp/sandbox$ git review
Creating a git remote called "gerrit" that maps to:
	ssh://jaypipes@review.openstack.org:29418/openstack-dev/sandbox.git
Your change was committed before the commit hook was installed.
Amending the commit to add a gerrit change id.
remote: Processing changes: new: 1, done
remote:
remote: New Changes:
remote:   https://review.openstack.org/73631
remote:
To ssh://jaypipes@review.openstack.org:29418/openstack-dev/sandbox.git
 * [new branch]      HEAD -> refs/publish/master/testing-ext

Keep an eye on your tail’d Zuul debug log file. If all is working, you should see something like this:

2014-02-14 16:08:51,437 INFO zuul.Gerrit: Updating information for 73631,1
2014-02-14 16:08:51,629 DEBUG zuul.Gerrit: Change  status: NEW
2014-02-14 16:08:51,630 DEBUG zuul.Scheduler: Adding trigger event:
2014-02-14 16:08:51,630 DEBUG zuul.Scheduler: Done adding trigger event:
2014-02-14 16:08:51,630 DEBUG zuul.Scheduler: Run handler awake
2014-02-14 16:08:51,631 DEBUG zuul.Scheduler: Fetching trigger event
2014-02-14 16:08:51,631 DEBUG zuul.Scheduler: Processing trigger event
2014-02-14 16:08:51,631 DEBUG zuul.IndependentPipelineManager: Starting queue processor: check
2014-02-14 16:08:51,631 DEBUG zuul.IndependentPipelineManager: Finished queue processor: check (changed: False)
2014-02-14 16:08:51,631 DEBUG zuul.DependentPipelineManager: Starting queue processor: gate
2014-02-14 16:08:51,631 DEBUG zuul.DependentPipelineManager: Finished queue processor: gate (changed: False)

If you go to the link to the code review in Gerrit (the link that was output after you ran git review), you will see that your Gerrit testing account has added a +1 Verified vote on the code review:

Successful communication between upstream and our external system

Congratulations. You now have an external testing platform that is receiving events from the upstream Gerrit system, triggering Jenkins jobs on your master Jenkins server, and writing reviews back to the upstream Gerrit system. The next article goes over adding a Jenkins slave to your system, which is necessary to run real Jenkins jobs that run devstack-based gate tests. Please do let me know what you think of both this article and the source repository of scripts used to set things up. I’m eager for feedback and critique. :)

[1] — The OpenStack Sandbox project exists for testing the integration of external testing systems with upstream. By creating a patch against this project, you can trigger the Jenkins jobs created during this tutorial.

  • Trinath Somanchi

    Excellent article working good for me..

    Awaiting for the Second part of the article. :-)

    • Jay Pipes

      Thank you , Trinath. Second article coming shortly. It turns out that LXC cannot run Devstack (due to things like https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1226855), and therefore I had to redo my test instances using KVM. So, that slowed me down a bit. Hopefully will be pushing the second article shortly as soon as I have a full successful devstack Tempest run on the slave.

      Best,

      -jay

  • Trinath Somanchi

    Doubt at lines

    Scroll down until you see “Gearman Plugin Config”. Check the “Enable Gearman” checkbox.
    Click the “Test Connection” button and verify Jenkins connects to Gearman

    I have got no gearman server installed; “Test Connection” failed for me.

    I have installed “gearman-job-server” and this set got passed.

    Can you update the document on this troubleshooting.

    • Jay Pipes

      Hi Trinath!

      OK, so it sounds like you just need to update the openstack-infra/config code. On your master, do:

      sudo -i
      cd /root/config
      git pull
      exit

      and then re-run the install_master.sh script:

      bash install_master.sh

      Note that gearman does not need to be manually installed. Gearman’s libraries are installed by the openstack-infra/config’s zuul::init.pp Puppet manifest, which is included by the os_ext_testing::master Puppet manifest.

      The LOST build message simply means that communication to and from your CI server was working, but Zuul was not able to determine what happened to the Jenkins job that was triggered. This is likely because the Jenkins gearman plugin was not activated.

      Best,
      -jay

      • Trinath Somanchi

        How can we check whether the gearman plugin is successfully activated with Jenkins? Other than the Jenkins GUI, is there any place where I can monitor the things.

        Help me in this regard.
        -
        Trinath

        • Jay Pipes

          You can check the Jenkins log file (in /var/log/jenkins). There should be a line in there saying that the Gearman plugin is enabled. As for knowing whether it is active (and successfully communicating with Gearman), I don’t know any other way :(

          • Trinath Somanchi

            Done Jay!.. Your comments helped me.! I just once again done a “save” in the jenkins config, and it worked..

            Got a +1

            :)

            Thanks a lot for the article. It really helps.

          • Jay Pipes

            Excellent news, Trinath! :)

      • Pattabi

        I face the same problem with the Gearman Plugin not appearing in the Jenkins UI after I run install_master.sh. I followed the additional steps of updating the openstack-infra/config code and re-ran install_master.sh.

        Still I do not see the Gearman Plugin Option in the Jenkins UI.

        Not sure if I am missing anything else. I do not find any log entries in the Jenkins log file other than Jenkins started message.

        Any help on this is highly appreciated.

        Regards.
        Pattabi

        • Jay Pipes

          Hi Pattabi,

          If you look in the Jenkins main log (in /var/log/jenkins/), grep for “gearman” and let me know if you see a line in the log file about the plugin. I’m curious to see if you see errors in there.

          As a last resort, you can always install the Gearman plugin manually. Go to Manage Jenkins -> Manage Plugins -> Available tab, and install the Gearman plugin…

          Best,
          -jay

          • Pattabi

            Hi Jay,

            Thanks for the response. In fact I was able to proceed further by manually installing the “gearman-job-server” and restarting the Jenkins.

            I was able to proceed until the last step in terms of committing a change on sandbox project.

            However, I do not see the messages in the zuul log file. I see the following periodic errors in the zuul.log file. Any pointers?

            2014-02-27 16:25:43,147 ERROR gerrit.GerritWatcher: Exception on ssh event stream:
            Traceback (most recent call last):
            File "/usr/local/lib/python2.7/dist-packages/zuul/lib/gerrit.py", line 64, in _run
            key_filename=self.keyfile)
            File "/usr/local/lib/python2.7/dist-packages/paramiko/client.py", line 305, in connect
            retry_on_signal(lambda: sock.connect(addr))
            File "/usr/local/lib/python2.7/dist-packages/paramiko/util.py", line 278, in retry_on_signal
            return function()
            File "/usr/local/lib/python2.7/dist-packages/paramiko/client.py", line 305, in <lambda>
            retry_on_signal(lambda: sock.connect(addr))
            File "/usr/lib/python2.7/socket.py", line 224, in meth
            return getattr(self._sock,name)(*args)
            error: [Errno 110] Connection timed out

          • Jay Pipes

            That indicates that there is either a proxy issue (can you ping review.openstack.org properly from that host/VM, and if you do: ssh -T $CI_USER_NAME@review.openstack.org -p29418 as the Zuul user, do you successfully SSH to the Gerrit host?)

            Or there may be an issue with your Gerrit SSH key. Are you sure you have added your Gerrit SSH private key to your data repository?

            Best,

            -jay

          • Pattabi

            Hi Jay,

            - I can ping review.openstack.org properly from the VM
            - I can do ssh -T $CI_USER_NAME@review.openstack.org -p 29418 as the Zuul user
            - The Gerrit SSH Key is in my data repository

            I still have the same error trace on the zuul debug log.
            Any other pointers on how to go about debugging the issue and/or other alternatives ?

            Thanks in advance.

            Regards,
            Pattabi

          • Jay Pipes

            Unfortunately, I’m kind of out of ideas here. :( The only thing I can think of is service zuul-merger stop; service zuul stop; and then rm -rf /var/log/zuul/*, and then restart both Zuul services. Then, recheck the Zuul logs to see if there’s any more information in there…

            Other than that, I’m really not sure.

            Best,
            -jay

  • mayu

    Great job, it is a guide for my ci construction. Thanks a lot.

  • Trinath Somanchi

    Hi Jay-

    I have an issue with Zuul.

    In the debug logs of zuul I see, the following

    2014-03-06 15:56:48,296 DEBUG zuul.Gerrit: Change status: NEW

    2014-03-06 15:56:48,297 DEBUG zuul.Scheduler: Adding trigger event:

    2014-03-06 15:56:48,298 DEBUG zuul.Scheduler: Run handler awake

    2014-03-06 15:56:48,299 DEBUG zuul.Scheduler: Fetching trigger event

    2014-03-06 15:56:48,300 DEBUG zuul.Scheduler: Processing trigger event

    2014-03-06 15:56:48,300 DEBUG zuul.IndependentPipelineManager: Starting queue processor: check

    2014-03-06 15:56:48,300 DEBUG zuul.IndependentPipelineManager: Finished queue processor: check (changed: False)

    2014-03-06 15:56:48,301 DEBUG zuul.Scheduler: Done adding trigger event:

    2014-03-06 15:56:48,301 DEBUG zuul.DependentPipelineManager: Starting queue processor: gate

    2014-03-06 15:56:48,302 DEBUG zuul.DependentPipelineManager: Finished queue processor: gate (changed: False)

    2014-03-06 15:56:48,302 DEBUG zuul.Scheduler: Run handler sleeping

    2014-03-06 15:56:48,303 INFO zuul.Gerrit: Updating information for 62599,32

    2014-03-06 15:57:02,561 DEBUG zuul.Gearman: Looking for lost builds

    I have enabled for ‘check’ not for gate in Zuul. But still I see not Jobs running in Jenkins.

    did I miss something.

    Kindly help me troubleshoot the same.

    Thanking you

    • Jay Pipes

      [please use a pastebin link for pasting such things in future, Trinath :) ]

      If you do:

      telnet 127.0.0.1 4730

      and then “status” in your telnet session, what shows up?

      -jay

      • Trinath Somanchi

        Sure jay, Will use pastebin

        When I do the telnet as said above,

        I get connected to 127.0.0.1 and it just stands still there.

        Is there anything we can infer from this?

        • Jay Pipes

          if you type “status”, and hit Enter, what do you see?

          • Trinath Somanchi

            i get this.. http://paste.openstack.org/show/72844/

            some how its working now .. I get the jobs running now..

            I have and FTP server to post the logs, can you guide with the article on submitting the logs to FTP and publishing the link in gerrit.

            Kindly help me

          • Jay Pipes

            I left that as an exercise for the reader to do in my last article. I will try to add another article that shows how to set this up, but you can look at the upstream JJB and Puppet configurations for some clues.

            Before that article, I need to complete an article about adding nodepool as the devstack slave VM manager…