Python Code Inspection with SonarQube

Last time, I wrote about setting up Travis CI to effortlessly perform continuous integration and testing. My next step was to determine what was actually being tested in my Python Chip 8 emulator, and improve areas with insufficient test coverage. While code coverage isn't the best metric for code quality, it does at least provide visibility into what you are not actually testing.


A good tool that allows you to inspect your code is SonarQube (previously just called Sonar). Available as a standalone server, SonarQube gets you up and analyzing code in minutes. By default, the out-of-the-box configuration provides an H2 on-disk database that isn’t rated for production, but doesn’t require any external dependencies (like PostgreSQL or MySQL). To get it up and running, simply unzip it and run it. On Linux this looks like:

./sonarqube-4.3/bin/linux-x86-64/sonar.sh start

This will start SonarQube on your local machine, and set the server listening on port 9000. To get to it, simply browse to http://localhost:9000 in your web browser. When you initially visit the link, there will be no projects loaded. To load your project into SonarQube, you will need one additional piece of software – the Sonar Runner.

Sonar Runner

The Sonar Runner is responsible for running your test cases and posting the results to SonarQube. In order for the Sonar Runner to know what to do, each project needs a file called sonar-project.properties (unless you are using a Maven module – but that's for a future post!). The properties file stores some pretty standard stuff:

sonar.projectKey=chip8:python
sonar.projectName=Chip8 Python
sonar.projectVersion=1.0
sonar.sources=chip8
sonar.tests=test
sonar.core.codeCoveragePlugin=cobertura
sonar.dynamicAnalysis=reuseReports
sonar.python.coverage.reportPath=coverage.xml

The sonar.project* properties should be fairly self-explanatory. The sonar.sources property tells the Sonar Runner where the source files for the project are located. The sonar.tests property tells the Sonar Runner where the unit tests are located. The last three lines tell the Sonar Runner where to find the coverage reports, and what format to find them in. Note that the sonar.sources and sonar.tests properties need to point to different sub-directories. If you keep them in the same directory, you will get errors such as:

ERROR: Error during Sonar runner execution
ERROR: Unable to execute Sonar
ERROR: Caused by: File [relative=chip8/, abs=/export/disk1/emulators/python/chip8/chip8/] can't be indexed twice. Please check that inclusion/exclusion patterns produce disjoint sets for main and test files
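To avoid that error, one possible layout (the paths here are illustrative, not necessarily the emulator's actual layout) keeps the main and test files in disjoint directories:

```
chip8-python/
    sonar-project.properties
    chip8/              <- sonar.sources
        __init__.py
        cpu.py
    test/               <- sonar.tests
        test_cpu.py
```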

Nosetests and Coverage

Okay, so with Sonar Runner configured, we need one more tool. Sonar Runner by itself will not run the unit tests and gather coverage information. We need to use another package called nosetests. Nose is an advanced test runner that can easily be installed with pip. We also need the coverage tool. These can be installed with:

pip install nose
pip install coverage

Once installed, run nosetests to execute the unit tests and gather coverage information for the source code. The following line runs the test runner with branch coverage enabled, and produces an XML report that the Sonar Runner will use:

nosetests --with-coverage --cover-package=chip8 --cover-branches --cover-xml

Note the --cover-package option. This restricts the coverage report to the chip8 package – without it, every Python source file imported during the run will be included in the coverage report.
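To see what nose picks up, here is the shape of a test it will discover automatically – the function and file names are hypothetical stand-ins, not code from the actual emulator:

```python
# test_cpu.py -- nose discovers functions named test_* automatically.

def decode_register(opcode):
    """Extract the target register from bits 8-11 of a 16-bit opcode.

    A hypothetical stand-in for the kind of helper found in a Chip 8 CPU.
    """
    return (opcode & 0x0F00) >> 8


def test_decode_register():
    # Opcode 0x6A02 (set VA to 0x02) targets register 0xA.
    assert decode_register(0x6A02) == 0xA
```

Because nose collects any `test_*` function or `TestCase` subclass under the test directory, no extra registration is needed for new tests to appear in the coverage report.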

Putting It All Together

With SonarQube, Sonar Runner, and Nose, you are now ready to start inspecting your code. A typical session would be to make some changes to a source file, then run the following:

nosetests --with-coverage --cover-package=chip8 --cover-branches --cover-xml
sed -i 's/filename="/filename=".\//g' coverage.xml
sonar-runner

The nosetests line we have seen before. But what's with the sed command? As I discovered, it's a work-around to make the Sonar Runner properly identify the file names in the coverage.xml file. Without it, the Sonar Runner will discard the coverage metrics for the files in the chip8 package (see here and here for more information and discussion). Finally, the sonar-runner command will execute the runner and post the results. Once again, visit http://localhost:9000 to see changes to your project.
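If sed isn't available (on Windows, for example), the same rewrite can be done with a few lines of Python – a sketch of the work-around, not part of the official toolchain:

```python
def prefix_filenames(xml_text):
    """Prefix each filename attribute in a Cobertura XML report with './'.

    Equivalent to: sed -i 's/filename="/filename=".\//g' coverage.xml
    """
    # str.replace substitutes every occurrence, matching sed's /g flag.
    return xml_text.replace('filename="', 'filename="./')


# Typical usage:
# with open("coverage.xml") as report:
#     fixed = prefix_filenames(report.read())
# with open("coverage.xml", "w") as report:
#     report.write(fixed)
```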

Interpreting the Results

As I mentioned before, testing using coverage as the guiding principle isn’t the best way to ensure you’ve tested everything. For a good run-down of why this is the case, check out Ned Batchelder’s talk at PyCon 2009 called Coverage testing, the good and the bad.

As a blunt tool, SonarQube can at least tell you what you’re not testing. On the dashboard for the project, you can see metrics related to the unit test coverage and the unit test success:


Clicking on the unit test coverage report will display the coverage breakdown per module. Clicking on a module will provide coverage details for that file. For example, checking out the CPU code for the Python emulator, we see that the code coverage is 69.6%:


The green lines at the left-hand side of the code listing represent lines of code that have been covered by running the unit test suite. Scrolling down a little in the code, the functions execute_instruction and execute_logical_instruction were not tested by the unit test suite. SonarQube nicely highlights these areas in pink so that we can quickly see what we're not testing:


Now it’s up to me to go back and write tests to ensure that those functions are working as expected. Then, I can re-run the test suite, and perform another SonarQube analysis on the code.
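A test for one branch of execute_instruction might look like the following. The CPU class here is a deliberately simplified, hypothetical version of the emulator's, implementing only the standard Chip 8 6XNN (load immediate) opcode:

```python
import unittest


class CPU(object):
    """A stripped-down, hypothetical Chip 8 CPU supporting one opcode."""

    def __init__(self):
        self.pc = 0x200            # Chip 8 programs load at address 0x200
        self.registers = [0] * 16  # the 16 general-purpose registers V0-VF

    def execute_instruction(self, opcode):
        if (opcode & 0xF000) == 0x6000:  # 6XNN: set register VX to NN
            self.registers[(opcode & 0x0F00) >> 8] = opcode & 0x00FF
        self.pc += 2               # each Chip 8 instruction is two bytes


class TestExecuteInstruction(unittest.TestCase):
    def test_load_immediate(self):
        cpu = CPU()
        cpu.execute_instruction(0x6A02)  # set VA to 0x02
        self.assertEqual(cpu.registers[0xA], 0x02)
        self.assertEqual(cpu.pc, 0x202)
```

Each new test like this turns a pink line in the SonarQube listing green on the next analysis run.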


SonarQube and the Sonar Runner provide a simple and effective way to inspect what your unit tests are actually testing with only a few extra packages. This only scratches the surface of what SonarQube can actually do. In a future post, I will examine some of the other SonarQube metrics, and how they can help improve code quality.

Travis CI and GUI Testing

Writing computer emulators in my spare time is actually something I really enjoy doing. The first emulator I wrote was a simple Chip 8 emulator in C. While that was a good learning experience, debugging it was frustrating. Simple unit test frameworks like MinUnit work well for C code, but writing more extensive tests can be time consuming, especially if you dive right into a project first, without following some type of Test Driven Development (TDD) philosophy. So, my second cut of the emulator was born – this time a Chip 8 emulator written in Python with the goodness of a unit test framework behind the language, and TDD on my mind.

Once I got the bulk of the new emulator up and running with unit tests, most of my changes were style tweaks in order to satisfy Pylint, as well as to correct some of the comments in the code. I started to grow tired of running all of my unit tests each time I made a change to the codebase since it was unlikely that my changes would impact previously tested and corrected code. So, I turned to the power of the web to help me offload the burden of locally running the full suite of tests by looking for a good Continuous Integration (CI) service.

Finding a Continuous Integration Service

While Jenkins, Bamboo and TeamCity are all good CI servers that I have used in the past (Jenkins and TeamCity can both be set up at zero cost), they all require a local install. While I'm not against setting up my own services, I wanted something I could hook into with little maintenance on my end. In other words, if I was running on a laptop or netbook, I wanted the power to simply make my changes, push to a remote repository, and have the unit tests run automatically. To address this need, I quickly found Travis CI, a continuous integration service that is freely available for open source projects.

Talking to Travis CI

Modifying my GitHub repo to make use of Travis CI was as simple as logging into Travis with my GitHub credentials, letting it sync up, and flipping the switch on Travis to tell it to scan my repos when changed via a commit hook. From there, all I needed was a simple YAML configuration file (.travis.yml) in my project’s root directory to configure it for Travis. Here was the configuration file:

language: python
python:
  - "2.7"
script:
  - nosetests --with-coverage --cover-package=chip8

I simply added it to my git repository, committed the changes, and pushed it to master:

git add .travis.yml
git commit -m "Travis YAML file"
git push origin master

Then, I watched on Travis as the changes were picked up on the repo, and my project was built with the unit tests.

Testing GUI Applications on Travis

While Travis CI was really good, I quickly ran into a problem – I had unit tests for my Chip 8 screen class that needed to write to an actual display while they ran. After scouring the docs, I quickly found that Travis CI supports GUI testing through the use of the X Virtual Framebuffer (Xvfb). All you need to do is set the display port in your configuration, and start the X Virtual Framebuffer before the tests run:

env:
  - DISPLAY=:99.0
before_script:
  - sh -e /etc/init.d/xvfb start

All together, the configuration file is:

language: python
python:
  - "2.7"
env:
  - DISPLAY=:99.0
before_script:
  - sh -e /etc/init.d/xvfb start
script:
  - nosetests --with-coverage --cover-package=chip8

Problem solved! Now I can run the full range of tests on my projects each time I push to my repo. Travis CI even emails me when I’ve broken a build. Travis CI also offers a service called Travis Pro for private repos if you have code that you need to keep closed source.


Travis CI is really easy to use, and for open source projects, offers simple configuration and flexibility. If you are looking for continuous integration, and don’t want to have to turn to tools such as Jenkins or TeamCity, then give Travis CI a shot. With a few simple changes, you can even test your GUI application using Travis.