This series has carried on for a while now. In the process, I’ve covered setting up and organizing a Subversion repository for code versioning, using NAnt for automated builds, FxCop and NDepend for code analysis and metrics, and NUnit for unit tests. The preceding sections are linked below for reference.
- source control – Subversion – setup, organizing
- compile code – NAnt
- perform code analysis – FxCop
- collect code metrics – NDepend
- run unit tests – NUnit
As a logical follow-up to the NUnit segment, it makes sense to now talk about code coverage.
What is Code Coverage?
In the previous installment, I showed how to set up and run automated unit tests with NUnit. Having the tests pass is all well and good, but it is important to know how thoroughly the code is being tested: not just each function, but the branches within it (if statements, loops, function calls, and so on). On any non-trivial project, tracking that coverage information by hand quickly gets cumbersome. Fortunately, there is a tool that can track it for you. That tool is called NCover.
NCover is a tool that runs alongside NUnit (or whichever unit test tool is being used). It analyzes the compiled code that is under test, and tracks how much of that code is executed by the unit tests. Results of the analysis are sent to an output file in XML format. This format is not overly readable as-is, but an advantage of using XML for data is that it can easily be converted into other readable formats.
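For reference, the raw NCover output looks roughly like the sketch below. This is hand-written from my memory of the 1.x-era format, so treat the element and attribute names as illustrative rather than exact:

```xml
<!-- Illustrative sketch of NCover's XML output; not an exact schema -->
<coverage profilerVersion="1.5.8" startTime="2008-01-01T12:00:00">
  <module moduleId="1" name="Wadmt.Core.dll" assembly="Wadmt.Core">
    <method name="DoWork" class="Wadmt.Core.Worker">
      <!-- one seqpnt per sequence point; visitcount 0 means never executed -->
      <seqpnt visitcount="3" line="12" column="9" document="Worker.cs" />
      <seqpnt visitcount="0" line="14" column="9" document="Worker.cs" />
    </method>
  </module>
</coverage>
```

Each sequence point maps back to a location in the source, which is what makes per-line coverage reporting possible.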
There is no GUI provided with NCover, just a command-line interface. This can be used in a NAnt build file via the exec task. But there is another way, which I will show shortly.
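To make the exec route concrete, here is a minimal sketch. The paths and argument layout are assumptions for illustration only; the actual NCover console switches vary by version, so check the tool's own usage output:

```xml
<target name="coverage">
  <!-- Hypothetical invocation: NCover profiles a NUnit console run
       and writes its coverage XML to the working directory. -->
  <exec program="C:\Program Files\NCover\NCover.Console.exe"
        workingdir="build\bin">
    <arg value="C:\Program Files\NUnit\bin\nunit-console.exe" />
    <arg value="Wadmt.Core.Tests.dll" />
  </exec>
</target>
```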
NCover started out as an open source project. It eventually became a commercial product, presumably when the people working on the project realized the market value of such a tool. NCover is now based at ncover.com. NCover’s transition from open source to commercial is documented on the NCover site. Despite it now being a commercial product, the open source versions are still available on the site. They will forever be kept at beta status, but the latest one is still quite usable.
NCover on its own is certainly usable, but is easier to use if coupled with NCoverExplorer, which is discussed next.
It was mentioned earlier that the free version of NCover does not include a GUI application, and only works through a command-line interface. This is not always the most convenient way to work. Also, as mentioned previously, NCover outputs coverage data in an XML-formatted file. To get a nice report of this data, the XML needs to be converted to a more easily readable format.
Grant Drake created NCoverExplorer to fill these needs.
NCoverExplorer is an additional tool that can be used to convert the NCover XML output into another format, namely HTML. The details of the conversion are controlled by an XSL file, which specifies how the XML should be converted to HTML. NCoverExplorer has a command-line interface that allows one to specify the source XML file, and it then does the conversion and creates a report in an HTML file.
As an additional feature, the NCoverExplorer GUI can load an XML file produced by NCover to allow interactive browsing of source code with coverage information displayed alongside.
When NCover went commercial (see above), Grant Drake announced that NCoverExplorer would become commercial as well, and be bundled with NCover. That makes sense, as they complement each other quite nicely. As NCoverExplorer is now bundled with NCover 2.x, it is no longer maintained separately, but is still available for download. Like the last open source NCover release, the latest release is very usable.
With NCover and NCoverExplorer downloaded, you can run the NCoverExplorer.exe to use the GUI application.
I’ll show how I first did coverage on my project. First, you need to click the “Run NCover” button in the top toolbar. That will open another, smaller window.
When the window first opens, you’ll see that there are some fields that need to be filled in. You’ll also notice the four tabs. Not to worry: clicking the Change buttons on the first tab will send you to the other tabs to fill in the information, and the fields on the first tab will update accordingly. I will show the other three tabs below.
You need to indicate the location of the NCover console application. Hit the … button, browse to the executable, and click OK. The version number will be filled in automatically. You can choose the log level that NCover will use; the log just records the application’s progress while it works, and is not the output file. You can leave the checkboxes checked.
The Working Directory field is the folder where your compiled assemblies are located. Just press the button and select the folder containing the assemblies to be tested. Your testing assemblies need to be in the same folder. Also, you must have compiled the subject application in debug mode, which creates a .pdb file for each assembly; NCover needs these to do its work. For the application to profile, browse to the executable of the unit test tool you are using. I am using NUnit, so that is what I have selected. Below that, you need to add each test assembly, so that NUnit knows where the tests are.
You do not usually need to do anything on this tab, but it provides some extra options. In the first section, you can indicate whether to get coverage for all assemblies that have matching .pdb files, or just the ones that you specify here. If you have multiple assemblies in your working folder, such as third-party libraries, you should explicitly specify which files you want checked for coverage. In the second section, you can specify which namespaces should be excluded from coverage, for whatever reason. The last section lets you change the name and location of the XML file that NCover outputs; by default, it goes in the folder selected earlier as the working directory.
Back on the first tab, you will see that the fields are all filled in, and the buttons below are now enabled. In the first row, you can choose to view the build script that corresponds to the options you chose previously. You can choose either the MSBuild or NAnt version. You can copy the script to your buildfile and use that. Nice that the app puts it together for you!
Note that NCoverExplorer always inserts absolute paths, which is not ideal if you plan to check the build script into source control. The absolute paths are needed for NCover to work, but you can use the build script property system to insert the local path where necessary. The command line button shows what you would need to enter to run the program straight from the command line.
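For example, with NAnt you might hoist the machine-specific roots into properties and compose the absolute paths from them. The property names below are my own invention; only the project::get-base-directory() function is actual built-in NAnt:

```xml
<!-- Hypothetical property names; define the machine-specific roots once -->
<property name="ncover.dir" value="C:\Program Files\NCover" />
<property name="build.dir"
          value="${project::get-base-directory()}\build\bin" />
```

The absolute paths in the generated script can then be replaced with ${ncover.dir} and ${build.dir}, so only the property definitions need to change per machine.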
Below that, you can press the Run button to set NCover to work. You can watch the progress on the right side of the NCover window. If the run is successful, the output XML file will automatically be opened in NCoverExplorer.
Browsing NCover Results
Now we are at the most useful feature of NCoverExplorer: the ability to visually browse the coverage data created by NCover. The left pane has a tree view of analyzed assemblies, which are further broken down into namespaces, classes, and functions. Black text indicates full test coverage, red is for partial coverage, and gray means no coverage at all. The coverage percentages are shown as well. Just the tree view alone is very helpful, but there’s more to see!
The top right shows some detail information when a class or function is selected. The information includes the number of times the function is called, the coverage percentage, the number of unvisited points, the number of points that were visited, and the line number where the function starts in the source code.
Finally, the bottom right pane shows the source code of the current class. This is why the .pdb files are required. Code highlighted in blue has been covered by tests; code highlighted in red has not.
There is a statistics summary view available. It can be viewed by clicking the Statistics Summary button on the toolbar. The information shown includes the number of files, number of classes, number of members, and number of unvisited functions.
An options window can be accessed by clicking the Options button in the toolbar. Most of the sections there are for customizing NCoverExplorer’s appearance, but the last two sections are of interest.
This section lets you specify your preferences for coverage levels. The Satisfactory Coverage % slider is of most interest here: it will control the overall acceptable coverage threshold. If you change this, then click OK to close the dialog, your tree view will update, as mine did when I lowered the threshold to 25%.
Red items are those below the selected threshold, blue items are above the threshold.
This is the other interesting part of the options dialog. You can specify what should be excluded from coverage, meaning the matching code will not be counted towards your total coverage level. You can set exclusions for assemblies, namespaces, or classes, or any combination thereof.
In this case, I want to focus on my Core assembly, since that is where most of the project is. So I decided to exclude the DataAccess assembly from consideration for now; I will turn my attention to it later. I entered Wadmt.DataAccess into the text field, clicked Add, and closed the dialog. Lo and behold, the treeview was updated (shown below): the DataAccess project is still listed, but with a red X beside it, indicating that it is excluded.
You can actually do this from the main window, without using the options dialog. In the toolbar, there is a pair of buttons (shown at left). If you select an assembly, namespace, class, or function in the treeview, and then click the red X in the toolbar, that selected item will be excluded from the results. It is still visible, but marked as excluded. You can select the same item and click the green + to have it included in the results again.
My successful unit tests at this point number 102. As the coverage results show, I have quite a ways to go before a decent portion of this project is tested! I’m working on some refactoring in my Core assembly to enable further testing (I may write more about this refactor-to-test situation in a later article).
There is one more feature of NCoverExplorer to demonstrate: the report generator. You can access it via the Reports button in the toolbar.
This is an easy dialog. You can fill in the name of the project being reported on (so the name will appear in the output), select the location and name of the report, choose whether you want HTML or XML output, and choose the type of report.
There are four report types:
- Module % Summary
- Module/Namespace % Summary
- Module/Class % Summary
- Module/Class Function Summary
Rather than try to explain the differences between them, I have included screenshots of the respective outputs below, so you can judge for yourself.
There are also two checkboxes that can be useful. Show Excluded nodes will list any excluded items (as discussed previously) at the bottom of the report. Display After Generation will automatically open the report in your web browser.
Integrating NCover and NCoverExplorer into NAnt Builds
So the NCoverExplorer GUI is pretty nice. The interactivity is a big plus. And you can generate a nice HTML report to summarize the results. However, the GUI is not that useful when working on a project and wanting the coverage data quickly.
Since NCover and NCoverExplorer have command-line interfaces, one could use the functionality they provide through an exec task. But the command-line options are complex and hard to get right. So Grant Drake created NAnt and MSBuild tasks for NCover and NCoverExplorer. The tasks can be imported into the buildfile, and used in place of the generic exec task.
Documentation for the tasks (for NAnt, as that is what I am using) is available online.
To use the tasks provided by NCoverExplorerExtras, the following line needs to be added near the top of the build file:
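With NAnt, the usual way to import custom tasks is a loadtasks element pointing at the task assembly. A sketch, assuming a default NCoverExplorer install location and the NCoverExplorer.NAntTasks assembly name:

```xml
<!-- Path and assembly name are assumptions; adjust to your install -->
<loadtasks assembly="C:\Program Files\NCoverExplorer\NCoverExplorer.NAntTasks.dll" />
```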
Here is the full task that I use. It runs NCover, which in turn runs the test assemblies through NUnit and collects the coverage data, and then NCoverExplorer, which converts the XML output of NCover into a nice HTML report. The task can be added to the project’s .build file, the one that NAnt uses. The various parameters are documented for the NCover and NCoverExplorer tasks, linked above.
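A sketch of such a target is below. The attribute names follow my recollection of the NCoverExplorerExtras tasks and should be checked against the task documentation; the paths, target names, and assembly names are placeholders:

```xml
<!-- Sketch only: attribute names may differ in your version of the tasks -->
<target name="coverage" depends="compile">
  <!-- Run the test assemblies through NUnit under NCover's profiler -->
  <ncover program="C:\Program Files\NCover\NCover.Console.exe"
          commandLineExe="C:\Program Files\NUnit\bin\nunit-console.exe"
          commandLineArgs="Wadmt.Core.Tests.dll"
          workingDirectory="build\bin"
          coverageFile="build\coverage.xml" />
  <!-- Convert the XML coverage data into an HTML report -->
  <ncoverexplorer program="C:\Program Files\NCoverExplorer\NCoverExplorer.Console.exe"
                  projectName="Wadmt"
                  reportType="ModuleClassSummary"
                  outputDir="build"
                  htmlReportName="CoverageReport.html">
    <fileset>
      <include name="build\coverage.xml" />
    </fileset>
  </ncoverexplorer>
</target>
```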
There is a prerequisite to this task: the project and the test assemblies must be built first, in debug mode, so that .NET generates the associated .pdb files. NCover needs those files to gather the coverage information. If they are in the same folder as the matching .dll files, it should just work.
A screenshot of the task running is shown below.
A successful run of the task should put an HTML file in the location specified by the htmlReportName parameter of the ncoverexplorer task above. The result I get is as follows. I like the format of the third report shown above, so that is what I am using.
This is a great way to get a quick, summary view of where your test coverage is at. It isn’t very detailed, however, and is not interactive. For the detail view, you will want to go back into NCoverExplorer. You can load an NCover output file by clicking the Open button and selecting the file. The results will be loaded into NCoverExplorer for your viewing pleasure.
As an additional note, if you don’t like the HTML report that is generated, you can edit the CoverageReport.xsl file located in the NCoverExplorer directory. The file controls how the XML data is converted to HTML. So you can change what information is displayed, and how it is displayed. You can even adjust the embedded stylesheet to your liking.
Before I finish up, I want to say something about tracking code coverage. You should understand by now that coverage is a useful metric to have. However, you should also realize that coverage isn’t everything. It does tell you how much of your code is exercised by tests, but it doesn’t say anything about the quality of those tests, or of the code being tested. Your code can be well tested, and still be of poor quality.
Also, there are some parts of code that are inherently difficult or impossible to unit test: data access code, code that touches the filesystem or the network, code that handles the application’s UI, and so on. This will make 100% coverage unlikely.
The point is, code coverage is not a silver bullet. Your coverage data should be used in conjunction with your other tools and techniques – unit tests with NUnit, code analysis with FxCop, metrics with NDepend, and so on. Not to mention code reviews.
The best thing to do is to consider your code coverage objectively. Look at your application and try to determine what coverage level is reasonable, based on the factors mentioned above. Coverage data is a tool and a guideline, not an end.
On the lighter side, Kyle Baley recounts how he got obsessed about the coverage level in one of his projects. A really funny read!
As mentioned earlier, NCover went from an open source project to a commercial product. The free version is still available for use, but is no longer maintained or officially supported. The commercial NCover product, meanwhile, has continued to be updated, maintained, and given new features over the last year. It supports .NET 2.0 and beyond.
The free version is satisfactory for my needs at present, but I may look at purchasing a copy of the commercial product in the near future. The pricing on the NCover site is $149 for the professional edition and $299 for enterprise; the enterprise version contains additional features over the pro version. The pricing is a bit steep, but if the product is useful enough, the cost would become negligible.
There is a trial version of NCover 2 available for download, good for a month. I think I will give it a run, and see if it is noticeably better than what I have now. There is also an upcoming version 3.0, now in public beta. That would be interesting to take a look at.
I’ve shown how NCover can gather code coverage statistics when running unit tests, and how NCoverExplorer can convert that coverage data into an easily understandable format. I then showed a task for NAnt that handles the collection and conversion of coverage data as part of the project build process.
If you are taking your testing seriously, then it is easy to see the value of code coverage statistics. They tell you how thoroughly your code base is covered by your tests, and that is valuable information. In my case, I have a project of some size, and it is plain from my results that I still have a good ways to go before my code is reasonably well covered. Simply by viewing the coverage report after a build, I can easily determine which area to cover next.
And remember: code coverage is not an end, but part of the process!