March 2010 - Posts
I installed the latest version of NDepend today and gave it a try. Last time I blogged about NDepend, I used it on a small solution, and while the generated report was full of useful information, I did not use the product at full strength: as its name suggests, NDepend is about analyzing dependencies, and my solution was too small for interesting observations. This time I plan to analyze a large solution consisting of all of our projects, with the exception of the Web administration projects, which are not so interesting for this kind of analysis. The solution file was generated using a utility described earlier and contains 140 projects. I had never worked with such a large solution (usually I group projects into much smaller solutions), so I was a bit uncertain about how long it would take to load on my Dell D830 notebook. But it started up relatively quickly.
The biggest news about NDepend 3.0 is that it is now fully integrated into the Visual Studio development environment. All of its menus and windows are now part of the development session, and the windows, of course, can be docked.
The very first impression of NDepend: it is fast, blazingly fast. As you can see from the summary below, it took only 18 seconds to analyze the 1188 source files in the 140 projects of my solution: about 15 milliseconds per file.
So these are the metrics for my large solution, though I disagree with one detail: the report states that the solution has only 1 attribute class. Yes, it has only one class with .NET Attribute as its immediate parent, but there are several other classes that derive from Attribute indirectly. One class derives from TypeMock.DecoratorAttribute, which inherits from Attribute. In addition, there are several classes inheriting from PostSharp.Laos.OnMethodBoundaryAspect, and walking its inheritance tree brings us to MulticastAttribute, which inherits from Attribute. I believe all such classes should be classified as attribute classes (since this is how they are used).
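The distinction is easy to demonstrate in plain C#. The DemoAttribute/SpecialDemoAttribute names below are hypothetical, just for illustration: reflection's Type.IsSubclassOf walks the whole inheritance chain, so it recognizes an indirect descendant of Attribute even where a check of the immediate base class would not.

```csharp
using System;

class DemoAttribute : Attribute { }            // direct descendant of Attribute
class SpecialDemoAttribute : DemoAttribute { } // indirect descendant of Attribute

class Program
{
    static void Main()
    {
        // BaseType only sees the immediate parent...
        Console.WriteLine(typeof(SpecialDemoAttribute).BaseType == typeof(Attribute)); // False

        // ...but IsSubclassOf walks the whole inheritance tree.
        Console.WriteLine(typeof(SpecialDemoAttribute).IsSubclassOf(typeof(Attribute))); // True
    }
}
```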
Enough with attribute classes. I browsed the report and noticed that I would like to change a few metrics’ parameters right away. For example, the list of too-long type names included all types with names longer than 35 characters. While this seems a reasonable limit, some exclusions from such a rule can also be reasonable. The top 10 entries in this list ended with either “Exception”, “Preset”, “Response” or “Request”. “Preset” is the suffix for a special category of classes that we use in tests, “Response”/“Request” are used in Web service communication, and “Exception” is, of course, the suffix for exceptions. I want to see classes with really long names, so how do I exclude these special classes from the rule?
It turned out to be very easy. I opened the CQL Query Explorer and double-clicked the rule. NDepend displayed the CQL Query Edit window, and it was obvious what I had to do to customize the rule. Below is the updated query:
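The query itself was shown as a screenshot; as a rough sketch (assuming NDepend's CQL `NameLength` and `NameLike` conditions, and that my suffix regexes are right), the customized rule could look something like this:

```
// <Name>Type names should not be too long</Name>
WARN IF Count > 0 IN SELECT TYPES WHERE
  NameLength > 35 AND
  !NameLike "Exception$" AND
  !NameLike "Preset$" AND
  !NameLike "Response$" AND
  !NameLike "Request$"
```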
The simplicity of rule customization encouraged me to start inspecting other details of the report. One of the most important was the list of methods to refactor. It also had to be customized, because this is what I saw:
First, I think the table would look more informative if it highlighted the figures that violate the metrics. Otherwise you have to browse every column and compare its data with the values from the respective CQL query. But what really deserves customization is that the top of the list is occupied by class constructors whose number of parameters exceeds the recommended maximum (5). Should an exception be made for constructors? I believe it should, at least these days, when developers tend to use IoC containers and compose large applications with constructor injection. But how do I specify the constructor exclusion? Luckily, I did not even have to browse the NDepend online documentation: NDepend supports IntelliSense.
Excluding constructors from the query resulted in a much more interesting list of methods (below). They are candidates for refactoring for different reasons: number of lines, number of IL instructions, nesting depth, number of variables, and so on. Again, I would appreciate it if the offending figures were highlighted.
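For reference, the constructor exclusion amounts to one extra condition in the CQL query. A sketch (assuming CQL's `IsConstructor` condition; the metric names are real CQL conditions, but the thresholds here are illustrative and may differ from the actual rule):

```
// <Name>Quick summary of methods to refactor</Name>
WARN IF Count > 0 IN SELECT METHODS WHERE
  (NbLinesOfCode > 30 OR
   NbILInstructions > 200 OR
   ILNestingDepth > 4 OR
   NbParameters > 5 OR
   NbVariables > 8) AND
  !IsConstructor
```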
It took me just minutes to apply these customizations, and what is most encouraging is that I did not have to read anything about what these metrics are or how to adjust them to fit my needs: everything is intuitive, with additional explanatory text shown alongside the queries and reports. I need more time to interpret the metrics and diagrams related to dependencies – the solution is too big to give a simple picture. I will leave that for a second look. To be continued…
Craig Andera, in his blog post, showed yet another Fibonacci algorithm, this one using the “yield” operator.
private static IEnumerable<int> Fibonacci()
{
    yield return 0;
    yield return 1;
    int a = 0;
    int b = 1;
    while (true)
    {
        int temp = a + b;
        a = b;
        b = temp;
        yield return b;
    }
}
Now it's possible to fetch Fibonacci numbers in this manner:
static void Main(string[] args)
{
    foreach (int a in Fibonacci())
    {
        Console.Write(a + " more (y/n)?");
        string more = Console.ReadLine();
        if (more.ToUpper() != "Y")
            break;
    }
}
As you can see, the code in the Main procedure uses a “foreach” statement, but the Fibonacci sequence is endless, so it cannot be populated in advance. Without “yield” we would have to create temporary state variables (two, actually: to store “a” and “b”) and pass them to a GetNextFibonacci method that would produce the next number and return the updated “a” and “b”. But with yield it is possible to compute results on demand.
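For comparison, here is a sketch of that yield-free alternative (GetNextFibonacci is a hypothetical helper, not code from Craig's post): the caller owns the sequence state and must thread it through every call, which is exactly the bookkeeping that yield hides.

```csharp
using System;

class FibonacciWithoutYield
{
    // Without yield, the caller carries the state ("a" and "b") and passes it
    // back in on every call; the method advances the state and returns the
    // next Fibonacci number.
    public static int GetNextFibonacci(ref int a, ref int b)
    {
        int temp = a + b;
        a = b;
        b = temp;
        return b;
    }

    static void Main()
    {
        int a = 0, b = 1;
        Console.Write("0 1");
        for (int i = 0; i < 5; i++)
            Console.Write(" " + GetNextFibonacci(ref a, ref b));
        Console.WriteLine(); // prints: 0 1 1 2 3 5 8
    }
}
```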
Roy Osherove listed the advantages of NUnit over MsTest but also mentioned one MsTest strength that can be crucial for many developers: “the integration with other team system tools and reporting is just beyond compare and the reporting alone helps alot to find recurring breaking tests, code churn vs. new tests and others”. This reminded me of something I recently read in Testing ASP.NET Web Applications by Jeff McWherter and Ben Hall, where they said that the forthcoming release of Visual Studio 2010 would make it possible to configure the VS test runner to execute tests that use a syntax different from MsTest.
I am a faithful user of TestDriven.Net, but I know that the ability to use the built-in Visual Studio test runner is an important factor that may affect the choice of unit test framework. So it would be nice to separate these decisions: the selection of a unit test framework and the selection of a test runner. I asked in Microsoft’s VS 2010 forum whether the new version of Visual Studio is really that flexible when it comes to configuring its test runner. Unfortunately not. Microsoft’s Euan Garden answered: “this was something we wanted to do in the release but never made it into the product.” However, Euan gave a couple of hints: Gallio and an NUnit integration CodePlex project.
I checked CodePlex and found NUnitForVS: NUnit integration for Visual Studio 2008. I downloaded and ran the installer, and within 5 minutes I was able to make the Visual Studio development environment treat NUnit tests as if they were native MsTest tests. The only preparation (in addition to installing NUnitForVS) was to open the test project file in a text editor and add the following entry to the first PropertyGroup:
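The entry was shown as a screenshot. If I recall correctly, it is the ProjectTypeGuids element, where the first GUID marks a test project and the second marks a C# project; these are the standard Visual Studio 2008 project type GUIDs, but treat them as an assumption and verify against the NUnitForVS documentation:

```
<ProjectTypeGuids>{3AC096D0-A1C2-E12C-1390-A8335801FDAB};{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}</ProjectTypeGuids>
```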
This has to be done for every test project that uses NUnit; otherwise Visual Studio will not be able to classify the project as a test project (and why should it? – the only type of test project it natively recognizes is MsTest). After this change Visual Studio accepts NUnit tests, and you will see all your tests in the Test View window:
So far so good. The next checkpoint is to run and debug some of these tests. This also works well, and test results are displayed in the respective window:
Note the very last test in this list, the one called “Add(2,3,5)”. This is a parameterized test implemented using the TestCase attribute:
[TestCase(2, 3, 5)]
public void Add(int x, int y, int sum)
{
    Calculator calc = new Calculator();
    decimal result = calc.Add(x, y);
    Assert.AreEqual(sum, result, "Incorrect result");
}
So using NUnitForVS we can exploit features of NUnit 2.5 that do not exist in MsTest, and Visual Studio still handles them correctly. This is good news.
When it comes to collecting code coverage, the situation is a little trickier. When I open the code coverage window, I see the not-very-encouraging message “Code Coverage is not enabled for this test run”:
Actually, this is the correct message: you have to explicitly enable code coverage instrumentation. But how? Apparently our solution still lacks some piece, but luckily it is easy to fix. All we need is to add a test configuration file with the extension “testrunconfig”. Then we can activate its configuration and enable code coverage collection. One simple way to add such a file is to add and then delete a “real” MsTest project in the solution. The MsTest project will be gone but will leave a trace in the form of a test configuration file:
If you now enable code coverage instrumentation for the assemblies in your solution and rerun the tests, you will see the coverage summary:
Pretty useless, isn’t it? The summary displays coverage only for the test assembly and not for the actual code under test. Why?
More googling, and the explanation came from Peter Stephens’s blog post: even though coverage instrumentation is enabled for all assemblies in our solution, one of the assemblies was taken from the wrong place. The assembly under test exists both in its own “bin” folder and in the “bin” folder of the test assembly, and it is from the latter that it has to be taken. To resolve the problem you just need to add the assembly from the right place:
Note the two copies of UnitTestDemo.dll. The unchecked one is the one that was added initially; the checked one was added manually. And everything works:
Of course, I have only tried NUnitForVS on a very simple project so far. I plan to check it out with more complex code. But it looks quite promising, opening the Visual Studio test runner to NUnit – the most popular unit test framework.