Friday 18 June 2010

C-eeing Thru Your Code


Ladies and gentlemen, let me introduce you to this exciting library, CThru. (I planned to add several jokes here, but changed my mind). This is an open source AOP framework built on top of Typemock Isolator. Recently I added several improvements to the built-in aspects, and thought I might write a blog post about it.

CThru requires Typemock Isolator to be installed on the target box, so you probably want to use it in your test projects. CThru does not require you to modify the original code or the compiled binaries, so you're free to use it on, say, BCL libraries (except for mscorlib). It works by intercepting method calls and adding to or changing their behavior. Each aspect is responsible for deciding which calls to intercept by implementing the ShouldIntercept method. You can make the decision based on the target instance, class name, or method name. For example, you could apply an aspect to all implementers of a particular interface, or to all types in a particular namespace.

To make it work, you register all the necessary aspects by calling CThruEngine.AddAspect(), then call CThruEngine.StartListening(). The cleanup is done by calling CThruEngine.StopListeningAndReset().
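In a test fixture, that lifecycle typically looks like the sketch below. Only the three CThruEngine calls come from the description above; the NUnit attributes, the namespace, and the parameterless TraceAspect constructor are assumptions to check against your version:

```csharp
using NUnit.Framework;
using CThru; // assumed namespace; adjust to your CThru version

[TestFixture]
public class TracingTests
{
    [SetUp]
    public void StartCThru()
    {
        // Register whatever aspects you need, then start intercepting.
        CThruEngine.AddAspect(new TraceAspect());
        CThruEngine.StartListening();
    }

    [TearDown]
    public void StopCThru()
    {
        // Always clean up, or the aspects will leak into other tests.
        CThruEngine.StopListeningAndReset();
    }

    [Test]
    public void SomeTest()
    {
        // Any method calls made here pass through the registered aspects.
    }
}
```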

So, how do you write your own aspect? First, you inherit from the abstract Aspect class and implement the ShouldIntercept method. As I said before, it determines the condition under which the aspect is applied. To add specific behavior, you override the MethodBehavior or ConstructorBehavior method. By default, your code runs before the actual method is invoked. You can also skip the original method, return a custom value, or throw an exception by manipulating the properties of the DuringCallbackEventArgs argument. If you want to add some behavior after the original method is invoked, you can execute the Aspect.CallOriginalMethod method (don't forget to set e.MethodBehavior = MethodBehaviors.SkipActualMethod so that it isn't called twice).
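Put together, a custom aspect might look roughly like this. It is a sketch only: Aspect, ShouldIntercept, MethodBehavior, CallOriginalMethod, DuringCallbackEventArgs and MethodBehaviors.SkipActualMethod are all named above, but the exact member signatures (and the property I read the method name from) are assumptions you should check against your CThru version:

```csharp
// Logs every call to a method named "Save" and still runs the original.
public class LogSaveCallsAspect : Aspect
{
    public override bool ShouldIntercept(DuringCallbackEventArgs e)
    {
        // Decide by method name; you could also inspect the class name
        // or the target instance. (The property name is an assumption.)
        return e.MethodName == "Save";
    }

    public override void MethodBehavior(DuringCallbackEventArgs e)
    {
        Console.WriteLine("Before Save");

        // Invoke the original method ourselves so we can run code after it...
        CallOriginalMethod(e);
        Console.WriteLine("After Save");

        // ...and tell the engine not to invoke it a second time.
        e.MethodBehavior = MethodBehaviors.SkipActualMethod;
    }
}
```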

Some aspects are generic in nature: the behavior is the same, but it is applied in different situations, so it doesn't make sense to hardcode the ShouldIntercept method. For example, the Stub aspect just ignores the method call, but which calls it ignores should be determined by the code that uses the aspect. For this, there's a convenient base class called CommonAspect, which lets you pass the interception decision as a constructor argument. For example, if you want to ignore all methods whose names start with "My", you write:
CThruEngine.AddAspect(new Stub(info => info.MethodName.StartsWith("My")));
That said, let's review some of the built-in aspects:
  • SkipAllByTypeName -- a very simple aspect, designed to illustrate how aspects should be written. It skips all method calls on classes that contain the specified string in their names.
  • MissingMethodAspect -- implements Ruby-like behavior in VB.Net (when you try to execute a non-existent method, it invokes the method called "method-missing" instead).
  • TraceAspect -- traces all intercepted calls to the console (or the supplied TextWriter instance), optionally with the stack trace.
  • TraceResultAspect -- the same, but it traces the results as well. Due to a CThru limitation, calls invoked by the traced call are not traced (unlike with TraceAspect).
  • DebugAspect -- if a debugger is attached, it pauses execution before calling the intercepted method. Very useful for investigating the state of the system at a particular point when you don't have access to the code at that point.
  • Stub -- just ignores the call and returns null. To do: implement returning a custom value.
Several projects are built on top of, or developed using, CThru:
  • SilverUnit, a framework for testing Silverlight applications. Bundled with CThru.
  • Balder, a managed 2D and 3D graphics engine targeting Silverlight, Xna and OpenGL.
  • Ivonna, an ASP.NET unit and integration testing tool.

So, whether you just need some exploratory testing or are building a domain-specific test framework, CThru can save you a lot of effort.

Monday 14 June 2010

On structuring your tests


The ideas presented here are nothing new; however, a recent discussion with Dror Helper made me want to state my views on this subject (and make the whole world agree with me, whatever it takes), because, to be honest, despite my best efforts at meditation, when I see a class called "MyClassTester" it makes me... wanna start a holy war nervous.

The idea is, if you do TDD, you don't start with a class, because if you do, you already have some design before you have written any tests. So, you start with a user story. And let it be a Web app, to be more concrete. Here it goes:
When a user fills the registration form correctly and hits the "Register" button, several things happen:
  • she becomes, in some sense, "registered" in the system;
  • an email is sent, containing the confirmation link;
  • she's redirected to a particular screen.
Now, if I were to start writing unit tests for a particular class, which class would it be? The specs say nothing about classes or anything of the sort. So, first, I decide that it's going to be an MVC app, like all the cool guys are doing these days. And I'm going to start with the most exciting thing: my controller. So, I create a class called AccountControllerTester or something. Next, I have to write a test, so I meditate over the requirements, and here's what I think:
  • "Registered" is kinda vague, but let's do it with the built-in Membership system. I'll inject this service into my controller, mock the service in my test and verify the call to it.
  • Email is easy. A service again; I'll write it later (after lunch), so let's use an interface for the moment.
  • Redirection is just return View(".."). Easy to test.
So you end up with either one test with three asserts (which is a bad practice) or three test methods related to one particular feature. What makes it a mess is that you're going to add more tests: tests related to input validation, and to the other features handled by this controller: password change and retrieval, email confirmation, etc. For every action method, you can have 5-10 tests. On the other hand, the actual confirmation email sending belongs to a different class, so you probably put that test there, together with other email-related tests.

You see, one big problem is that each test class becomes huge. But what's more important, it is hard to tell what your system does. One purpose of tests is documentation, and it should be readable to outsiders. If I want to figure out the behavior of the system during the registration process, where should I look? How do I know that part of it is in AccountControllerTests and another part is in EmailServiceTests?

And still another problem is that you have just written a lot of code but produced zero business value with it. I mean, I can't register at your site yet! And I won't be able to until you write all the pieces and connect them together (for which you probably should write an integration test).

To summarize, what you get with this approach is:
  • premature, rather than test-driven, design;
  • potentially brittle tests (since they are coupled to your classes, you can't refactor easily);
  • huge test classes;
  • documenting your classes rather than your system;
  • no clear relationship between the specs and your tests;
  • no business value until you have all the pieces.

There's a better way

Now, let's do it another way. Let's create a folder called Membership, and inside it a folder called "Registration". We're testing the case when we submit valid registration data, so let's make a test class called "WhenSubmittedValidRegistrationData". The test methods will be called "ShouldSendConfirmationEmail" etc., so the test output will show something like:
WhenSubmittedValidRegistrationData.ShouldSendConfirmationEmail -- passed.
This is quite close to documenting your system!

Now, you can still write it as a unit test for your controller, if you prefer, but I suggest you start with integration tests. I use Ivonna for testing, and it makes things a lot easier: my "integration" is server-side only. In addition, I can use several built-in CThru aspects, like EmailSpy, and a lightweight in-memory database. I initialize the posted values (Arrange) and execute the request (Act) in the FixtureSetup method, and all my test methods are one or two lines of code in which I verify the results.
  • UserShouldBeRegistered -- I just check it via the Membership API.
  • ShouldSendConfirmationEmail -- I use EmailSpy, which prevents the message from being sent and saves it for further investigation, so that I can verify that it contains the confirmation link, is sent to the correct address, etc.
  • ShouldRedirectToTheWelcomeScreen -- I check the Ivonna.Framework.WebResponse.RedirectLocation property.
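The whole fixture is then little more than the naming convention made concrete. In the sketch below, only the fixture name, the test names and the RedirectLocation property come from the text; the helper method, its parameters, the redirect URL and the Membership call are illustrative assumptions, not the exact Ivonna API:

```csharp
[TestFixture]
public class WhenSubmittedValidRegistrationData
{
    private Ivonna.Framework.WebResponse _response;

    [TestFixtureSetUp]
    public void FixtureSetup()
    {
        // Arrange: fill the posted values; Act: execute the request.
        // (ExecuteRegistrationRequest is a hypothetical private helper
        // hiding the nasty details at the end of the class.)
        _response = ExecuteRegistrationRequest("jane", "jane@example.com");
    }

    [Test]
    public void UserShouldBeRegistered()
    {
        Assert.IsNotNull(Membership.GetUser("jane"));
    }

    [Test]
    public void ShouldRedirectToTheWelcomeScreen()
    {
        Assert.AreEqual("/Welcome", _response.RedirectLocation);
    }

    // ShouldSendConfirmationEmail would inspect the message captured
    // by the EmailSpy aspect.
}
```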
Now I can go the full Red-Green-Refactor way. I stuff all the code into my action method until all my tests pass. At this moment my system actually works: I can register a user! But my code is ugly: it is a quick-and-dirty solution written just in order to make my tests pass. I want to make it better. So I refactor it.

It's at the refactoring phase that unit tests can provide big value. Yes, we all know that unit testing can lead us to a much better design. But sometimes it is enough to do it mentally. For example, "how would I unit test my controller?" Oh yes, I should refactor the email-related code into a separate class and use dependency injection. Probably extract an interface like IMessagingService and implement it as EmailService. Whatever. But I actually write a unit test only if I can think of a good name for it. EmailServiceTester totally won't do. TheConfirmationMessageShouldBeActuallySentByEmail is more like it (and the corresponding unit test for the controller is ShouldSendTheConfirmationMessage -- note that it doesn't mention "email"). But in this particular case, it's probably not worth it.

There is another situation: a particular feature produces some output depending on various inputs, and it's not a yes/no situation like the previous case. Take searching, for example. You start with an integration test, like before, and make it work. For one set of search form parameters, you know that the search produces the correct results, which appear in a grid on the search results page. It would be wasteful to write an integration test for every search parameter. So, assume we already have it refactored into several units (note that I don't say "classes"): SearchParamsReader (this is actually the MVC Binder), TheThingThatGivesUsSearchResultsDependingOnTheSearchParams, and SearchResultsWriter (the one responsible for displaying the results). You make three subfolders in your Searching folder (which already contains the integration test), each responsible for the corresponding piece of functionality. Actually, it's probably worth making just one subfolder and testing TheThingThat... Again, you don't put everything for testing TheThing... into one huge class; you create several classes: SearchByKeyword, SearchByMinMaxPrice, etc. This is unit, not integration, testing, and yet it corresponds to user requirements and documents the system behavior. I could refactor TheThing... into several classes, I could rename it, and the tests wouldn't break.
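The resulting folder layout might look something like this. Only the Searching folder and the SearchByKeyword/SearchByMinMaxPrice class names come from the text; the integration test file name and the subfolder name are hypothetical:

```
Searching/
    WhenSearchedWithValidParams.cs    <- the integration test
    SearchResults/                    <- unit tests for TheThingThat...
        SearchByKeyword.cs
        SearchByMinMaxPrice.cs
```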

Here's the recipe for happiness

  1. Make a folder corresponding to a feature, then a subfolder for a sub-feature etc, until you have a concrete action, like registration or search, or maybe a concrete context.
  2. Inside, create a fixture for each combination of context + action (like "submitting a duplicate username") and name it accordingly (WhenSubmittedExistingUsername).
  3. Put all preparation into the FixtureSetup method. You want it to be readable, so refactor all the nasty details into private methods and move them to the end.
  4. Each check should go to a separate Test method. Name them so that they match the requirements.
  5. More granular tests, if you need them, should go to subfolders.
  6. If you can't think of a decent test name (one that doesn't use class/method names), the test is probably not worth writing (but might be useful for driving your design).