DotSVN is an Open Source .NET library that allows you to programmatically access a Subversion repository. While this is an interesting idea, what I found even more interesting was the article's graph showing the growth in usage of Subversion over the last 3.5 years. The rate of adoption is dramatic, and this is just for public Apache servers!
Tuesday, September 25, 2007
Calling GC.Collect() is generally considered unnecessary, and doing so can actually disrupt the smooth functioning of the .NET garbage collector. Dennis Dietrich gives a scenario where it is reasonable to call the garbage collector in your code. His example outlines a situation where you want to test that all references to an object have been released at some point in the program, so that the object can be garbage collected. The catch-22 is that if you maintain a reference to the object to see if it still exists, it can never be collected.
Enter WeakReference. This class allows you to maintain a reference to an object while still letting it be collected if need be. In the time between the object becoming eligible for collection and its actual pickup by the GC, a WeakReference allows you to continue to use the object. Continuing to use an object that's in limbo may seem like a bad idea, since the garbage man could drive by at any time. But here's an interesting scenario where you may wish to keep a WeakReference so that you have the possibility of turning it back into a strong reference (and thus keeping the object from being collected).
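Turning a weak reference back into a strong one boils down to reading the Target property and checking for null. A minimal sketch (the byte array standing in for some expensive-to-rebuild object is just my illustration):

```csharp
using System;

class ResurrectDemo
{
    static void Main()
    {
        var data = new byte[1024];              // some expensive-to-rebuild object
        var weak = new WeakReference(data);

        // Later: try to turn the weak reference back into a strong one.
        // Target returns null if the object has already been collected.
        byte[] strong = weak.Target as byte[];
        if (strong != null)
        {
            // We hold a strong reference again; the GC can no longer collect it.
            Console.WriteLine("Recovered, length = " + strong.Length);
        }
        else
        {
            // Too late - the garbage man already drove by; rebuild it.
            strong = new byte[1024];
        }
    }
}
```

Note that the null check and the assignment must go through Target in one step like this; checking IsAlive first and then reading Target separately leaves a window where the collection could happen in between.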
In our testing case, we wish to do the opposite: make sure we cannot continue to reference the object. After GC.Collect() is called, we check the IsAlive property of our WeakReference.
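The names here are my own, not Dietrich's actual code, but the test scenario sketches out roughly like this:

```csharp
using System;
using System.Runtime.CompilerServices;

class CollectionTest
{
    // Creating the object inside a non-inlined helper keeps the JIT from
    // extending the local variable's lifetime into the caller's frame,
    // which would otherwise keep the object alive during the test.
    [MethodImpl(MethodImplOptions.NoInlining)]
    static WeakReference MakeTracked()
    {
        return new WeakReference(new object());
    }

    static void Main()
    {
        WeakReference tracker = MakeTracked();

        // Force a collection - exactly the call you normally shouldn't make,
        // but here it's the whole point of the test.
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();

        // If some forgotten reference still pointed at the object,
        // IsAlive would be true and the test should fail.
        Console.WriteLine("Collected: " + !tracker.IsAlive);
    }
}
```

If IsAlive comes back true after the collection, something in the program is still holding a reference it shouldn't be.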
On a side note, it's interesting that something so fragile implements ISerializable.
Saturday, September 01, 2007
In the process of looking up some cheat sheets, I found this site. The SQL 2005 sheet was what led me there, but the command-line one caught my eye, since a lot of guys at work use the command line. There are things in here I had never heard of, much less used.
I recently started a new position where they use VS2005 and Test-Driven Development, which is something new to me. At home I've been using VS2008 beta2, and it has been very solid. So, I installed it side-by-side on my company laptop. All of a sudden, one of the unit tests I was working with failed. After a bit of digging, it turned out that it was related to the order of List<T>.Sort().
This particular unit test was for a custom IComparer that cared only about certain high-priority filename extensions. The rest of the extensions were left essentially unsorted, and the filename portions were ignored. The thing was, the unit test expected all of the sorted items to be in one particular order. When I put some debugging in, I found that the order of two of the four "non-high-priority" extensions was swapped when I ran the test.
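The comparer looked something like the following reconstruction (the extension list and class names are my own illustration, not the actual work code). Everything outside the priority list compares equal, which is exactly where an unstable sort is free to put things in any order it likes:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical reconstruction: known "high-priority" extensions sort
// first, in list order; every other filename ties for last place.
class ExtensionPriorityComparer : IComparer<string>
{
    static readonly string[] Priority = { ".sln", ".csproj", ".config" };

    static int Rank(string path)
    {
        string ext = System.IO.Path.GetExtension(path).ToLowerInvariant();
        int i = Array.IndexOf(Priority, ext);
        return i >= 0 ? i : Priority.Length;    // non-priority extensions all rank equal
    }

    public int Compare(string x, string y)
    {
        // The filename portion is deliberately ignored. Equal ranks return 0,
        // so their relative order depends entirely on the sort algorithm.
        return Rank(x).CompareTo(Rank(y));
    }
}

class Demo
{
    static void Main()
    {
        var files = new List<string> { "a.txt", "b.csproj", "c.sln", "d.cs" };
        files.Sort(new ExtensionPriorityComparer());
        // c.sln and b.csproj reliably come first; the relative order of
        // a.txt and d.cs is whatever Quicksort happens to produce.
        Console.WriteLine(string.Join(", ", files.ToArray()));
    }
}
```

A test can legitimately assert that the priority extensions land up front, but pinning down the order of the equal-ranked leftovers is asserting behavior the comparer never promised.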
After some angst and some mild reproaches from my boss ("We usually use VPCs for installing beta software..."), I decided the best course of action was to repave the machine. Later on, buried in a post about how garbage collection is changed in 3.5, I found a reference to "SP1 for the 2.0 framework"!
It kind of makes sense that something would need to be done to allow the new multi-targeting feature, but I had always thought everything was going to be pretty much the same in the 2.0 framework. Turns out that's not exactly the case. I didn't find any conclusive evidence in the short time I spent Reflectoring, but it's clear that something's different, since the size and date of mscorlib are different in 2.0 after 3.5 is installed.
In any event, I pointed out that the framework documentation for 2.0 (and all subsequent versions) says that a Quicksort algorithm is used, which does not guarantee that equal elements will remain in their original order. I finally got a concession that perhaps the tests could be revised. In doing so, I found out that the version control history showed that someone else had found the same problem and had simply changed the expected sort order to match what came up on that test machine!
As my first taste of Test-Driven Development, this was a bit sour. Shouldn't we be thinking about what needs to be tested instead of just writing any old thing? In this case, it was testing that items inserted into the List<T> after an initial sort would also be sorted when you once again issued List<T>.Sort(). WTF?!? If List<T>.Sort is broken and it's up to my puny unit test to find it, let's all just pack up and go home.
If a test produces a "false failure", shouldn't we see if something needs to be revised instead of changing the test to match the new result? If it's too much trouble to make sure the tests test what should be tested and that they run correctly, why do we do TDD?