When you reference the handlers assembly from your test assembly, and that test assembly initializes a bus instance, and the endpoint you send messages to is a publisher, you may see more messages arriving at the endpoint than you expected. Inspecting the messages reveals subscription acknowledgement messages, and you wonder why, as you don't want your test project to subscribe to anything at all! This happens because NServiceBus scans all available assemblies for message handlers; when it finds them it subscribes to the corresponding events, which results in the messages you see in the (journal) queues.
Luckily this behavior can be disabled by calling .DoNotAutoSubscribe() during bus initialization, as shown in the following example:
var bus = Configure.With()
    .DoNotAutoSubscribe() // Needed as otherwise the client will subscribe to event messages, because the handler assembly is referenced and scanned by NServiceBus.
Since NServiceBus v3, queues are no longer created automatically. They are now created during the installation phase, which you usually do not have in web apps or test apps. You can still get the old behavior by triggering the installation during bus creation. Make the following change to trigger this:
.Start(() => Configure.Instance.ForInstallationOn<Windows>().Install());
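Putting both settings together, a minimal initialization could look like the sketch below. The builder, serializer, and transport choices are assumptions; substitute whatever your project already uses.

```csharp
// Sketch of a test-project bus setup for NServiceBus v3 (builder/serializer/transport assumed).
var bus = Configure.With()
    .DefaultBuilder()
    .XmlSerializer()
    .MsmqTransport()
    .UnicastBus()
        .DoNotAutoSubscribe() // scanned handler assemblies no longer cause subscriptions
    .CreateBus()
    .Start(() => Configure.Instance.ForInstallationOn<Windows>().Install()); // creates the queues
```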
I have been using git-tfs for a couple of weeks now and ran into the issue that using the checkintool results in one TFS changeset that combines all previous commits. This can often be useful, but in my case I would like to push each individual local git commit to TFS separately. This is exactly what rcheckin does; this way your git commit history is mirrored to TFS. As a reminder:
git tfs checkin
Performs one check-in containing all previous commits, where either the default commit message can be used (a concatenation of all git commit comments) or a new custom comment.
git tfs checkintool
Same as git tfs checkin, but now you can easily choose which changed files to commit via the TFS check-in dialog. It shows the concatenation of all git commit comments as the commit comment, but this can easily be altered in the GUI.
git tfs rcheckin
Commits each previous git commit to TFS, reusing the git comment for each commit.
Take a look at the following MVC C# code:
// Cache headers (SetMaxAge, SetETagFromFileDependencies) are set on Response.Cache before this.
return File(filename, "video/mp4");
My intention was: please cache this file for one hour (SetMaxAge), then check whether the file is still valid (SetETagFromFileDependencies) and, if it is, cache it again for one hour.
However, it turns out that MVC thinks differently :-). When the item is expired, MVC returns status code 200 even though the client submits a matching If-None-Match header. In such cases the web server is allowed to respond with 304 and update the item with new information passed in the response headers.
It could be that I have forgotten something in the code above, but the following is the workaround I created for this situation. It needs a custom ETag, so you cannot make use of SetETagFromFileDependencies. The method basically does an ETag comparison and responds with a 304 if they match. If they do not match, the method returns null but has set the above-mentioned headers.
ActionResult EtagFix(string etagResponse, HttpCacheability cachability, TimeSpan maxAge)
{
    var cache = Response.Cache;
    cache.SetCacheability(cachability);
    cache.SetMaxAge(maxAge);
    cache.SetETag(etagResponse);
    if (!etagResponse.Equals(Request.Headers["If-None-Match"])) return null;
    Response.StatusCode = 304;
    Response.StatusDescription = "Not Modified";
    return new EmptyResult();
}
This method should be pasted into your controller and then it should just work. The reason I return null is that you can do the following nifty trick:
return EtagFix("EtagValue", HttpCacheability.Public, TimeSpan.FromHours(1)) ?? File(Server.MapPath("~/Content/mylargefile.dat"), "application/octet-stream");
The reason this is cool is that the File(...) call will ONLY be executed when EtagFix returns null. This avoids creating 'expensive' action result implementations that do work they do not have to do.
Today I got the following Subversion message via TortoiseSVN when I wanted to commit some changes:
\ is not a working copy
It took me a while to figure this out, but it has to do with the fact that I subst the root of the branch to my S: drive. When I went to my user folder (c:\users\ramon\src\*) and performed the commit there, it just worked as expected.
The reason for the substitution is that when I compile the sources, the S: drive paths are embedded into the PDB files. When another developer then attaches the debugger to any of the applications, he only needs the same substitution to find all source code files. Besides that, it doesn't matter which project I work on or on which drive or folder it lives: the paths stay the same. This is especially useful as I am a console and keyboard junkie.
A long time since my previous blog, as nowadays I often tweet my ramblings, but this one does not fit in a tweet :-)
Sometimes you are working with strong-named assemblies, and when you have unit tests and want to access internals, you have to use the InternalsVisibleTo assembly attribute. So to discover the public key token I ran "sn.exe -tp project.publickey", which prints the public key (long) and the public key token (short).
Microsoft (R) .NET Framework Strong Name Utility Version 4.0.30319.1
Copyright (c) Microsoft Corporation. All rights reserved.
Public key is
Public key token is 67178dccc283ce39
So I used the following attribute:
[assembly: InternalsVisibleTo("My.Project.Tests, PublicKeyToken=67178dccc283ce39")]
And got this nice compiler error:
Friend assembly reference is invalid. Strong-name signed assemblies must specify a public key in their InternalsVisibleTo declarations.
Then I pasted the long variant into the InternalsVisibleTo attribute and it compiled, but I was 100% sure the short version had to work. After investigation, there appear to be two ways to pass the required strong-name public key information: you can pass either the whole public key or just the public key token.
When both assemblies are signed then you need to pass the full PublicKey.
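So for a signed test assembly the attribute has to carry the full public key printed by sn.exe. The hex string below is a truncated placeholder, not a real key:

```csharp
// Full public key required because the friend assembly is strong-name signed.
// Paste the complete key that "sn.exe -tp" prints; the value here is a placeholder.
[assembly: InternalsVisibleTo("My.Project.Tests, PublicKey=0024000004800000...")]
```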
Today I was reminded again that it is sometimes necessary to adjust thread pool settings, this time to test some possible connection issues: I needed to open a number of connections simultaneously and also use them in parallel.
The test system is a virtual machine with only one core, so the defaults that .NET uses are based on that. I first thought the problems were caused by NUnit, but pretty soon found out that they had to do with the thread pool. When I queried the current values via ThreadPool.GetMinThreads, it told me that the thread pool used just one thread as a minimum. Even after forcing that with ThreadPool.SetMinThreads to a thread count where I could test my scenario, I still had issues.
I am now using a custom Parallel.For which I adjusted so that I can set a different ChunkSize (set to 1) and ThreadCount for testing purposes.
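A small sketch of what that inspection and override look like; the target of 32 worker threads is an arbitrary test value, not a recommendation:

```csharp
using System;
using System.Threading;

class ThreadPoolSetup
{
    static void Main()
    {
        int workers, completionPorts;
        ThreadPool.GetMinThreads(out workers, out completionPorts);
        Console.WriteLine("min worker threads: {0}", workers); // 1 per core by default

        // Raise the minimum so the pool spins threads up immediately under load
        // instead of slowly injecting new threads one at a time.
        ThreadPool.SetMinThreads(32, completionPorts);
    }
}
```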
At my job (Company Webcast) we have several APIs for our customers to use. One of those APIs allows webcasts to be created and modified, and that interface has data containing dates. Our platform works with UTC DateTimes, as we are an internationally operating company, so it is logical for us to store those as UTC.
We use WCF, and yesterday we had a very weird issue where calls failed. After investigating the issue we found out that the cause was how the DateTime got supplied to our service in the message.
The following values are valid in an XML message:
A WCF service accepts both values, but it treats them differently, which I did not expect! The first value became 2010-05-26 17:00 with DateTime.Kind set to Local, and the second became 2010-05-26 15:00 with DateTime.Kind set to Utc. This amazed me a bit, as I assumed that both would always result in either a UTC or a Local DateTime.
The reason it fails is that another argument states the time zone from which the live webcast will be held. This is used in combination with the DateTime to convert it to a local time to inform the viewer about the conditions of the event. This code assumed that the incoming DateTime value would always be of kind Utc.
So now our front-end APIs convert incoming DateTime values to DateTime values with kind Utc.
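The difference can be reproduced outside WCF with DateTimeStyles.RoundtripKind, which mirrors how the xsd:dateTime zone designator maps to DateTime.Kind. The literal values below are my own examples, since the post's original values are not shown:

```csharp
using System;
using System.Globalization;

class KindDemo
{
    static void Main()
    {
        // An explicit offset yields Kind = Local, a trailing 'Z' yields Kind = Utc.
        var withOffset = DateTime.Parse("2010-05-26T17:00:00+02:00",
            CultureInfo.InvariantCulture, DateTimeStyles.RoundtripKind);
        var withZulu = DateTime.Parse("2010-05-26T15:00:00Z",
            CultureInfo.InvariantCulture, DateTimeStyles.RoundtripKind);

        Console.WriteLine(withOffset.Kind); // Local
        Console.WriteLine(withZulu.Kind);   // Utc

        // Normalizing at the service boundary removes the ambiguity.
        DateTime normalized = withOffset.ToUniversalTime();
        Console.WriteLine(normalized.Kind); // Utc
    }
}
```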
This can also be a problem when you persist a DateTime to, for example, a database and your storage logic does not convert it from/to UTC or Local depending on your needs. We use NHibernate for storage, and it does not by default have a way to set UTC/Local on a <property> definition. This can really become a problem when time is part of your business logic, as it is in ours: we use it to schedule tasks, and it is very important to know whether a time is UTC or not, especially when something happens on the other side of the world.
I often hear that NHibernate is not usable for selecting records because it is supposedly not possible to perform a not-equal comparison with the criteria API. I must admit that it took me a while before I found out, but it really is quite logical once you know. Let's take a look at the following example:
// Select entries where name = "Ramon"
var criteria = DetachedCriteria.For<MyCoolClass>()
    .Add(Restrictions.Eq("Name", "Ramon"));
Your first reaction is to look for the opposite of Restrictions.Eq, but then you are amazed that it does not exist. The solution is so simple that it will almost embarrass you:
// Select entries where name != "Ramon"
var criteria = DetachedCriteria.For<MyCoolClass>()
    .Add(!Restrictions.Eq("Name", "Ramon"));
Do you see the difference? I added an exclamation mark before Restrictions.Eq to invert the restriction operator.
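The exclamation mark works because NHibernate's criterion types overload the ! operator. If you find the bang easy to overlook, the explicit equivalent is Restrictions.Not (property name assumed):

```csharp
// Equivalent to !Restrictions.Eq(...): wraps the criterion in a NOT.
var criteria = DetachedCriteria.For<MyCoolClass>()
    .Add(Restrictions.Not(Restrictions.Eq("Name", "Ramon")));
```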
So now you know and probably never forget ;-)
I just read a very cool NHibernate trick, mentioned by Ricardo Peres, to let your application start faster:
Configuration cfg = new Configuration().Configure();
IFormatter serializer = new BinaryFormatter();
using (Stream stream = File.OpenWrite("Configuration.serialized"))
{
    serializer.Serialize(stream, cfg);
}
// On the next startup, skip the expensive Configure() call and deserialize instead:
using (Stream stream = File.OpenRead("Configuration.serialized"))
{
    cfg = serializer.Deserialize(stream) as Configuration;
}
Yet again a VST did not load in my Ableton Live DAW. This time it is the Amplitube 2 VST by IK Multimedia. It did work if I launched Ableton with administrator privileges, but we all know that we shouldn't do that if it is not necessary, and running as administrator just to load a VST sure is not very useful. I launched the excellent Process Monitor, filtered for failing operations coming from the Ableton process, and found out that Amplitube 2 is writing to the file c:\Windows\msocreg32.dat.
It is a very nasty move by IK Multimedia to put that file in the Windows folder, as normal users cannot create or modify files there. The file is created by Amplitube, so it must be some sort of timer file for the product evaluation.
To fix this without running as administrator and with UAC enabled do the following:
- Launch your DAW by right clicking its icon and select "Run as administrator".
- Rescan your VST folder for new plugins. Amplitube gets loaded and the above mentioned file is created. You should be able to use the VST.
- Quit your DAW.
- Go to c:\Windows with file explorer.
- Find the file "msocreg32.dat", right click it and select properties from the context menu.
- Go to the security tab
- Click "Edit". A UAC box will appear to ask you for permission; allow it.
- Select the "Users" group.
- Set a checkmark next to "Modify" in the lower list.
- Click twice on "OK" to close both dialogs.
- Launch your DAW as you normally would.
Now you are running as a normal user with a functioning Amplitube VST. This happens with more VSTs: for example, the Native Instruments collection required quite a lot of folders to be writable by the user. Because the files are located in the Program Files folder, this is not allowed by default. Just apply the above-mentioned modification to *only* the "c:\Program Files (x86)\Native Instruments" folder and there are no more popups asking you to select another folder to write to.
The tool called Process Monitor by Sysinternals really helps to identify such permission problems, so download it if you have similar problems with other VSTs in your favorite DAW.
I was having a problem where NHibernate did not automatically delete child records when a collection was emptied by calling IList.Clear(), as in the following code example:
var s = GetSession();
var parent = s.Get<Parent>(1);
parent.Childs.Clear(); // children disappear from the collection but no DELETE statements are issued
What did work, of course, was code like the following before executing Clear, which marked the records for deletion so that NHibernate executed the correct delete statements when the session was closed or flushed:
foreach(var c in parent.Childs) s.Delete(c);
I searched the internet for quite a while and played around with the cascade and inverse attributes in the .hbm files, as I knew it had to be an error in the configuration.
<bag name="Topics" cascade="all" inverse="true">
After searching for quite some time I found that the problem was cascade="all", which should have been cascade="all-delete-orphan". When I read that on a forum I had a very big WTF moment. I *really* assumed that all would delete the orphans, as that is what the keyword implies: that it does *all*, while in reality it does everything except deleting orphans.
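With the orphan-deleting cascade, the mapping from above becomes the following (the key and one-to-many child elements are omitted here, as in the original):

```xml
<bag name="Topics" cascade="all-delete-orphan" inverse="true">
  <!-- key / one-to-many elements as before -->
</bag>
```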
So I would like to suggest that the (N)Hibernate team change the names of the cascade values, or just ditch the all value.
My MIDI hardware is working nicely in my DAW home studio, but today I noticed that I do not have the ability to change the default MIDI out device in Windows 7. After googling around, it seems that this was also the case in Windows Vista. But luckily there are options to configure the default out device!
The first that I found was the Windows Vista MIDI Mapper control panel and, minutes later, the Vista MIDI fix. The control panel application fixed my problem but only lists hardware MIDI devices, like my MIDI USB keyboards and the hardware MIDI output of my Creative card, whereas the Vista MIDI fix application lists more MIDI out options, but I haven't tested those yet.
Both applications run without any problems here on my Windows 7 RC x64 installation.
I tried to install the VirtualBox guest additions in my OpenSUSE guest but had some problems. Mouse integration worked, but display resizing did not. It turned out that I just didn't read the output from the additions package well:
Remove all "Modes" lines from the "Screen" section and any Option "PreferredMode" lines from "Monitor" sections.
I did that by removing the Option from the Monitor section and all Modes lines from the subsections of the Screen section, and it is now working as it should, with display resizing. The only thing that does not work is clipboard sharing.
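For reference, the edited xorg.conf fragments then look roughly like this (section identifiers are assumptions; yours will differ):

```
Section "Monitor"
  Identifier "Monitor[0]"
  # Option "PreferredMode" line removed
EndSection

Section "Screen"
  Identifier "Screen[0]"
  SubSection "Display"
    # Modes lines removed so the additions can pick the resolution dynamically
  EndSubSection
EndSection
```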
I just saw a cool trick done in the finalizer of a class. When a class implements IDisposable, its creator needs to call Dispose when it is finished with it. Lots of developers forget this, and that usually results in system resources that stay locked until the garbage collector thinks it's time to do its work.
The code construction I saw was:
I have never thought of doing this, but it makes sense to just add an assert to a finalizer to get notified that you didn't dispose the object. The finalizer will never be called when the object is disposed correctly, because of the GC.SuppressFinalize(this) statement that should be executed when IDisposable.Dispose is called on the object.
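A minimal sketch of the construction (class name and resource are made up for illustration):

```csharp
using System;
using System.Diagnostics;

class ResourceHolder : IDisposable
{
    public void Dispose()
    {
        // ... release the expensive resource here ...

        // Dispose ran, so the finalizer (and its assert) will never execute.
        GC.SuppressFinalize(this);
    }

    // Only runs when the object was garbage collected without being disposed.
    ~ResourceHolder()
    {
        Debug.Assert(false, "ResourceHolder was not disposed!");
    }
}
```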
It could be that you are getting this in a service, where an assert doesn't make any sense; in that case you could just log an error instead.