# Friday, 21 January 2011

This blog post is a repurposing of content I created for a presentation I gave on DVCS on 01-11-2011 to the Twin Cities Developer Group.


What is DVCS?

Source code version control without a central master server repository (think peer to peer).

Market leaders: git, hg / Mercurial

New competitor with potential: SourceGear Veracity



Is DVCS better than non-distributed source control (e.g., Subversion/SVN, VSS, TFS)?

Usually, yes, but there are exceptions.  Don't underestimate the learning curve.



What are the Disadvantages of DVCS?

Lack of mature graphical front ends (although people seem okay with TortoiseHg/TortoiseGit); the majority of DVCS users appear to use the command line.

The learning curve for DVCS is different.

There are different complexities, such as designing workflows and backup processes.

Many people struggle with the concept of "no canonical master".

Large binary files don't work well in a DVCS where everyone has a local copy of basically the entirety of every version of every binary file.

There are currently tools to migrate to DVCS, but not necessarily tools to migrate away from DVCS.


Advantages of DVCS – Better Implementation

What could non-distributed version control systems do better to compete with DVCS?

  • Implement better merging
  • Implement better handling of versioning directories

What are some of the reasons that merging is better in DVCS?

  • Change sets
  • Likely more version history available to work with since smaller check-ins are better supported
  • Better ancestor tracking/usage
  • Better directory revision management
  • Better file rename detection

The battle is not DVCS (git/hg) vs. SVN, but that is how it is often characterized.  SVN is lacking in some areas compared to its (sometimes commercial, sometimes more mature) non-distributed version control competitors.


Advantages of DVCS – Better Design

What can’t non-distributed version control systems do?

  • Work "offline" (faster history access, faster commits)
  • Offer a better peer-to-peer experience for integration testing
  • Let QA manage their own repository
  • Commit a logical unit of work without tests passing or the code even compiling


DVCS Trends

Where is DVCS catching up?

  • GUIs
  • ALM integration
  • Hard locks
  • User/role permissions
  • Centralized admin, etc.

DVCS in corporations is very interesting to me personally.  Veracity will likely solve those problems better than git/hg ever will, since git and hg are primarily targeted at open source projects, not corporate/enterprise environments.

Obstacles to an enterprise DVCS - http://www.ericsink.com/articles/vcs_trends.html


Some differences between Mercurial and Veracity


Friday, 21 January 2011 14:58:14 (GMT Standard Time, UTC+00:00)  #    Disclaimer  |  Comments [0]  | 
# Thursday, 20 January 2011

Please note: The following information is specific to the 01-19-2011 Veracity nightly build.  I may come back from time to time, as the Veracity command API solidifies, to update this post with newer and more accurate information.

SourceGear Veracity tends to be more like Mercurial / hg than like git.  There is a ton of documentation on Mercurial, but not much available for Veracity at the moment.

While learning Veracity, I often found myself reading docs for Mercurial and then figuring out how Veracity was different through trial and error.

This blog post contains my notes on some of the things I've learned.

It's often possible to replace an hg command one-for-one with a vv command (e.g. hg status ~= vv status), and the results will be meaningfully similar if not exactly the same.


Generally, step one is to create your repo.  In Mercurial, this is done by creating a new directory and executing "hg init" within that directory.  For Veracity, you do the same basic thing, but you also need to give the repo a name as a parameter to init:

vv init repo1

After initializing a repository, the first logical step is to set up your user.  In Mercurial, this is done by creating an .hgrc file in the appropriate location.  In Veracity, you must first create the user if they don't yet exist (use "vv users" to show existing users) using:

vv createuser myemail@example.com

Once the user exists, you set the current user using:

vv whoami myemail@example.com

You can use "vv whoami" at any time to see who Veracity thinks you currently are.


Once you have a repository, you can start up a web server to view it (although there obviously won't be much to look at if you just initialized it).  For Mercurial, this is simply:

hg serve

For Veracity, the "vv serve" command requires you first run "vv localsettings set server/files <path>" to provide a path to the website files.  I suspect Mercurial does not need this "set server files" step because the installer takes care of setting this path somehow.


The status command works the same, but the output is visibly different (git's output is visibly different for the status command as well).


The add command requires a file or directory name in Veracity, whereas Mercurial will add all files if you don't specify a parameter.

Like Mercurial (and unlike git), you only need to use the add command on new files (modified existing files are automatically part of the change set by default).


The commit command works similarly, although hg has a nice feature where it pops up an editor for the commit comment.  Veracity works more like git in that you provide the commit message with a command-line parameter (which is also possible with hg).


A basic no parameter diff works very similarly between Veracity and Mercurial.


The branch and branches commands with no parameters work exactly the same.

Creating a new branch in Veracity requires adding a "--new" parameter (Mercurial doesn't need/use "--new"):

vv branch branch_name_here --new

One interesting branching difference is that the default branch in Mercurial is named "default", but in Veracity (as well as git), the default branch is named "master".


To switch branches in Veracity, you use the branch or update (with -b parameter before the branch name) command.

vv update -b master -aka- vv branch master
vv update -b branch1 -aka- vv branch branch1

To switch branches in hg, you use the update or checkout command.

hg update default
hg update branch1

(To switch branches in git, you use the checkout command.)


Cloning a repository from a web server is different.


hg clone http://localhost:8000/ .


vv clone http://localhost:8080/repos/corprepo localdevrepo
vv checkout localdevrepo


Pushing and pulling from a repository is basically the same in the no parameter scenario:

>hg pull
pulling from http://localhost:8000/
searching for changes
no changes found

>hg push
pushing to http://localhost:8000/
searching for changes
no changes found

>vv pull
Pulling from http://localhost:8080/repos/corprepo:
Retrieving dagnodes...done
Retrieving blobs...done
Committing changes...done
Merging databases...done

>vv push
Pushing to http://localhost:8080/repos/corprepo:
Sending dagnodes...done
Sending blobs...done
Committing changes...done
Cleaning up...done

Veracity currently only supports pushing and pulling from a cloned repo to its "parent", but SourceGear is actively working on enhancements in this area.

Thursday, 20 January 2011 18:11:41 (GMT Standard Time, UTC+00:00)  #    Disclaimer  |  Comments [0]  | 
# Monday, 29 November 2010

In May of 2010, Google released a new 2.5 version of the Google Checkout API.  The most compelling feature of this version of the API is that it no longer requires https/SSL to retrieve notifications.  They didn't update the .NET sample code to show how to use the new API though (it's now November, so it's been six months).

Below I have not only included the missing sample code for notification handling with GCO 2.5, but I did it using ASP.NET MVC instead of ASP.NET Classic WebForms.  I believe this is the first published Google Checkout ASP.NET MVC sample.

You will want to check the Integration Console in the Google Checkout Sandbox website constantly while testing.  Some errors will only show up there.  You will also want to catch and log errors somewhere, like a database, which the sample below does not do.  You may also want to send yourself an e-mail when an order comes through; you'll have to code that feature yourself, but it's not too difficult.

First, download the Google Checkout .NET DLLs (at least the version published 10-16-2010)
Unzip the Google Checkout .NET DLLs and put them somewhere you can find them later

Next, Start Visual Studio 2008

File -> New -> Project
 Visual C# -> Web -> ASP.NET MVC Web Application
 No, do not create a unit test project

In Solution Explorer -> Right Click References -> Add Reference...
 Select the "GCheckout.dll" you downloaded and unzipped above

Open web.config for editing:
 Replace "<appSettings/>" with:

<appSettings>
  <add key="GoogleMerchantID" value="YourMerchantID" />
  <add key="GoogleMerchantKey" value="YourMerchantKey" />
  <add key="GoogleEnvironment" value="Sandbox" />
</appSettings>

Be sure to substitute in your proper sandbox merchant ID and key.

In Solution Explorer -> Right Click Controllers -> Add -> Controller...
 Controller Name: GcoNotifHandlerController

Open Global.asax.cs for editing:
 Add a new route before the default route:

  routes.MapRoute(
    "GcoNotifHandler",                 // route name
    "GcoNotifHandlerForSandbox",       // URL (matches the callback URL configured below)
    new { controller = "GcoNotifHandler", action = "Index", id = "" }
  );

In Solution Explorer -> Right Click Models -> Add -> Class...
 Name: GoogleCheckoutHelper.cs

Add the following new static methods to GoogleCheckoutHelper:

public static string GetGoogleOrderNumber(string serialNumber)
{
  return serialNumber.Substring(0, serialNumber.IndexOf('-'));
}

private static void HandleAuthorizationAmountNotification(GCheckout.AutoGen.AuthorizationAmountNotification inputAuthorizationAmountNotification)
{
  // TODO: Add custom processing for this notification type
}

private static void HandleChargeAmountNotification(GCheckout.AutoGen.ChargeAmountNotification inputChargeAmountNotification)
{
  // TODO: Add custom processing for this notification type
}

private static void HandleNewOrderNotification(GCheckout.AutoGen.NewOrderNotification inputNewOrderNotification)
{
  // Retrieve data from MerchantPrivateData
  GCheckout.AutoGen.anyMultiple oneAnyMultiple = inputNewOrderNotification.shoppingcart.merchantprivatedata;
  XmlNode[] oneXmlNodeArray = oneAnyMultiple.Any;
  string hiddenMerchantPrivateData = oneXmlNodeArray[0].InnerText;
  // TODO: Process the MerchantPrivateData if provided

  foreach (GCheckout.AutoGen.Item oneItem in inputNewOrderNotification.shoppingcart.items)
  {
    // TODO: Get MerchantItemId from shopping cart item (oneItem.merchantitemid) and process it
  }

  // TODO: Add custom processing for this notification type
}

private static void HandleOrderStateChangeNotification(GCheckout.AutoGen.OrderStateChangeNotification inputOrderStateChangeNotification)
{
  // Charge Order If Chargeable
  if ((inputOrderStateChangeNotification.previousfinancialorderstate.ToString().Equals("REVIEWING")) && (inputOrderStateChangeNotification.newfinancialorderstate.ToString().Equals("CHARGEABLE")))
  {
    GCheckout.OrderProcessing.ChargeOrderRequest oneChargeOrderRequest = new GCheckout.OrderProcessing.ChargeOrderRequest(inputOrderStateChangeNotification.googleordernumber);
    GCheckoutResponse oneGCheckoutResponse = oneChargeOrderRequest.Send();
  }

  // Update License If Charged
  if ((inputOrderStateChangeNotification.previousfinancialorderstate.ToString().Equals("CHARGING")) && (inputOrderStateChangeNotification.newfinancialorderstate.ToString().Equals("CHARGED")))
  {
    // TODO: For each shopping cart item received in the NewOrderNotification, authorize the license
  }

  // TODO: Add custom processing for this notification type
}

private static void HandleRiskInformationNotification(GCheckout.AutoGen.RiskInformationNotification inputRiskInformationNotification)
{
  // TODO: Add custom processing for this notification type
}

public static void ProcessNotification(string serialNumber)
{
  string googleOrderNumber = GetGoogleOrderNumber(serialNumber);

  List<string> listOfGoogleOrderNumbers = new List<string> { googleOrderNumber };

  GCheckout.OrderProcessing.NotificationHistoryRequest oneNotificationHistoryRequest = new GCheckout.OrderProcessing.NotificationHistoryRequest(listOfGoogleOrderNumbers);

  GCheckout.OrderProcessing.NotificationHistoryResponse oneNotificationHistoryResponse = (GCheckout.OrderProcessing.NotificationHistoryResponse)oneNotificationHistoryRequest.Send();

  // oneNotificationHistoryResponse.ResponseXml contains the complete response

  // Iterate through the notification history for this order looking for the notification that exactly matches the given serial number
  foreach (object oneNotification in oneNotificationHistoryResponse.NotificationResponses)
  {
    if (oneNotification.GetType().Equals(typeof(GCheckout.AutoGen.NewOrderNotification)))
    {
      GCheckout.AutoGen.NewOrderNotification oneNewOrderNotification = (GCheckout.AutoGen.NewOrderNotification)oneNotification;
      if (oneNewOrderNotification.serialnumber.Equals(serialNumber))
        HandleNewOrderNotification(oneNewOrderNotification);
    }
    else if (oneNotification.GetType().Equals(typeof(GCheckout.AutoGen.OrderStateChangeNotification)))
    {
      GCheckout.AutoGen.OrderStateChangeNotification oneOrderStateChangeNotification = (GCheckout.AutoGen.OrderStateChangeNotification)oneNotification;
      if (oneOrderStateChangeNotification.serialnumber.Equals(serialNumber))
        HandleOrderStateChangeNotification(oneOrderStateChangeNotification);
    }
    else if (oneNotification.GetType().Equals(typeof(GCheckout.AutoGen.RiskInformationNotification)))
    {
      GCheckout.AutoGen.RiskInformationNotification oneRiskInformationNotification = (GCheckout.AutoGen.RiskInformationNotification)oneNotification;
      if (oneRiskInformationNotification.serialnumber.Equals(serialNumber))
        HandleRiskInformationNotification(oneRiskInformationNotification);
    }
    else if (oneNotification.GetType().Equals(typeof(GCheckout.AutoGen.AuthorizationAmountNotification)))
    {
      GCheckout.AutoGen.AuthorizationAmountNotification oneAuthorizationAmountNotification = (GCheckout.AutoGen.AuthorizationAmountNotification)oneNotification;
      if (oneAuthorizationAmountNotification.serialnumber.Equals(serialNumber))
        HandleAuthorizationAmountNotification(oneAuthorizationAmountNotification);
    }
    else if (oneNotification.GetType().Equals(typeof(GCheckout.AutoGen.ChargeAmountNotification)))
    {
      GCheckout.AutoGen.ChargeAmountNotification oneChargeAmountNotification = (GCheckout.AutoGen.ChargeAmountNotification)oneNotification;
      if (oneChargeAmountNotification.serialnumber.Equals(serialNumber))
        HandleChargeAmountNotification(oneChargeAmountNotification);
    }
    else
    {
      throw new ArgumentOutOfRangeException("Unhandled Type [" + oneNotification.GetType().ToString() + "]!; serialNumber=[" + serialNumber + "];");
    }
  }
}

Open Controllers\GcoNotifHandlerController.cs for editing:
 Replace the Index method with:

public ActionResult Index()
{
  string serialNumber = null;

  // Receive Request
  System.IO.Stream requestInputStream = Request.InputStream;
  string requestStreamAsString = null;
  using (System.IO.StreamReader oneStreamReader = new System.IO.StreamReader(requestInputStream))
  {
    requestStreamAsString = oneStreamReader.ReadToEnd();
  }

  // Parse Request to retrieve serial number
  string[] requestStreamAsParts = requestStreamAsString.Split(new char[] { '=' });
  if (requestStreamAsParts.Length >= 2)
  {
    serialNumber = requestStreamAsParts[1];
  }

  // Call the NotificationHistory Google Checkout API to retrieve the notification for the given serial number and process the notification
  GoogleCheckoutHelper.ProcessNotification(serialNumber);

  // Acknowledge the notification so Google stops resending it
  string notifAckBegin = "<notification-acknowledgment xmlns=\"http://checkout.google.com/schema/2\"";
  string notifAckSerialNumAttrBegin = " serial-number=\"";
  string notifAckSerialNumAttrEnd = "\"";
  string notifAckEnd = " />";
  return this.Content(notifAckBegin + notifAckSerialNumAttrBegin + serialNumber + notifAckSerialNumAttrEnd + notifAckEnd);
}

That's all there is for building the basic notification handling web service.
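The serial-number plumbing above is plain string manipulation, so it can be sanity-checked outside of ASP.NET and the GCheckout library.  Here is a minimal, self-contained sketch; the class and helper names are mine for illustration, while the "serial-number=<value>" body format and the acknowledgment XML come from the steps above:

```csharp
using System;

public static class NotificationParsingSketch
{
    // Extract the serial number from a GCO 2.5 notification body like
    // "serial-number=841171949013218-00005-6"
    public static string ParseSerialNumber(string requestBody)
    {
        string[] parts = requestBody.Split('=');
        return parts.Length >= 2 ? parts[1] : null;
    }

    // The Google order number is the portion of the serial number before the first '-'
    public static string GetGoogleOrderNumber(string serialNumber)
    {
        return serialNumber.Substring(0, serialNumber.IndexOf('-'));
    }

    // Build the <notification-acknowledgment /> response Google expects
    public static string BuildAcknowledgment(string serialNumber)
    {
        return "<notification-acknowledgment xmlns=\"http://checkout.google.com/schema/2\""
             + " serial-number=\"" + serialNumber + "\" />";
    }

    public static void Main()
    {
        string body = "serial-number=841171949013218-00005-6";
        string serial = ParseSerialNumber(body);
        Console.WriteLine(GetGoogleOrderNumber(serial));
        Console.WriteLine(BuildAcknowledgment(serial));
    }
}
```

Putting these three steps behind small methods like this also makes the controller trivial to unit test without standing up a web server.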

You need to publish your notification handling web service to a public website and configure your Google Checkout sandbox account (at https://sandbox.google.com/checkout/sell/ -> Settings -> Integration -> API Callback URL) to use the URL of your web service (http://www.<yourdomain>.com/GcoNotifHandlerForSandbox/).

The Callback contents should be set to "Notification Serial Number" and the API Version should be set to "Version 2.5".

To test your notification web service, you'll need a website with a Google Checkout button.  Instructions for building that are available here:


(Note: I have successfully tested this with both ASP.NET MVC 1 & 2 for Visual Studio 2008.)

Good luck!

Monday, 29 November 2010 12:06:24 (GMT Standard Time, UTC+00:00)  #    Disclaimer  |  Comments [0]  | 
# Saturday, 07 August 2010

Consuming OData from Silverlight 4 can be a very frustrating experience for people like me who are just now joining the Silverlight party.

When it doesn't work, it can fail silently and/or with incorrectly worded warnings.

This is a perfect example of where a blog post can hopefully fill in some of the gaps left by the Microsoft documentation.

First, let's setup a very simple Silverlight 4 application against the odata.org Northwind service.

Note: You may need to install the Silverlight 4 Tools for VS 2010 for this sample.

In Visual Studio 2010:

File -> New -> Project
 Visual C# -> Silverlight -> Silverlight Application -> OK
  New Silverlight Application wizard
   Uncheck "Host the Silverlight application in a new website"
   Silverlight Version: Silverlight 4

Right Click References in Solution Explorer -> Add Service Reference...
 Address: http://services.odata.org/Northwind/Northwind.svc/
 Namespace: RemoteNorthwindServiceReference

From the toolbox, drag and drop a DataGrid from the "Common Silverlight Controls" section onto MainPage.xaml
 Change the AutoGenerateColumns property to True
 Change the Margin to 0,0,0,0
 Change the Width to 400

In MainPage.xaml.cs:
 Add the following using statement:

  using System.Data.Services.Client;

 Add the following class variable:

  private DataServiceCollection<RemoteNorthwindServiceReference.Shipper> _shippers;

 Add the following to the bottom of the MainPage constructor method:

  RemoteNorthwindServiceReference.NorthwindEntities remoteNorthwindService =
    new RemoteNorthwindServiceReference.NorthwindEntities(
      new Uri("http://services.odata.org/Northwind/Northwind.svc/"));

  _shippers = new DataServiceCollection<RemoteNorthwindServiceReference.Shipper>();
  _shippers.LoadCompleted +=
new EventHandler<LoadCompletedEventArgs>(_shippers_LoadCompleted);

  var query = from shippers in remoteNorthwindService.Shippers select shippers;

  _shippers.LoadAsync(query);

 Add the following new method:

  private void _shippers_LoadCompleted(object sender, LoadCompletedEventArgs e)
  {
    if (_shippers.Continuation != null)
      _shippers.LoadNextPartialSetAsync();  // more pages of results remain
    else
      this.dataGrid1.ItemsSource = _shippers;
  }

Debug -> Start Without Debugging
 You should get a dialog box titled "Silverlight Project" that says:
  "The Silverlight project you are about to debug uses web services.  Calls to the web service will fail unless the Silverlight project is hosted in and launched from the same web project that contains the web services.  Do you want to debug anyway?"
 Click "Yes"

This application runs fine.  The warning dialog message was obviously inaccurate and misleading.

I wish I had understood what this warning dialog was trying to say, but since the remote OData service was working, I had no real choice but to ignore the dialog.  Ignoring it would come back to haunt me when I tried to use the same application to call a local OData service.

I believe this dialog is trying to warn you that Silverlight has special constraints when calling web services, but it's still not clear to me what those are.  Proceed cautiously.

Here is one way to successfully call a local OData web service from Silverlight 4 in Visual Studio 2010:

File -> New -> Project
 Visual C# -> Silverlight -> Silverlight Application -> OK
  New Silverlight Application wizard
   OK (Accept the defaults: ASP.NET Web Application Project, Silverlight 4, etc.)

Right Click SilverlightApplication<number>.Web in Solution Explorer -> Add New Item...
 Visual C# -> Data -> ADO.NET Entity Data Model -> Add
  Entity Data Model Wizard
   Next (Generate from database - this assumes you have Northwind running locally)
   New Connection... -> Point to your local Northwind Database Server -> Next
   Select the Tables check box (to select all tables) -> Finish

Right Click SilverlightApplication<number>.Web in Solution Explorer -> Add New Item...
 Visual C# -> Web -> WCF Data Service -> Add

In WcfDataService1.cs:
 Replace " /* TODO: put your data source class name here */ " with "NorthwindEntities" (no quotes)
 Add the following line to the InitializeService method:

  config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);

Debug -> Start Without Debugging

Right Click References under SilverlightApplication<number> in Solution Explorer -> Add Service Reference...
 Discover -> Services In Solution (note the port number of the discovered service, you will need it below)
 Namespace: LocalNorthwindServiceReference

Stop the browser that was started above

Follow the same basic steps as the first example above to modify MainPage except:
 Replace RemoteNorthwindServiceReference with LocalNorthwindServiceReference
 Replace "http://services.odata.org/Northwind/Northwind.svc/" with "http://localhost:<port number noted above>/WcfDataService1.svc/"

Debug -> Start Without Debugging

This application runs fine (without the warning dialog this time).

It can be ridiculously difficult to get a local OData service working with Silverlight 4 if you don't carefully dance around the project setup issues.
The warning dialog when you start debugging is nearly useless, and the app will return no data, with no apparent errors, if you set up the project incorrectly.

Some references I found useful while building this sample:

 MSDN: How to: Create the Northwind Data Service (WCF Data Services/Silverlight)

 Audrey PETIT's blog: Use OData data with WCF Data Services and Silverlight 4

 Darrel Miller's Bizcoder blog: World’s simplest OData service

Once you deploy your OData service to a real web hosting environment, you'll likely need to do some special setup to access your OData service from Silverlight 4 (clientaccesspolicy.xml / crossdomain.xml):

 Making a Service Available Across Domain Boundaries
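For reference, here is a minimal clientaccesspolicy.xml of the kind that article describes, served from the root of the OData service's domain.  This sketch allows calls from any domain; you will want to restrict the domain list for production:

```xml
<?xml version="1.0" encoding="utf-8"?>
<access-policy>
  <cross-domain-access>
    <policy>
      <!-- Allow Silverlight clients from any origin (tighten for production) -->
      <allow-from http-request-headers="*">
        <domain uri="*"/>
      </allow-from>
      <grant-to>
        <resource path="/" include-subpaths="true"/>
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>
```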

Tremendous thanks go out to Scott Davis of Ignition Point Solutions for pointing me to the flawed project setup as the reason I couldn't get this working initially.

Saturday, 07 August 2010 12:49:17 (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |  Comments [1]  | 
# Sunday, 01 August 2010

This post is an update to a post I made last September (~10 months ago):


The biggest roadblock I ran into when writing that blog post is that ActiveWriter doesn't work well with the Northwind database which had significant ripple effects throughout the sample.

That inspired me to spend quite a bit of time over the last 10 months to make the Castle ActiveRecord code generation story better.

So here is an updated sample using the free version of the Agility for ORMs Castle ActiveRecord code generator in place of ActiveWriter.


This is a quick guide to getting up and running with NHibernate and Linq.

We are going to assume our database already exists, that the database is Northwind, and that we are doing database-driven design (as opposed to domain-driven design).  Northwind setup is described below.

We are going to use Visual Studio 2008 with Service Pack 1 and SQL Server 2008 Express.  (Note: A web application variant of this should work with Visual Web Developer 2008 Express Edition resulting in a completely free development stack.  This sample should also work fine in Visual Studio 2010, but you'll need to change the project to target the .NET Full Profile instead of the .NET Client Profile.)

Step 1:

Download and install the Northwind database.

Jeff Atwood provides approximate instructions here (ask specific questions in the comments if you get stuck):


Follow the path for installing the binary files from the command line.  I have tested that the SQL 2005 instructions work on SQL 2008.

Step 2:

Next Download Castle ActiveRecord.  The download link is available from here:


At the time of writing, the current version is "2.1.2 (2010-01-31)".

Unzip it and remember where you put it; you'll need that info in step 3.

Step 3:

Start Visual Studio 2008

We will create a new console application project:

File -> New -> Project
 Visual C# -> Windows -> Console Application
  ConsoleApplication1 -> OK

We need to add the appropriate NHibernate & Castle Active Record references:

Solution Explorer
 Right Click ConsoleApplication1 -> References -> Add
  Browse Tab
   Go to your Castle ActiveRecord download location and add:
   Click OK

And add a reference to System.Configuration as well:

Solution Explorer
 Right Click ConsoleApplication1 -> References -> Add
  .NET Tab
   Select "System.Configuration"
  Click OK

Step 4:

Download the free (aka Convention Only) version of the Agility for ORMs Castle ActiveRecord code generator by registering and then downloading here:


The current version is

Step 5:

Run the AFO Castle ActiveRecord code generator, specifying the correct connection string for the Northwind database you setup in Step 1 above.

Note the output directory where the generated files went; you'll need that info in step 6.

Step 6:

Add the generated files to the console application project.

Solution Explorer
 Right Click ConsoleApplication1 -> Add -> New Folder -> DataLayer
 Right Click DataLayer -> Add -> Existing Item...
  Add all of the .cs files generated in Step 5.

Step 7:

We will now add an Application Configuration File to the project and put the Northwind Connection String into it:

Solution Explorer
 Right Click ConsoleApplication1 -> Add -> New Item...
  Visual C# Items -> General -> Application Configuration File

Modify the file to look like this and use your specific DB connection info:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <connectionStrings>
    <add name="Northwind" connectionString="Data Source=.\SQLExpress;Initial Catalog=Northwind;Trusted_Connection=True;"/>
  </connectionStrings>
</configuration>

Step 8:

Open the main "Program.cs" class and add the following new method:

        private static void InitializeNHibernateActiveRecord()
        {
            string connectionString = System.Configuration.ConfigurationManager.ConnectionStrings["Northwind"].ToString();
            InPlaceConfigurationSource configuration = InPlaceConfigurationSource.Build(DatabaseType.MsSqlServer2008, connectionString);

            ActiveRecordStarter.Initialize(System.Reflection.Assembly.GetExecutingAssembly(), configuration);
        }

Add the following code to the Main Method:

        static void Main(string[] args)
        {
            try
            {
                InitializeNHibernateActiveRecord();

                using (new SessionScope())
                {
                    var queryToExecute = from oneProduct in ActiveRecordLinq.AsQueryable<Product>()
                                         select oneProduct;

                    foreach (Product oneProduct in queryToExecute.Take(5).ToList())
                    {
                        Console.WriteLine("ProductID=[" + oneProduct.ProductID + "] ProductName=[" + oneProduct.ProductName + "] Supplier CompanyName=[" + oneProduct.Supplier.CompanyName + "]");
                    }
                }
            }
            catch (Exception oneException)
            {
                Console.WriteLine("oneException=[" + oneException + "]");
                throw; // you can remove this if you'd rather the program exit "more normally"
            }
        }
Add the following using statements at the top of Program.cs:

using Castle.ActiveRecord;
using Castle.ActiveRecord.Framework.Config;
using Castle.ActiveRecord.Linq;

using Model.Northwind;

Step 9:

Copy the NHibernate.ByteCode.Castle.dll file from your Castle Active Record download unzip directory to your project's bin\Debug\ folder.

Step 10:

Run the application:

Ctrl-F5 (Debug -> Start Without Debugging)

And you should see the following output:

ProductID=[1] ProductName=[Chai] Supplier CompanyName=[Exotic Liquids]
ProductID=[2] ProductName=[Chang] Supplier CompanyName=[Exotic Liquids]
ProductID=[3] ProductName=[Aniseed Syrup] Supplier CompanyName=[Exotic Liquids]
ProductID=[4] ProductName=[Chef Anton's Cajun Seasoning] Supplier CompanyName=[New Orleans Cajun Delights]
ProductID=[5] ProductName=[Chef Anton's Gumbo Mix] Supplier CompanyName=[New Orleans Cajun Delights]
Press any key to continue . . .

We have successfully executed a join query through Linq.
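The query shape above isn't specific to ActiveRecordLinq; the same Linq pattern works over any IQueryable or IEnumerable source, which is handy for debugging query logic without a database.  A self-contained sketch (the Product/Supplier classes here are simplified stand-ins for the generated Northwind model, not the real generated code):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Supplier { public string CompanyName; }
public class Product { public int ProductID; public string ProductName; public Supplier Supplier; }

public static class LinqQuerySketch
{
    public static List<string> DescribeFirstProducts(IEnumerable<Product> products, int count)
    {
        // Same shape as the ActiveRecordLinq example: build the query, take N, materialize
        var queryToExecute = from oneProduct in products
                             select oneProduct;

        var results = new List<string>();
        foreach (Product oneProduct in queryToExecute.Take(count).ToList())
        {
            results.Add("ProductID=[" + oneProduct.ProductID + "] ProductName=[" + oneProduct.ProductName
                + "] Supplier CompanyName=[" + oneProduct.Supplier.CompanyName + "]");
        }
        return results;
    }

    public static void Main()
    {
        var exotic = new Supplier { CompanyName = "Exotic Liquids" };
        var products = new List<Product>
        {
            new Product { ProductID = 1, ProductName = "Chai",  Supplier = exotic },
            new Product { ProductID = 2, ProductName = "Chang", Supplier = exotic },
        };
        foreach (string line in DescribeFirstProducts(products, 5))
            Console.WriteLine(line);
    }
}
```

With ActiveRecordLinq.AsQueryable<Product>() as the source, the same expression is translated to SQL by NHibernate instead of being evaluated in memory.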

Please note:

You'll notice that the Agility for ORMs Castle ActiveRecord code generator only generated 5 class files in addition to a readme.txt file (there are 13 tables in Northwind which would otherwise result in 11 class files).  Northwind is not a convention based database due to use of assignable keys, surrogate keys, and composite keys.  The free version of the AFO code generator only works with convention oriented database tables and only 5 of the tables in Northwind follow conventions.  The readme.txt file explains why the other tables were not generated.

The commercial version ($30) of the AFO code generator will properly generate code for all 13 tables in Northwind and is intended to support the entire feature set of Castle ActiveRecord including things like Composite Keys, which aren't currently supported by ActiveWriter.

Now that you have the basic NHibernate Linq infrastructure in place, there are plenty of Linq examples and sample code available elsewhere.


Sunday, 01 August 2010 12:24:48 (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |  Comments [0]  | 
# Saturday, 17 April 2010

Are you a .NET developer in the Twin Cities, Minnesota area that would be interested in having a mentor?  I'm looking for .NET developers to mentor.

I don't really have any specific criteria in mind and I don't expect that enough people read my blog/twitter that I will be overwhelmed with requests.

Ambition is probably the quality I'm looking for most.

If you are interested, contact me via e-mail and we can work out whether it's a good fit.

You can learn more about me here: http://www.capprime.com/About.htm

You can contact me through my contact page here: http://www.capprime.com/Contact.htm

I will try to come back and update this blog post if/when I no longer have time to mentor additional people, so consider this offer open for the foreseeable future.

Saturday, 17 April 2010 10:42:27 (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |  Comments [0]  | 
# Monday, 15 March 2010

Apparently SMO can't get the SqlDataType of a UserDefinedDataType.

SQL Server Management Objects (SMO) is a pretty powerful API into Microsoft SQL Server.  I've been pretty happy using it in various scenarios over the years.

Recently, I was surprised to find out that SMO can't get the SqlDataType underlying a UserDefinedDataType.  This is reproducible using the Microsoft Pubs sample database.

Attempt #1:

string databaseName = "pubs";
string connectionString = System.Configuration.ConfigurationManager.ConnectionStrings[databaseName].ToString();

SqlConnection oneSqlConnection = new SqlConnection(connectionString);
ServerConnection oneServerConnection = new ServerConnection(oneSqlConnection);
Server oneServer = new Server(oneServerConnection);
Database oneDatabase = new Database(oneServer, databaseName);

UserDefinedDataType tidUserDefinedDataType =
   new Microsoft.SqlServer.Management.Smo.UserDefinedDataType(oneDatabase, "tid", "dbo");
bool initializeDidSucceed = tidUserDefinedDataType.Initialize(true);

In this scenario, for reasons unknown to me, initializeDidSucceed is false!

Now, if we try to access tidUserDefinedDataType.SystemType which seems like the appropriate data item, we get this exception:

Microsoft.SqlServer.Management.Smo.PropertyNotSetException: To accomplish this action, set property SystemType.
   at Microsoft.SqlServer.Management.Smo.SqlSmoObject.OnPropertyMissing(String propname, Boolean useDefaultValue)
   at Microsoft.SqlServer.Management.Smo.PropertyCollection.RetrieveProperty(Int32 index, Boolean useDefaultOnMissingValue)
   at Microsoft.SqlServer.Management.Smo.PropertyCollection.GetValueWithNullReplacement(String propertyName, Boolean throwOnNullValue, Boolean useDefaultOnMissingValue)
   at Microsoft.SqlServer.Management.Smo.UserDefinedDataType.get_SystemType()

Attempt #2:

string databaseName = "pubs";
string connectionString = System.Configuration.ConfigurationManager.ConnectionStrings[databaseName].ToString();

SqlConnection oneSqlConnection = new SqlConnection(connectionString);
ServerConnection oneServerConnection = new ServerConnection(oneSqlConnection);

Server oneServer = new Server(oneServerConnection);
Database oneDatabase = new Database(oneServer, databaseName);

Table oneTable = new Table(oneDatabase, "titles", "dbo");

Now the following values are returned:

oneTable.Columns[0].Name                     = title_id
oneTable.Columns[0].DataType.Name            = tid
oneTable.Columns[0].DataType.SqlDataType     = UserDefinedDataType
oneTable.Columns[0].DataType.MaximumLength   = 6

So, we can successfully get the length of the underlying data type, but we can't get the SqlDataType.

Attempt #3:

USE pubs

DECLARE @CurrentSchemaName VARCHAR(128)
DECLARE @UserDefinedDataTypeName VARCHAR(128)

SET @CurrentSchemaName = 'dbo'
SET @UserDefinedDataTypeName = 'tid'

SELECT sys.types.name
FROM sys.types
WHERE user_type_id =
   (SELECT system_type_id
    FROM sys.types
      INNER JOIN sys.schemas
        ON sys.types.schema_id = sys.schemas.schema_id
    WHERE sys.schemas.name = @CurrentSchemaName
      AND sys.types.name = @UserDefinedDataTypeName)

Using straight SQL, we appear to be able to get the answer we are looking for: "varchar".

I wonder why SMO is missing this capability?
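Until an answer surfaces, one workaround is to bypass the object model and run that catalog query through SMO itself via Database.ExecuteWithResults.  This is only a sketch: it assumes oneDatabase was fetched from a live connection (e.g. oneServer.Databases[databaseName]) rather than constructed detached as in Attempt #1.

```csharp
// Fall back to sys.types via SMO's ExecuteWithResults, since
// UserDefinedDataType.SystemType can't be read directly.
System.Data.DataSet results = oneDatabase.ExecuteWithResults(
    @"SELECT st.name
      FROM sys.types st
      WHERE st.user_type_id =
         (SELECT t.system_type_id
          FROM sys.types t
            INNER JOIN sys.schemas s ON t.schema_id = s.schema_id
          WHERE s.name = 'dbo' AND t.name = 'tid')");

// For the pubs 'tid' type this should come back as "varchar".
string underlyingTypeName = (string)results.Tables[0].Rows[0]["name"];
```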

Monday, 15 March 2010 12:56:31 (GMT Standard Time, UTC+00:00)  #    Disclaimer  |  Comments [0]  | 
# Saturday, 20 February 2010

T4 Templates are really nice given that the tools are included free in Visual Studio (the Express Editions of Visual Studio appear to have some under-documented limitations when it comes to T4 support).

The primary way that T4 templates are processed is from within Visual Studio.  That works pretty well.

What if you want to process a T4 template outside of Visual Studio (as part of an automated build process, for example)?

The easiest option for running a T4 template outside of Visual Studio is through the TextTransform.exe (custom T4 host) command line tool.
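A minimal sketch of wiring it into a build step from .NET code (the install path varies by Visual Studio version, and the file names here are hypothetical):

```csharp
// Shell out to TextTransform.exe and surface its exit code and
// standard error, which is where transformation errors are written.
var startInfo = new System.Diagnostics.ProcessStartInfo
{
    FileName = @"C:\Program Files\Common Files\Microsoft Shared\TextTemplating\1.2\TextTransform.exe",
    Arguments = "-out MyT4Template.cs MyT4Template.tt",
    UseShellExecute = false,
    RedirectStandardError = true
};

using (var process = System.Diagnostics.Process.Start(startInfo))
{
    string errors = process.StandardError.ReadToEnd();
    process.WaitForExit();
    if (process.ExitCode != 0)
        Console.WriteLine("Transformation failed: " + errors);
}
```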

The main difficulty I've encountered using the TextTransform.exe command line tool is error handling.

Here's a sample error (written to StandardError):
MyT4Template.tt(0,0) : error : Running transformation: System.InvalidCastException: Unable to cast object of type 'Microsoft.VisualStudio.TextTemplating.CommandLine.CommandLineHost' to type 'System.IServiceProvider'.
   at Microsoft.VisualStudio.TextTemplatingc7fe0d0277b54f7c95c25936373918ff.GeneratedTextTransformation.TransformText()

Notice the (0,0).  That is where the line and column numbers would normally go for where the error occurred.  Also notice the stack trace contains no useful information (the error message, at least, in this case *is* useful).

The other downsides of using TextTransform.exe are the usual ones of shelling out to an EXE file from code as opposed to working against an API.

The main positive I've encountered using the TextTransform.exe command line tool is that if the template processes okay in Visual Studio, it will likely process okay using TextTransform.exe.  The above error was generated by a template that tried to reference EnvDTE.DTE, which works fine from within Visual Studio, but doesn’t necessarily make sense to do outside of Visual Studio.

As an alternative, you might attempt to use the Visual Studio T4 engine from .NET code like so:
ITextTemplatingEngineHost host = new Microsoft.VisualStudio.TextTemplating.VSHost.TextTemplatingService();
ITextTemplatingEngine engine = new Microsoft.VisualStudio.TextTemplating.Engine();
string outputCode = engine.ProcessTemplate(inputContent, host);

But that won't work.  You will get these compile time errors:
The type 'Microsoft.VisualStudio.TextTemplating.VSHost.TextTemplatingService' has no constructors defined
'Microsoft.VisualStudio.TextTemplating.VSHost.TextTemplatingService' is inaccessible due to its protection level

The hardest option for running a T4 template outside of Visual Studio is by implementing the ITextTemplatingEngineHost interface.  There is an example of how to do that here:

Walkthrough: Creating a Custom Text Template Host

The problem with that example is that it fails for multiple distinct reasons for templates that run just fine in Visual Studio and from TextTransform.exe.  It doesn’t appear to be even close to a full featured ITextTemplatingEngineHost implementation (it’s quite a "teaser" of a sample, enough to show it has potential, but not enough to show you how far from reality you are).

Mono provides an alternative implementation of the ITextTemplatingEngineHost interface here:


And while that fails less often on templates that run just fine in Visual Studio, it still fails in multiple ways.  I appreciate what the Mono folks have done, but for this use case (trying to run T4 templates outside of Visual Studio), there isn’t much value add.

The advantage of using a custom ITextTemplatingEngineHost host implementation is you have significantly more power and control than you have with TextTransform.exe.  TextTransform.exe has a limited input/output/error mechanism and its internals are not very extensible/customizable.

The disadvantage of using a custom ITextTemplatingEngineHost host implementation (as opposed to TextTransform.exe) is that you possibly have to write a decent amount of code against an undocumented system before your T4 template that processes fine in Visual Studio will process equivalently from .NET code (this will depend greatly on the contents of your T4 template).

If anyone is aware of other (free or commercial) custom implementations of the ITextTemplatingEngineHost interface that are usable from .NET code, please let me know!

Update March 7, 2010: Apparently this will be much easier in Visual Studio 2010:

Generating Text Files at Run Time by Using Preprocessed Text Templates
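The idea there is that the .tt file is preprocessed at design time into an ordinary class in your project, so no Visual Studio host is needed at run time at all.  A sketch, where MyReport is a hypothetical preprocessed template:

```csharp
// A preprocessed template compiles to a class exposing TransformText().
MyReport template = new MyReport();
string output = template.TransformText();
System.IO.File.WriteAllText(@"MyReport.txt", output);
```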

Saturday, 20 February 2010 15:09:50 (GMT Standard Time, UTC+00:00)  #    Disclaimer  |  Comments [0]  | 
# Monday, 25 January 2010

The main documentation for the NHibernate Validator is here:


And it alludes to the fact that you can configure the validator to be called automatically before Insert/Update.

However, it fails to clearly communicate how to do this.

The important missing piece is that "ValidatorInitializer.Initialize()" needs to be called before the session factory gets created.

Here is some working sample code:

string connectionString =

NHibernate.Validator.Engine.ValidatorEngine oneValidatorEngine =
new NHibernate.Validator.Engine.ValidatorEngine();

NHibernate.Validator.Cfg.INHVConfiguration oneINHVConfiguration =
new NHibernate.Validator.Cfg.Loquacious.FluentConfiguration();
//oneINHVConfiguration.Properties[NHibernate.Validator.Cfg.Environment.AutoregisterListeners] = "true";
oneINHVConfiguration.Properties[NHibernate.Validator.Cfg.Environment.ValidatorMode] = "UseExternal";
oneINHVConfiguration.Mappings.Add(
   new NHibernate.Validator.Cfg.MappingConfiguration("YourNamespaceGoesHere", null));

oneValidatorEngine.Configure(oneINHVConfiguration);

NHibernate.Cfg.Configuration nHibernateConfiguration = new NHibernate.Cfg.Configuration();

NHibernate.Validator.Cfg.ValidatorInitializer.Initialize(nHibernateConfiguration, oneValidatorEngine);

NHibernate.ISessionFactory oneISessionFactory = nHibernateConfiguration.BuildSessionFactory();


This code throws an NHibernate.PropertyValueException instead of an NHibernate.Exceptions.GenericADOException / System.Data.SqlClient.SqlException, which is what we want in this case.  Another exception I've seen the NHibernate Validator throw is NHibernate.Validator.Exceptions.InvalidStateException.

It's much easier and more pleasant to try to recover from a validation error than a database error.
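For example, the write can be wrapped like this (the entity and session variable names are hypothetical):

```csharp
try
{
    oneISession.Save(someEntity);
    oneISession.Flush();
}
catch (NHibernate.PropertyValueException ex)
{
    // The validator stops the bad value before the INSERT/UPDATE ever
    // reaches SQL Server, so there is property-level detail to report.
    Console.WriteLine("Invalid value for {0}.{1}", ex.EntityName, ex.PropertyName);
}
```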

Monday, 25 January 2010 14:42:45 (GMT Standard Time, UTC+00:00)  #    Disclaimer  |  Comments [0]  | 
# Sunday, 27 September 2009

I like to store my application's configuration in app.config or web.config (depending on the project type) which is pretty standard for .NET applications.  Therefore, I'm not a big fan of ActiveRecord's XmlConfigurationSource.

That said, the InPlaceConfigurationSource is somewhat difficult to puzzle out.  After reading the source code, I was able to bend it to my will.

In particular, I like this syntax well enough:

InPlaceConfigurationSource configuration =
    InPlaceConfigurationSource.Build(DatabaseType.MSSQLServer2005, connectionString);

It lets me set up my connection string and database type and default most everything else.

The "problem" is that I can't then do something clean and simple like this:

configuration.Add("show_sql", true);

Instead I have to do this:

Castle.Core.Configuration.MutableConfiguration oneMutableConfiguration =
  new Castle.Core.Configuration.MutableConfiguration("show_sql", true.ToString());
configuration.Add(typeof(ActiveRecordBase), oneMutableConfiguration);

Which works, but it's not the friendliest API to program against.

Be careful, the "Add" method on InPlaceConfigurationSource is not really "Add".  It is really "Replace".

The other main alternative for using InPlaceConfigurationSource is something like this, which is useful when I can't just use the defaults:

IDictionary<string, string> properties =
  new System.Collections.Generic.Dictionary<string, string>();
properties.Add("connection.driver_class", "NHibernate.Driver.SqlClientDriver");
properties.Add("proxyfactory.factory_class",
  "NHibernate.ByteCode.Castle.ProxyFactoryFactory, NHibernate.ByteCode.Castle");

properties.Add("dialect", "NHibernate.Dialect.MsSql2005Dialect");
properties.Add("connection.connection_string", connectionString);

properties.Add("show_sql", true.ToString());

InPlaceConfigurationSource configuration = new InPlaceConfigurationSource();
configuration.Add(typeof(ActiveRecordBase), (IDictionary<string, string>)properties);

This was written and tested against "ActiveRecord 2.0 - August 1st, 2009".
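Either way, the resulting configuration source then gets handed to ActiveRecordStarter to wire everything up.  A sketch, where SomeEntity is a stand-in for your own ActiveRecordBase subclass:

```csharp
// Initialize ActiveRecord with the configuration source built above,
// registering each ActiveRecord type you want mapped.
ActiveRecordStarter.Initialize(configuration, typeof(SomeEntity));
```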

Sunday, 27 September 2009 13:05:50 (GMT Daylight Time, UTC+01:00)  #    Disclaimer  |  Comments [0]  |