# Tuesday, August 04, 2015
One of my 4 machines immediately offered to upgrade to Windows 10. The rest are just sitting there, probably because Microsoft is rolling the upgrade out over weeks or months.

It's simple to force an update.

Use the Windows 10 media creation tool: http://www.microsoft.com/en-us/software-download/windows10
Download it (it's about 18MB) and run it. Select "Upgrade this PC now". It downloads the multiple gigabytes of Windows 10, then installs.

Privacy settings

Near the end of installation, there will be a screen "Get Going Fast" with the default button "Use Express Settings". Instead click the small link "Customise settings" (it's "Customize" in US versions, "Customise" in UK).

On "Customise Settings" first page, turn off "use page prediction" (2nd option), and at least the first 2 "Connectivity" options below that (automatically connect to open hotspots, networks shared by contacts).
"Smartscreen" and "Send error and diagnostic information" are acceptable.

On "Customise Settings" second page ("Personalisation" and "Location"), turn off all of them.

If you use a local account, it will show "Make it yours", which tries to convert you to a Microsoft (hotmail/outlook.com) account. Click the small link at the bottom, "Skip this step".

Edge is the default browser

Settings > System > Default apps
You can change the Browser, Music, Photo and other defaults here.

Chrome and Firefox will both complain they are no longer the default browser; Windows 10 will open the default apps page for you.


Settings is the new Control Panel. If you accepted the defaults, you can fix them here.

If you accidentally switched to a Microsoft account, Accounts > Your account > "Sign in with a local account instead".

In Privacy, fix the options under "General", "Location", and "Speech, inking and typing".

In Network & Internet > Wi-Fi > Manage Wi-Fi Settings > "Wi-Fi Sense" turn off both options (Connect to open hotspots, connect to networks shared by contacts).

In Update & Security > Windows Update > Advanced Options > Choose how updates are delivered > make sure you don't use "PCs on my local network, and PCs on the internet" (the "PCs on my local network" should be okay on a home network).

posted on Tuesday, August 04, 2015 9:17:38 AM (Romance Daylight Time, UTC+02:00)
# Sunday, March 29, 2015

I finally moved my project "database schema reader" from Codeplex to Github.

Github import was simple and quick. I didn't import the historical issues (the project went on Codeplex in 2010, and it was a couple of years old then). I just added an "upstream" remote so I can still push to Codeplex and keep it updated. 

Codeplex comments and discussions are disabled at the moment; "we're anticipating that this functionality will be re-enabled early next week"! The codeplex blog hasn't been updated in 2 years, and the "codeplex" discussions are full of spam (fortunately I've never had spam in the project discussions). There are many third-party services that integrate with Github, but nothing links into Codeplex.

The main advantage of the move to Github was hooking into Appveyor for CI builds. It took a few builds to experiment with things. I'm using database integration tests with SqlServer, SQLite, PostgreSql and MySql. Now that these are working, the tests are more portable - I can easily recreate the databases. Packaging the builds (dlls into a zip, plus NuGet packages) was also easy, and Appveyor can create releases within Github. The appveyor.yml file isn't the easiest way to configure builds, and I soon figured out it's better to use the website UI.

Overall, much better than TFS (ugh!) and TeamCity builds. I played a little with Travis CI (building in Mono), but for now it's not worth pursuing.

I tried Coverity for code analysis, which flagged a few (minor) things, but didn't seem to add a lot more than FxCop.

I looked at Coveralls, a code coverage service. Code coverage is 77%, incidentally, which I think is quite good for something that includes code to support Oracle, DB2, Ingres, 3 different Sybase databases, VistaDb, Firebird and Intersystems Cache (for now, these must continue to be tested on my machine only). I don't believe code coverage is very useful, so for now I won't include it in the Appveyor build.

I'm very impressed, and very happy with Appveyor.

posted on Sunday, March 29, 2015 5:41:03 PM (Romance Daylight Time, UTC+02:00)
# Wednesday, July 23, 2014


The mscorlib System.Version class accepts 2, 3 or 4 integers to represent a version.
var v = new Version(2014010110, 7383783, 38989899, 893839893);
Console.WriteLine(v.ToString()); //shows 2014010110.7383783.38989899.893839893

The values are Major.Minor[.Build[.Revision]]

As we'll see shortly, actual assembly versions are much more limited!


Semantic versioning can be mapped into the .net scheme.

In SemVer, the scheme is Major.Minor.Patch.

  • Major is breaking changes
  • Minor is backwards compatible changes including additions
  • Patch is backwards compatible bug fixes.

So .net Build is equivalent to semver Patch, and revision, which is optional anyway, is disregarded. (The original .net convention was that build used the same source with different symbols).
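As a minimal sketch of that mapping (the `SemVerMap.ToAssemblyVersion` helper name is my own, not part of any framework):

```csharp
using System;

public static class SemVerMap
{
    // Hypothetical helper: map SemVer Major.Minor.Patch onto the .net
    // Major.Minor.Build scheme, leaving Revision at 0.
    public static Version ToAssemblyVersion(int major, int minor, int patch)
    {
        return new Version(major, minor, patch, 0);
    }
}
```

So SemVer 2.1.3 becomes the .net version 2.1.3.0.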

Version attributes

The version attributes are normally in Properties/AssemblyInfo.cs (but could be anywhere).
You can also access AssemblyVersion and AssemblyFileVersion via the project properties - application - [Assembly Information...] button.

There are 3:

//CLR uses this as the version 
[assembly: AssemblyVersion("")]

//Not used by CLR, often the specific build
[assembly: AssemblyFileVersion("")]

//If not present, == AssemblyVersion. 
[assembly: AssemblyInformationalVersion("v1.0 RC")]

AssemblyInformationalVersion used to error if it wasn't a System.Version (all ints). Since Visual Studio 2010, you can put in free-form strings, which is useful for tags like "RC".

To access these via code:

var executingAssembly = Assembly.GetExecutingAssembly();
var ver = executingAssembly.GetName().Version; //AssemblyVersion
var fv = System.Diagnostics.FileVersionInfo.GetVersionInfo(executingAssembly.Location);
Console.WriteLine(fv.FileVersion); //AssemblyFileVersion
Console.WriteLine(fv.ProductVersion); //AssemblyInformationalVersion

There are also fv.ProductMajorPart and fv.ProductMinorPart, but these aren't populated if the AssemblyInformationalVersion can't be parsed into a System.Version.

The values - major, minor, build, revision - are ints, up to 2,147,483,647. But there's a big gotcha. For operating system reasons, the compiler limits each part to 65,534 (they are stored as unsigned 16-bit values, with 65,535 reserved).

For AssemblyVersion, you get a CSC Error: "Error emitting 'System.Reflection.AssemblyVersionAttribute' attribute -- 'The version specified '65536.65535.65535.65535' is invalid'"

For AssemblyFileVersion, you get a CSC Warning    "Assembly generation -- The version '65536.65535.65535.65535' specified for the 'file version' is not in the normal 'major.minor.build.revision' format". It will build, at least.

Versioning Strategy

  • For all version types major and minor parts should be manually set.
  • To simplify CLR versioning, we don't need to increment the AssemblyVersion except once, manually, at final release. For AssemblyVersion, just set major and minor (and perhaps build for semver). Normally build and revision will always be 0.0.  We don't want any version numbers changing during developer builds, or even continuous integration builds unless they are automatically deployed to test.
  • When a dll is published/deployed, we should increment the AssemblyFileVersion.
  • We should be able to trace back to the build.

There are several candidates for traceable build and revision numbers, but none are "semver" (both build and revision are significant).

  • Increment by date, as in wildcards (below): build is days since a specific date, revision is seconds since midnight. But there is no obvious connection between the dll and the build on the build-server.
  • Date, except we can't fit "year-month-day-hour-minute-second" into 16 bits. You could split it: build is mmdd, revision is hhmm.
  • Build name. TFS uses a buildDef_yyyymmdd.n format for the build name.
  • Changeset number, if it is numeric and less than 65535.

Both build name and changeset number might be better set in AssemblyInformationalVersion.
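The "mmdd/hhmm" option above could be computed like this sketch (the helper names are mine):

```csharp
using System;

public static class BuildStamp
{
    // Encode the date as mmdd and the time as hhmm; both stay well
    // under the compiler's 65,534 limit (at most 1231 and 2359).
    public static int ToBuild(DateTime dt)
    {
        return dt.Month * 100 + dt.Day;
    }

    public static int ToRevision(DateTime dt)
    {
        return dt.Hour * 100 + dt.Minute;
    }
}
```

So 23 July at 20:22 becomes build 723, revision 2022 - traceable back to the day and minute, but not to the year.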


For AssemblyVersion only, you can use wildcards for build and revision.

If you use it for file version, you get a warning:
CSC : warning CS1607: Assembly generation -- The version '1.2.0.*' specified for the 'file version' is not in the normal 'major.minor.build.revision' format

  • AssemblyVersion build = number of days since 01/01/2000.
  • AssemblyVersion revision = number of seconds since midnight.

If you build twice without changing, the revision goes up. If you build the next day without changes, the build goes up.
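The wildcard arithmetic can be reproduced as a sketch (one caveat I'm adding: the compiler actually stores half the seconds-since-midnight, so the value fits the 16-bit field):

```csharp
using System;

public static class WildcardVersion
{
    // Approximates what the compiler generates for "1.2.*":
    // build = days since 2000-01-01, revision = seconds since
    // midnight (halved by the compiler to fit 16 bits).
    public static int Build(DateTime now)
    {
        return (now.Date - new DateTime(2000, 1, 1)).Days;
    }

    public static int Revision(DateTime now)
    {
        return (int)(now.TimeOfDay.TotalSeconds / 2);
    }
}
```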

Wildcards are pretty useless.

Build Tasks

Build tasks run after source control get-latest, before compilation. They find the AssemblyInfo.cs files, flip the readonly flag, and find and replace the AssemblyFileVersion, then compile. The changed AssemblyInfo file should not be checked in. The process is not run in developer builds, only in "publish" builds.
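A minimal sketch of that find-and-replace step (a regex over the file contents; the file IO and readonly-flag flipping are omitted, and the class name is my own):

```csharp
using System.Text.RegularExpressions;

public static class AssemblyInfoVersioner
{
    // Replace whatever version string is inside AssemblyFileVersion("...")
    // with the build's new version number.
    public static string SetFileVersion(string assemblyInfoSource, string newVersion)
    {
        return Regex.Replace(
            assemblyInfoSource,
            @"AssemblyFileVersion\(""[^""]*""\)",
            "AssemblyFileVersion(\"" + newVersion + "\")");
    }
}
```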

MSBuild Extension Pack is a set of msbuild tasks, which is also available as a Nuget package (MSBuild.Extension.Pack). One task, MSBuild.ExtensionPack.VisualStudio.TfsVersion, edits the AssemblyFileVersion given a date or tfs-format build name.

Another project, Community TFS Build Extensions, made by some of the same people, hooks up into TFS 2012/2013 xaml workflows and includes a TfsVersion build activity.

posted on Wednesday, July 23, 2014 8:22:40 PM (Romance Daylight Time, UTC+02:00)
# Friday, March 07, 2014

Forms Authentication in ASP.net is simple, but FormsIdentity and GenericPrincipal/RolePrincipal are a little too simple. All we get are IIdentity.Name and IPrincipal.IsInRole(x).

Most real applications need a bit more, like the user's full name or email address, or domain-specific data.

Custom Principal

The usual way to do this was to create a custom principal, store extra data in the UserData field of the forms authentication cookie, and rebuild the principal in the asp.net pipeline event "PostAuthenticateRequest".

Here's our custom principal:

    public class UserPrincipal : GenericPrincipal
    {
        public UserPrincipal(IIdentity identity, string[] roles)
            : base(identity, roles)
        {
        }

        public string Email { get; set; }
    }

Here's the login action. Instead of the normal FormsAuthentication.SetAuthCookie, we do it manually (see below):

        public ActionResult Login(LoginModel model, string returnUrl)
        {
            if (ModelState.IsValid) //Required, string length etc
            {
                var userStore = new UserRepository();
                var user = userStore.FindUser(model.UserName, model.Password);
                if (user != null)
                {
                    //FormsAuthentication.SetAuthCookie(user.Name, false);
                    SetAuthCookie(user); //the manual version, below
                    //redirect to returnUrl
                    if (!string.IsNullOrEmpty(returnUrl) &&
                        Url.IsLocalUrl(returnUrl) &&
                        !returnUrl.Equals("/Error/NotFound", StringComparison.OrdinalIgnoreCase))
                    {
                        return Redirect(returnUrl);
                    }
                    return Redirect("~/");
                }
                ModelState.AddModelError("UserName", "User or password not found");
            }
            return View(model);
        }

And here's where we set the authentication cookie, here putting our user object as Json into the userData field of the cookie.

        private void SetAuthCookie(User user)
        {
            var userData = JsonConvert.SerializeObject(user);
            var authTicket = new FormsAuthenticationTicket(
                  1, //version
                  user.Name, //user name
                  DateTime.Now, //issue date
                  DateTime.Now.AddMinutes(30), //expiration
                  false,  //isPersistent
                  userData, //our user serialized as json
                  FormsAuthentication.FormsCookiePath); //cookie path
            var encryptedTicket = FormsAuthentication.Encrypt(authTicket);
            var cookie = new HttpCookie(FormsAuthentication.FormsCookieName, encryptedTicket);
            Response.Cookies.Add(cookie);
        }

Finally, we hook up the PostAuthenticateRequest event. Normal forms authentication will have recognised the authentication cookie and created a GenericPrincipal and FormsIdentity. We unpack the userData field, and create our custom principal.

        protected void Application_PostAuthenticateRequest(Object sender, EventArgs e)
        {
            var context = HttpContext.Current;
            if (context.User == null || !context.User.Identity.IsAuthenticated)
            {
                return;
            }
            var formsIdentity = context.User.Identity as FormsIdentity;
            if (formsIdentity == null)
            {
                return;
            }
            var ticket = formsIdentity.Ticket;
            var userData = ticket.UserData; // Get the stored user-data, in this case, our user as json
            var user = JsonConvert.DeserializeObject<User>(userData);
            var customPrincipal = new UserPrincipal(formsIdentity, user.RolesList.ToArray());
            customPrincipal.Email = user.Email;
            Thread.CurrentPrincipal = context.User = customPrincipal;
        }

The userdata is encrypted and safe from tampering, but it can make the cookie rather large.

.Net 4.5 making claims

Now in ASP.net 4.5, we have Windows Identity Foundation (WIF, pronounced "dub-i-f") and claims principals and identities. Usually this is discussed with "federation" and single-sign-on identity providers, but actually claims can be useful in "traditional" stand-alone websites like we've just discussed.

ClaimsPrincipals and Identities have a list of Claims. This can be just a property bag with names and values, but there are many standard claim names, defined by OASIS, in the ClaimTypes class. In addition to ClaimTypes.Name and ClaimTypes.Role, there are Email, GivenName, Surname, DateOfBirth, MobilePhone and so on. These standard defined types mean libraries can discover these claims without defining common interfaces or contracts. But it is also extensible with application-specific claims. The old fixed custom principal is starting to look old-fashioned.

The WIF session authentication module can take over from forms authentication, storing the claims in cookies. You don't need to use the federation aspects. The module handles cookies a little better than forms authentication; if the cookie gets too large, it's chunked over several cookies. There is also IsReferenceMode = true, which keeps the claims data in server-side memory and only sends a simple key in the cookie (it's obviously not webfarm safe).

FormsAuthentication with Claims

First, the configuration. You'll need to define the configuration sections. We are still using forms authentication, so keep the authentication mode=Forms. Add the WIF session authentication module, which will handle the cookie.

    <configSections>
      <section name="system.identityModel"
               type="System.IdentityModel.Configuration.SystemIdentityModelSection, System.IdentityModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" />
      <section name="system.identityModel.services"
               type="System.IdentityModel.Services.Configuration.SystemIdentityModelServicesSection, System.IdentityModel.Services, Version=4.0.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" />
    </configSections>
    <!-- in system.web -->
    <compilation debug="true" targetFramework="4.5" />
    <httpRuntime targetFramework="4.5" />
    <authentication mode="Forms">
      <forms loginUrl="~/Account/Login" />
    </authentication>
    <!-- in system.webServer -->
    <modules>
      <add name="SessionAuthenticationModule"
           type="System.IdentityModel.Services.SessionAuthenticationModule, System.IdentityModel.Services, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"
           preCondition="managedHandler" />
    </modules>

If you aren't using SSL, you need to add the following (note the configSection defined above):

    <system.identityModel.services>
      <federationConfiguration>
        <cookieHandler requireSsl="false" />
      </federationConfiguration>
    </system.identityModel.services>

In the login page, set the user properties into claims.

        private void SetClaimsCookie(User user)
        {
            var claims = new List<Claim>();
            claims.Add(new Claim(ClaimTypes.Name, user.Name));
            claims.Add(new Claim(ClaimTypes.Email, user.Email));
            foreach (var role in user.RolesList)
            {
                claims.Add(new Claim(ClaimTypes.Role, role));
            }
            //needs an authentication issuer otherwise not authenticated
            var claimsIdentity = new ClaimsIdentity(claims, "Forms");
            var claimsPrincipal = new ClaimsPrincipal(claimsIdentity);
            var token = new SessionSecurityToken(claimsPrincipal);
            var sessionAuthenticationModule = FederatedAuthentication.SessionAuthenticationModule;
            sessionAuthenticationModule.WriteSessionTokenToCookie(token);
        }

You don't need the PostAuthenticateRequest event- the WIF session module is doing that bit.

And that's it! [Authorize(Roles = "Admin")] attributes work as normal. Retrieving the claims is simple.

var cp = (ClaimsPrincipal)User;
var email = cp.FindFirst(ClaimTypes.Email);

Logging out looks like this:

        public ActionResult SignOut()
        {
            var sessionAuthenticationModule = FederatedAuthentication.SessionAuthenticationModule;
            sessionAuthenticationModule.SignOut();
            return Redirect("~/");
        }
posted on Friday, March 07, 2014 8:59:14 PM (Romance Standard Time, UTC+01:00)
# Thursday, February 13, 2014


Using a database as a queue is a common requirement. An example is sending emails within a website- it can be slow, error-prone, and you don't want to delay returning a page to the user. So the server processing just queues a request in the database, and a worker process picks it up and tries to execute it.

The real problem is there may be more than one worker process, perhaps running on different servers. By using the table as a queue, they can avoid deadlocks or processing records multiple times.

(There are comprehensive breakdowns of table-based queuing elsewhere, covering heap queues, FIFO and LIFO.)


Let's have a [Created] column, and an [IsProcessed] column. Alternatively we could just delete the rows when they are processed.

CREATE TABLE [dbo].[EmailRequests](
	[Id] [int] IDENTITY(1,1) NOT NULL,
	[EmailAddress] [nvarchar](250) NOT NULL,
	[Subject] [nvarchar](50) NOT NULL,
	[Body] [nvarchar](500) NOT NULL,
	[IsProcessed] [bit] NOT NULL,
	[Created] [datetime] NOT NULL
)

The INSERT is just a normal INSERT.

INSERT INTO [EmailRequests]
           ([EmailAddress],[Subject],[Body],[IsProcessed],[Created])
     VALUES
           ('test@email.com','Hello','Spam spam spam',0,CURRENT_TIMESTAMP)

This is a FIFO queue, but the order isn't strict (see this explanation):

  • The lock hints mean lock the row (as normal), but skip any existing locks (so avoiding deadlocks)
  • The OUTPUT clause with the CTE makes it all a single atomic operation
  • The inserted identifier includes UPDATEs and INSERTs. For DELETEs, there is a deleted identifier.
with cte as (
 select top(1) [Id], [IsProcessed], [EmailAddress], [Subject], [Body]
 from [EmailRequests] with (ROWLOCK, READPAST)
 where [IsProcessed] = 0
 order by [Created]
)
update cte
	set [IsProcessed] = 1
	output inserted.[Id], inserted.[EmailAddress], inserted.[Subject], inserted.[Body];


To make this a bit more realistic, you could add a [IsEmailSent] column, updated when the emailing succeeds. Only one worker de-queued the record and has the [Id] so this is straightforward. Then you need a process for dealing with records that are [IsProcessed] but not [IsEmailSent] (dequeued, but the email failed). You might retry (in which case, add a [RetryCount] counter up to a maximum), or have a manual alert (the email address is bogus, etc etc).
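From the worker side, a polling pass around the dequeue statement might look like this sketch (the connection string and the actual email sending are assumptions, not part of the post):

```csharp
using System;
using System.Data.SqlClient;

public class EmailQueueWorker
{
    // The dequeue statement from above: atomically claims one row,
    // skipping rows locked by other workers.
    public const string DequeueSql = @"
with cte as (
 select top(1) [Id], [IsProcessed], [EmailAddress], [Subject], [Body]
 from [EmailRequests] with (ROWLOCK, READPAST)
 where [IsProcessed] = 0
 order by [Created]
)
update cte
 set [IsProcessed] = 1
 output inserted.[Id], inserted.[EmailAddress], inserted.[Subject], inserted.[Body];";

    public void Poll(string connectionString)
    {
        using (var con = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(DequeueSql, con))
        {
            con.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    var id = reader.GetInt32(0);
                    var email = reader.GetString(1);
                    // send the email here; on failure, record it for retry
                    Console.WriteLine("Dequeued #" + id + " for " + email);
                }
            }
        }
    }
}
```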


Remember resetting [IsProcessed] to 0 or infinitely retrying may poison the queue!

posted on Thursday, February 13, 2014 7:49:22 PM (Romance Standard Time, UTC+01:00)
# Wednesday, February 12, 2014

Sometimes you want to save a record, but it may be an existing record (UPDATE) or a new one (INSERT). The pattern is sometimes called an “upsert” (update/insert).

You could try to do this the standard way you would via an ORM (SELECT to see if it exists, if it does, UPDATE, else INSERT). But if other users are updating at the same time, you will see concurrency errors or deadlocks.

First, let's look at simpler SQL that is vulnerable to concurrency errors, then two ways of doing it safely.

Simple (not concurrent-safe!!)

UPDATE then check @@ROWCOUNT and INSERT if necessary. Only use this when the same record will not be created by two sources.

DECLARE @CategoryName NVARCHAR(15) = 'Dairy';
DECLARE @Description NVARCHAR(MAX) = 'Milk, cheese and yoghurts';
DECLARE @Id int = null;

UPDATE [Categories]
    SET [Description] = @Description,
        @Id = [CategoryID]
    WHERE [CategoryName] = @CategoryName;

IF @@ROWCOUNT = 0
BEGIN
    INSERT INTO [Categories]
        ([CategoryName], [Description])
        VALUES (@CategoryName, @Description);
    --if id is not set in UPDATE, then grab scope identity
    SET @Id = CAST(SCOPE_IDENTITY() AS int);
END
--select it out
SELECT @Id;

This example grabs the affected Id too (whether identity insert or update).


A more conventional IF NOT EXISTS... INSERT - ELSE - UPDATE. The lock hints protect against concurrent access. The UPDLOCK and SERIALIZABLE hints are as suggested in this Sam Saffron blog post from 2007.

BEGIN TRAN
IF NOT EXISTS (SELECT * FROM [Categories] WITH (UPDLOCK, SERIALIZABLE)
    WHERE [CategoryName] = @CategoryName )
BEGIN
    INSERT INTO [Categories] ([CategoryName], [Description])
        VALUES (@CategoryName, @Description);
    SET @Id = CAST(SCOPE_IDENTITY() AS int);
END
ELSE
    UPDATE [Categories]
        SET [Description] = @Description,
            @Id = [CategoryID]
        WHERE [CategoryName] = @CategoryName;
COMMIT TRAN



Much the same as before, using the MERGE command. MERGE by itself is not concurrent-safe; you must still use lock hints.

MERGE [Categories] WITH (HOLDLOCK) AS target
    --if/where part
    USING
        (SELECT @CategoryName, @Description ) AS source
            ([CategoryName], [Description])
        ON (target.CategoryName = source.CategoryName)
    --found, so update
    WHEN MATCHED THEN
        UPDATE SET [Description] = source.[Description]
    --not found, so insert
    WHEN NOT MATCHED THEN
        INSERT ([CategoryName], [Description])
        VALUES (source.[CategoryName], source.[Description])
    --MERGE can't assign variables, so grab the Id with OUTPUT
    OUTPUT inserted.[CategoryID];


Other Databases


Oracle has SELECT FOR UPDATE cursors

posted on Wednesday, February 12, 2014 8:32:28 PM (Romance Standard Time, UTC+01:00)
# Thursday, August 08, 2013

I hit this error when a json client was calling an ASP.net MVC controller to get Json.

Exception Details: System.InvalidOperationException: The JSON request was too large to be deserialized.

The posted JSON is large, but well under the default maximum request size of 4MB.

There is a .Net security fix (http://support.microsoft.com/kb/2661403):

Microsoft security update MS11-100 limits the maximum number of form keys, files, and JSON members to 1000 in an HTTP request.

You can bypass this (at the risk of Denial of service) with an appSetting in web.config:

  <appSettings>
    <add key="aspnet:MaxJsonDeserializerMembers" value="150000" />
  </appSettings>

posted on Thursday, August 08, 2013 1:24:22 PM (Romance Daylight Time, UTC+02:00)
# Thursday, June 13, 2013
The jQuery 2 branch does not support IE 6,7 or 8. Unless the site is exclusively targeted at mobile, or you have a very small and up-to-date audience, everyone should still use the jQuery 1.9+ branch.

Nuget insists that you want to update to jQuery 2.x


The package should not have been updated from 1.x to 2.x. There should have been a separate package for jQuery 2, so .net websites continue to update on the 1.x branch.

There is a workaround.

You must manually change the packages.config in the project. Add a range of allowed versions:
<package id="jQuery" version="1.10.1" targetFramework="net45" allowedVersions="[1.7.1,2)" />

Square bracket "[" means inclusive ("greater than or equal to"). So versions from 1.7.1 upwards here...

Closing round bracket ")" means exclusive ("less than"). So versions up to but not including 2.

posted on Thursday, June 13, 2013 9:53:11 AM (Romance Daylight Time, UTC+02:00)
# Friday, May 31, 2013
WebAPI got updated 30 May 2013.

If you do a NuGet update you'll probably not be able to build afterwards.

WebAPI depends on WebAPI.WebHost
which depends on WebAPI.Core
which depends on Web.Client
which depends on Microsoft.Net.Http
which now depends on Microsoft.Bcl and Microsoft.Bcl.Build.

Microsoft.Bcl is a portability library which allows .Net 4 etc to use .Net 4.5 types. Apparently it has no effect on 4.5

But it (or at least Bcl.Build) has an ugly bug when you try to build:
Error    12    The "EnsureBindingRedirects" task failed unexpectedly.
System.NullReferenceException: Object reference not set to an instance of an object.
   at Roxel.BuildTasks.EnsureBindingRedirects.MergeBindingRedirectsFromElements(IEnumerable`1 dependentAssemblies)
   at Roxel.BuildTasks.EnsureBindingRedirects.Execute()
   at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute()
   at Microsoft.Build.BackEnd.TaskBuilder.<ExecuteInstantiatedTask>d__20.MoveNext()    Ems.WebApp

The fix is to add culture="neutral" to any binding redirects that are missing it. In the default MVC template it is missing for some, and you almost certainly haven't changed them.
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="System.Web.Helpers" publicKeyToken="31bf3856ad364e35" culture="neutral" />
        <bindingRedirect oldVersion="" newVersion="" />
      </dependentAssembly>
      <dependentAssembly>
        <assemblyIdentity name="System.Web.Mvc" publicKeyToken="31bf3856ad364e35" culture="neutral" />
        <bindingRedirect oldVersion="" newVersion="" />
      </dependentAssembly>
    </assemblyBinding>
Do a Rebuild (rather than a build) to ensure everything's loaded.

Hopefully there will be an update pretty soon.

posted on Friday, May 31, 2013 9:04:42 AM (Romance Daylight Time, UTC+02:00)
# Friday, November 30, 2012

At work I can happily connect to my Azure-hosted Team Foundation Service. But I couldn't do it from home. It says it is looking up identity providers, but the live.com logon screen never shows up. I just see the dreaded TFS31003 error ("Either you have not entered the necessary credentials or your user account does not have permission to connect to Team Foundation Server ").

My home machines are Windows 8 and linked to my personal LiveIDs, not my work logon. Windows 8 likes to connect to Skydrive and lots of other services, storing all those credentials. And Visual Studio picks those rather than allowing me to add a new one. Deleting entries in the Windows credentials store didn't work.

How can I force Visual Studio to select the right logon?

In Visual Studio, View>Other Windows>Web Browser

In the browser, go to live.com, and log on.

On mine it automatically logged on with another of my logons, so I signed off, and then signed back in with the correct one.

Now when I try to connect to the TFS service address, it works.

posted on Friday, November 30, 2012 6:07:20 PM (Romance Standard Time, UTC+01:00)  #    Comments [0]