# Wednesday, July 23, 2014

System.Version

The mscorlib System.Version class accepts 2, 3 or 4 integers to represent a version.
var v = new Version(2014010110, 7383783, 38989899, 893839893);
Console.WriteLine(v.ToString()); //shows 2014010110.7383783.38989899.893839893

The values are Major.Minor[.Build[.Revision]]

As we'll see shortly, actual assembly versions are much more limited!

SemVer

Semantic versioning can be mapped into the .net scheme.

In SemVer, the scheme is Major.Minor.Patch.

  • Major is for breaking changes
  • Minor is for backwards-compatible changes, including additions
  • Patch is for backwards-compatible bug fixes.

So .net Build is equivalent to SemVer Patch, and Revision, which is optional anyway, is disregarded. (The original .net convention was that a different build number meant a recompile of the same source, e.g. with different compilation symbols.)
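
For example (a quick sketch), SemVer 2.1.3 maps onto a three-part .net version:

var semVer = new Version(2, 1, 3); //Major=2, Minor=1, Build(=Patch)=3
Console.WriteLine(semVer.Revision); //-1, i.e. revision is simply unset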

Version attributes

The version attributes are normally in Properties/AssemblyInfo.cs (but could be anywhere).
You can also access AssemblyVersion and AssemblyFileVersion via the project properties - application - [Assembly Information...] button.

There are 3:

//CLR uses this as the version 
[assembly: AssemblyVersion("1.0.0.0")]

//Not used by CLR, often the specific build
[assembly: AssemblyFileVersion("1.0.0.0")]

//If not present, == AssemblyVersion. 
[assembly: AssemblyInformationalVersion("v1.0 RC")]

AssemblyInformationalVersion used to error if it wasn't a System.Version (all ints). Since Visual Studio 2010, you can put in free-form strings, which is useful for tags like "RC".

To access these via code:

var executingAssembly = Assembly.GetExecutingAssembly();
var ver = executingAssembly.GetName().Version; //AssemblyVersion
var fv = System.Diagnostics.FileVersionInfo.GetVersionInfo(executingAssembly.Location);
Console.WriteLine(ver);
Console.WriteLine(fv.FileVersion); //AssemblyFileVersion
Console.WriteLine(fv.ProductVersion); //AssemblyInformationalVersion

There are also fv.ProductMajorPart and fv.ProductMinorPart, but these aren't populated if the AssemblyInformationalVersion can't be parsed into a System.Version.

The values - major, minor, build, revision - are ints in System.Version, so up to 2,147,483,647. But there's a big gotcha: for operating-system reasons, the compiler limits each part of an assembly version to 16 bits, with a maximum of 65,534 (65,535 is reserved).

For AssemblyVersion, you get a CSC Error: "Error emitting 'System.Reflection.AssemblyVersionAttribute' attribute -- 'The version specified '65536.65535.65535.65535' is invalid'"

For AssemblyFileVersion, you get a CSC Warning    "Assembly generation -- The version '65536.65535.65535.65535' specified for the 'file version' is not in the normal 'major.minor.build.revision' format". It will build, at least.
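
So the biggest assembly version that actually compiles is (given the 65,534 limit on each part):

[assembly: AssemblyVersion("65534.65534.65534.65534")]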

Versioning Strategy

  • For all version types, the major and minor parts should be set manually.
  • To simplify CLR versioning, we don't need to increment the AssemblyVersion except once, manually, at final release. For AssemblyVersion, just set major and minor (and perhaps build, for semver); normally build and revision will stay 0.0. We don't want any version numbers changing during developer builds, or even continuous integration builds, unless they are automatically deployed to test.
  • When a dll is published/deployed, we should increment the AssemblyFileVersion.
  • We should be able to trace back to the build.
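
Put together, a sketch of how the three attributes might look under this strategy (the values are illustrative):

[assembly: AssemblyVersion("2.1.0.0")] //fixed until the next major/minor release
[assembly: AssemblyFileVersion("2.1.723.2022")] //stamped on each published build
[assembly: AssemblyInformationalVersion("2.1.0 RC build MyProduct_20140723.3")] //free-form tag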

There are several candidates for traceable build and revision numbers, but none of them are "semver" (they make both build and revision significant).

  • Increment by date, as in wildcards (below): build is days since a specific date, revision is seconds since midnight. But there is no obvious connection between the dll and the build on the build-server.
  • Date parts. We can't fit "year-month-day-hour-minute-second" into 16 bits, so to avoid overflow split it: build is MMdd, revision is HHmm (see the sketch after this list).
  • Build name. TFS uses a buildDef_yyyymmdd.n format for the build name.
  • Changeset number if numeric, and it is less than 65535.

Both build name and changeset number might be better set in AssemblyInformationalVersion.
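
A sketch of that date scheme (the names are mine):

var now = DateTime.UtcNow; //e.g. 2014-07-23 20:22
int build = (now.Month * 100) + now.Day; //723 - always fits in 16 bits
int revision = (now.Hour * 100) + now.Minute; //2022 - likewise
var fileVersion = new Version(2, 1, build, revision); //2.1.723.2022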

Wildcards

For AssemblyVersion only, you can use wildcards for build and revision.
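
For example:

[assembly: AssemblyVersion("1.2.*")] //build and revision are generated at compile time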

If you use it for file version, you get a warning:
CSC : warning CS1607: Assembly generation -- The version '1.2.0.*' specified for the 'file version' is not in the normal 'major.minor.build.revision' format

  • AssemblyVersion build = number of days since 01/01/2000.
  • AssemblyVersion revision = number of seconds since midnight.

If you build twice without changing, the revision goes up. If you build the next day without changes, the build goes up.

Wildcards are pretty useless.

Build Tasks

Build tasks run after the source-control get-latest and before compilation. They find the AssemblyInfo.cs files, clear the read-only flag, and find and replace the AssemblyFileVersion; then compilation proceeds. The changed AssemblyInfo file should not be checked in. The process doesn't run in developer builds, only in "publish" builds.
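
A minimal sketch of the core of such a task, assuming a regex find-and-replace (all names here are mine, and a real task would derive the version from the build name or date):

using System.IO;
using System.Text.RegularExpressions;

static class FileVersionStamper //hypothetical helper
{
    public static void Stamp(string sourceRoot, string fileVersion)
    {
        foreach (var path in Directory.GetFiles(sourceRoot, "AssemblyInfo.cs", SearchOption.AllDirectories))
        {
            //get-latest typically leaves the file read-only; clear the flag so we can edit it
            File.SetAttributes(path, File.GetAttributes(path) & ~FileAttributes.ReadOnly);
            var text = Regex.Replace(File.ReadAllText(path),
                @"AssemblyFileVersion\(""[^""]*""\)",
                @"AssemblyFileVersion(""" + fileVersion + @""")");
            File.WriteAllText(path, text);
        }
    }
}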

MSBuild Extension Pack is a set of msbuild tasks, which is also available as a Nuget package (MSBuild.Extension.Pack). One task, MSBuild.ExtensionPack.VisualStudio.TfsVersion, edits the AssemblyFileVersion given a date or tfs-format build name.

Another project, Community TFS Build Extensions, made by some of the same people, hooks up into TFS 2012/2013 xaml workflows and includes a TfsVersion build activity.

posted on Wednesday, July 23, 2014 8:22:40 PM (Romance Daylight Time, UTC+02:00)  #    Comments [0]
# Friday, March 07, 2014

Forms Authentication in ASP.net is simple, but the FormsIdentity and GenericPrincipal/RolePrincipal are a little too simple. All we get are IIdentity.Name and IPrincipal.IsInRole(x).

Most real applications need a bit more, like the user's full name or email address, or domain-specific data.

Custom Principal

The usual way to do this was to create a custom principal, store extra data in the UserData field of the forms authentication cookie, and rebuild the principal in the asp.net pipeline event "PostAuthenticateRequest".

Here's our custom principal:

    public class UserPrincipal : GenericPrincipal
    {
        public UserPrincipal(IIdentity identity, string[] roles)
            : base(identity, roles)
        {
        }

        public string Email { get; set; }
    }

Here's the login action. Instead of the normal FormsAuthentication.SetAuthCookie, we do it manually (see below):

        [AllowAnonymous]
        [HttpPost]
        public ActionResult Login(LoginModel model, string returnUrl)
        {
            if (ModelState.IsValid) //Required, string length etc
            {
                var userStore = new UserRepository();
                var user = userStore.FindUser(model.UserName, model.Password);
                if (user != null)
                {
                    //FormsAuthentication.SetAuthCookie(user.Name, false);
                    SetAuthCookie(user);
                    //redirect to returnUrl
                    if (!string.IsNullOrEmpty(returnUrl) &&
                        Url.IsLocalUrl(returnUrl) &&
                        !returnUrl.Equals("/Error/NotFound", StringComparison.OrdinalIgnoreCase))
                    {
                        return Redirect(returnUrl);
                    }
                    return Redirect("~/");
                }
                ModelState.AddModelError("UserName", "User or password not found");
            }
            return View(model);
        }

And here's where we set the authentication cookie, here putting our user object as Json into the userData field of the cookie.

        private void SetAuthCookie(User user)
        {
            var userData = JsonConvert.SerializeObject(user);
            var authTicket = new FormsAuthenticationTicket(
                  1, //version
                  user.Name,
                  DateTime.Now, //issue date
                  DateTime.Now.AddMinutes(30), //expiration
                  false,  //isPersistent
                  userData,
                  FormsAuthentication.FormsCookiePath); //cookie path
            var cookie = new HttpCookie(FormsAuthentication.FormsCookieName,
                                        FormsAuthentication.Encrypt(authTicket));
            Response.Cookies.Add(cookie);
        }

Finally, we hook up the PostAuthenticateRequest event. Normal forms authentication will have recognised the authentication cookie and created a GenericPrincipal and FormsIdentity. We unpack the userData field, and create our custom principal.

        protected void Application_PostAuthenticateRequest(Object sender, EventArgs e)
        {
            var context = HttpContext.Current;
            if (context.User == null || !context.User.Identity.IsAuthenticated)
            {
                return;
            }

            var formsIdentity = context.User.Identity as FormsIdentity;
            if (formsIdentity == null)
            {
                return;
            }

            var ticket = formsIdentity.Ticket;
            var userData = ticket.UserData; // Get the stored user-data; in this case, our serialized user
            var user = JsonConvert.DeserializeObject<User>(userData);
            var customPrincipal = new UserPrincipal(formsIdentity, user.RolesList.ToArray());
            customPrincipal.Email = user.Email;
            Thread.CurrentPrincipal = context.User = customPrincipal;
        }

The userdata is encrypted and safe from tampering, but it can make the cookie rather large.

.Net 4.5 making claims

Now in ASP.net 4.5, we have Windows Identity Foundation (WIF, pronounced "dub-i-f") and claims principals and identities. Usually this is discussed with "federation" and single-sign-on identity providers, but actually claims can be useful in "traditional" stand-alone websites like we've just discussed.

ClaimsPrincipals and Identities have a list of Claims. This can be just a property bag with names and values, but there are many standard claim types, defined by OASIS, available as constants on the ClaimTypes class. In addition to ClaimTypes.Name and ClaimTypes.Role, there are Email, GivenName, Surname, DateOfBirth, MobilePhone and so on. These standard defined types mean libraries can discover these claims without defining common interfaces or contracts. But it is also extensible with application-specific claims. The old fixed custom principal is starting to look old-fashioned.
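
For example (the custom claim URI is illustrative), standard and application-specific claims sit happily side by side:

var claims = new List<Claim>
{
    new Claim(ClaimTypes.GivenName, "Alice"),
    new Claim(ClaimTypes.Email, "alice@example.com"),
    new Claim("http://myapp/claims/subscription", "Gold") //application-specific
};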

The WIF session authentication module can take over from forms authentication, storing the claims in cookies. You don't need to use the federation aspects. The module handles cookies a little better than forms authentication - if the cookie gets too large, it's chunked over several cookies. There is also an IsReferenceMode = true option, which keeps the claims data in server-side memory and only sends a simple key in the cookie (it's obviously not webfarm safe).

FormsAuthentication with Claims

First, the configuration. You'll need to define the configuration sections. We are still using forms authentication, so keep the authentication mode=Forms. Add the WIF session authentication module, which will handle the cookie.

<configuration>

  <configSections>
    <section name="system.identityModel"
             type="System.IdentityModel.Configuration.SystemIdentityModelSection, System.IdentityModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" />
    <section name="system.identityModel.services"
             type="System.IdentityModel.Services.Configuration.SystemIdentityModelServicesSection, System.IdentityModel.Services, Version=4.0.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" />
  </configSections>
  <system.web>
    <compilation debug="true" targetFramework="4.5" />
    <httpRuntime targetFramework="4.5" />
    <authentication mode="Forms">
      <forms loginUrl="~/Account/Login" />
    </authentication>
  </system.web>
  <system.webServer>
    <modules>
      <add name="SessionAuthenticationModule"
           type="System.IdentityModel.Services.SessionAuthenticationModule, System.IdentityModel.Services, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"
           preCondition="managedHandler" />
    </modules>
  </system.webServer>

If you aren't using SSL, you need to add the following (note the configSection defined above):

  <system.identityModel.services>
    <federationConfiguration>
      <cookieHandler requireSsl="false" />
    </federationConfiguration>
  </system.identityModel.services>

In the login page, set the user properties into claims.

        private void SetClaimsCookie(User user)
        {
            var claims = new List<Claim>();
            claims.Add(new Claim(ClaimTypes.Name, user.Name));
            claims.Add(new Claim(ClaimTypes.Email, user.Email));
            foreach (var role in user.RolesList)
            {
                claims.Add(new Claim(ClaimTypes.Role, role));
            }
            //needs an authentication issuer otherwise not authenticated
            var claimsIdentity = new ClaimsIdentity(claims, "Forms");
            var claimsPrincipal = new ClaimsPrincipal(claimsIdentity);
            var sessionAuthenticationModule = FederatedAuthentication.SessionAuthenticationModule;
            var token = new SessionSecurityToken(claimsPrincipal);
            sessionAuthenticationModule.WriteSessionTokenToCookie(token);
        }

You don't need the PostAuthenticateRequest event- the WIF session module is doing that bit.

And that's it! [Authorize(Roles = "Admin")] attributes work as normal. Retrieving the claims is simple.

var cp = (ClaimsPrincipal)User;
var email = cp.FindFirst(ClaimTypes.Email); //a Claim (or null if absent); use .Value for the string

Logging out looks like this:

        public ActionResult SignOut()
        {
            var sessionAuthenticationModule = FederatedAuthentication.SessionAuthenticationModule;
            sessionAuthenticationModule.CookieHandler.Delete();

            //FormsAuthentication.SignOut();
            return Redirect("~/");
        }
posted on Friday, March 07, 2014 8:59:14 PM (Romance Standard Time, UTC+01:00)  #    Comments [0]
# Thursday, February 13, 2014


Using a database as a queue is a common requirement. An example is sending emails within a website- it can be slow, error-prone, and you don't want to delay returning a page to the user. So the server processing just queues a request in the database, and a worker process picks it up and tries to execute it.

The real problem is there may be more than one worker process, perhaps running on different servers. By using the table as a queue, they can avoid deadlocks or processing records multiple times.

There are comprehensive breakdowns of table queuing elsewhere, covering heap queues, FIFO and LIFO.

Table

Let's have a [Created] column, and an [IsProcessed] column. Alternatively we could just delete the rows when they are processed.

CREATE TABLE [dbo].[EmailRequests](
	[Id] [int] IDENTITY(1,1) NOT NULL,
	[EmailAddress] [nvarchar](250) NOT NULL,
	[Subject] [nvarchar](50) NOT NULL,
	[Body] [nvarchar](500) NOT NULL,
	[IsProcessed] [bit] NOT NULL,
	[Created] [datetime] NOT NULL,
 CONSTRAINT [PK_EmailRequests] PRIMARY KEY CLUSTERED (	[Id] ASC )
)

The INSERT is just a normal INSERT.

INSERT INTO [EmailRequests]
           ([EmailAddress],[Subject],[Body],[IsProcessed],[Created])
     VALUES
           ('test@email.com','Hello','Spam spam spam',0,CURRENT_TIMESTAMP)

Dequeue

This is a FIFO queue, but the order isn't strict (READPAST skips over locked rows, so rows aren't always taken in exact [Created] order).

  • The lock hints mean lock the row (as normal), but skip any existing locks (so avoiding deadlocks)
  • The OUTPUT clause with the CTE makes it all a single atomic operation
  • The inserted pseudo-table covers UPDATEs and INSERTs; for DELETEs, there is a deleted pseudo-table.

with cte as (
 select top(1) [Id], [IsProcessed], [EmailAddress], [Subject], [Body]
 from [EmailRequests] with (ROWLOCK, READPAST)
 where [IsProcessed]= 0
 order by [Created]
)
update cte
	set [IsProcessed] = 1
	output inserted.[Id], inserted.[EmailAddress], inserted.[Subject], inserted.[Body]
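
A minimal C# worker sketch that runs this dequeue (the class name, connection string handling and the actual emailing are assumptions):

using System.Data.SqlClient;

public class EmailQueueWorker //illustrative name
{
    private readonly string _connectionString;

    public EmailQueueWorker(string connectionString)
    {
        _connectionString = connectionString;
    }

    //claims at most one unprocessed row; returns false when the queue is empty
    public bool TryProcessOne()
    {
        const string dequeueSql = @"
with cte as (
 select top(1) [Id], [IsProcessed], [EmailAddress], [Subject], [Body]
 from [EmailRequests] with (ROWLOCK, READPAST)
 where [IsProcessed] = 0
 order by [Created]
)
update cte
    set [IsProcessed] = 1
    output inserted.[Id], inserted.[EmailAddress], inserted.[Subject], inserted.[Body]";

        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand(dequeueSql, connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                if (!reader.Read()) return false; //nothing to do (or all rows are locked)
                var id = reader.GetInt32(0);
                var emailAddress = reader.GetString(1);
                var subject = reader.GetString(2);
                var body = reader.GetString(3);
                //send the email here; success/failure handling is discussed below
                return true;
            }
        }
    }
}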


To make this a bit more realistic, you could add an [IsEmailSent] column, updated when the emailing succeeds. Only one worker dequeued the record and has the [Id], so this is straightforward. Then you need a process for dealing with records that are [IsProcessed] but not [IsEmailSent] (dequeued, but the email failed). You might retry (in which case, add a [RetryCount] counter up to a maximum), or have a manual alert (the email address is bogus, etc).


Remember that resetting [IsProcessed] to 0 or retrying infinitely may poison the queue!

posted on Thursday, February 13, 2014 7:49:22 PM (Romance Standard Time, UTC+01:00)  #    Comments [0]
# Wednesday, February 12, 2014

Sometimes you want to save a record, but it may be an existing record (UPDATE) or a new one (INSERT). The pattern is sometimes called an “upsert” (update/insert).

You could try to do this the standard ORM way (SELECT to see if it exists; if it does, UPDATE, else INSERT). But if other users are updating at the same time, you will see concurrency errors or deadlocks.

First, let’s look at simpler SQL that is vulnerable to concurrency errors, then two ways of doing it safely.

Simple (not concurrent-safe!!)

UPDATE then check @@ROWCOUNT and INSERT if necessary. Only use this when the same record will not be created by two sources.

DECLARE @CategoryName NVARCHAR(15) = 'Dairy';
DECLARE @Description NVARCHAR(MAX) = 'Milk, cheese and yoghurts';
DECLARE @Id int = null;

UPDATE [Categories]
    SET [Description] = @Description,
        @Id = [CategoryID]
    WHERE [CategoryName] = @CategoryName;

IF @@ROWCOUNT = 0
    INSERT INTO [Categories]
               ([CategoryName]
               ,[Description])
         VALUES
               (@CategoryName
               ,@Description);
--if id is not set in UPDATE, then grab scope identity
SET @Id = ISNULL(@Id, CAST(SCOPE_IDENTITY() AS int));
--select it out
SELECT @Id AS Id;

This example grabs the affected Id too (whether identity insert or update).

Concurrent-safe

A more conventional IF NOT EXISTS... INSERT - ELSE - UPDATE. The lock hints protect against concurrent access (run it inside a transaction so the locks are held across both statements). The UPDLOCK and SERIALIZABLE hints are as suggested in this Sam Saffron blog post from 2007.

IF NOT EXISTS(
        SELECT * FROM [Categories] WITH ( UPDLOCK, SERIALIZABLE )
        WHERE [CategoryName] = @CategoryName )
    BEGIN
        INSERT INTO [Categories]
                   ([CategoryName]
                   ,[Description])
             VALUES
                   (@CategoryName
                   ,@Description);   
        SET @Id = CAST(SCOPE_IDENTITY() AS int);
    END
ELSE
    BEGIN
        UPDATE [Categories]
            SET [Description] = @Description,
                @Id = [CategoryID]
        WHERE [CategoryName] = @CategoryName;
    END

SELECT @Id AS Id;

MERGE

Much the same as before, using the MERGE command. MERGE by itself is not concurrent-safe; you must still use lock hints.

MERGE INTO [Categories] WITH  ( UPDLOCK, SERIALIZABLE ) AS target
    --if/where part
    USING
        (SELECT @CategoryName, @Description ) AS source
            ([CategoryName], [Description])
        ON (target.CategoryName = source.CategoryName)
    --found, so update
    WHEN MATCHED THEN
        UPDATE SET [Description] = @Description,
                @Id = [CategoryID]
    --not found, so insert
    WHEN NOT MATCHED THEN
        INSERT ([CategoryName]
                   ,[Description])
             VALUES
                   (@CategoryName
                   ,@Description);

SET @Id = ISNULL(@Id, CAST(SCOPE_IDENTITY() AS int));

SELECT @Id AS Id;

Other Databases

MySQL has INSERT … ON DUPLICATE KEY UPDATE … and SELECT … FOR UPDATE.

Oracle has SELECT FOR UPDATE cursors.

posted on Wednesday, February 12, 2014 8:32:28 PM (Romance Standard Time, UTC+01:00)  #    Comments [0]
# Thursday, August 08, 2013

I hit this error when a JSON client was posting to an ASP.net MVC controller to get JSON back.

Exception Details: System.InvalidOperationException: The JSON request was too large to be deserialized.

The posted JSON is large, but nowhere near the default maximum request size of 4MB.

There is a .Net security fix (http://support.microsoft.com/kb/2661403):

Microsoft security update MS11-100 limits the maximum number of form keys, files, and JSON members to 1000 in an HTTP request.

You can bypass this (at the risk of denial of service) with an appSetting in web.config:


  <appSettings>
    <add key="aspnet:MaxJsonDeserializerMembers" value="150000" />
  </appSettings>

posted on Thursday, August 08, 2013 1:24:22 PM (Romance Daylight Time, UTC+02:00)  #    Comments [0]
# Thursday, June 13, 2013
The jQuery 2 branch does not support IE 6, 7 or 8. Unless the site is exclusively targeted at mobile, or you have a very small and up-to-date audience, everyone should still use the jQuery 1.9+ branch.

Nuget insists that you want to update to jQuery 2.x

Doh.

The package should not have been updated from 1.x to 2.x. There should have been a separate package for jQuery 2, so .net websites continue to update on the 1.x branch.

There is a workaround.

You must manually change the packages.config in the project. Add a range of allowed versions:
 
<package id="jQuery" version="1.10.1" targetFramework="net45" allowedVersions="[1.7.1,2)" />


Square bracket "[" means "greater than or equal to". So versions from 1.7.1 upwards here...

Closing round bracket ")" means "less than", not inclusive. So versions up to but not including 2.


posted on Thursday, June 13, 2013 9:53:11 AM (Romance Daylight Time, UTC+02:00)  #    Comments [0]
# Friday, May 31, 2013
WebAPI got updated 30 May 2013.

If you do a NuGet update you'll probably not be able to build afterwards.

WebAPI depends on WebAPI.WebHost
which depends on WebAPI.Core
which depends on Web.Client
which depends on Microsoft.Net.Http
which now depends on Microsoft.Bcl and Microsoft.Bcl.Build.

Microsoft.Bcl is a portability library which allows .Net 4 etc to use .Net 4.5 types. Apparently it has no effect on 4.5.

But it (or at least Microsoft.Bcl.Build) has an ugly bug when you try to build:
Error    12    The "EnsureBindingRedirects" task failed unexpectedly.
System.NullReferenceException: Object reference not set to an instance of an object.
   at Roxel.BuildTasks.EnsureBindingRedirects.MergeBindingRedirectsFromElements(IEnumerable`1 dependentAssemblies)
   at Roxel.BuildTasks.EnsureBindingRedirects.Execute()
   at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute()
   at Microsoft.Build.BackEnd.TaskBuilder.<ExecuteInstantiatedTask>d__20.MoveNext()    Ems.WebApp


The fix is to add culture="neutral" to any binding redirects that are missing it. In the default MVC template it is missing for some assemblies, and you almost certainly haven't changed them.
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="System.Web.Helpers" publicKeyToken="31bf3856ad364e35" culture="neutral" />
        <bindingRedirect oldVersion="1.0.0.0-2.0.0.0" newVersion="2.0.0.0" />
      </dependentAssembly>
      <dependentAssembly>
        <assemblyIdentity name="System.Web.Mvc" publicKeyToken="31bf3856ad364e35" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-4.0.0.0" newVersion="4.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
Do a Rebuild (rather than a build) to ensure everything's loaded.

Hopefully there will be an update pretty soon.



posted on Friday, May 31, 2013 9:04:42 AM (Romance Daylight Time, UTC+02:00)  #    Comments [0]
# Friday, November 30, 2012

At work I can happily connect to my Azure-hosted Team Foundation Service. But I couldn't do it from home. It says it is looking up identity providers, but the live.com logon screen never shows up. I just see the dreaded TFS31003 error ("Either you have not entered the necessary credentials or your user account does not have permission to connect to Team Foundation Server ").

My home machines are Windows 8 and linked to my personal LiveIDs, not my work logon. Windows 8 likes to connect to SkyDrive and lots of other services, storing all those credentials. And Visual Studio picks those rather than allowing me to add a new one. Deleting entries in the Windows credential store didn't work.

How can I force Visual Studio to select the right logon?

In Visual Studio, View>Other Windows>Web Browser

In the browser, go to live.com, and log on.

On mine it automatically logged on with another of my logons, so I signed off, and then signed back in with the correct one.

Now when I try to connect to the TFS service address, it works.

posted on Friday, November 30, 2012 6:07:20 PM (Romance Standard Time, UTC+01:00)  #    Comments [0]
# Saturday, June 30, 2012

MsTest, Nunit, MbUnit etc

Unit testing is now available in VS 2012 Express. In the paid-for SKUs, you can use other unit test frameworks, not just MSTest. For instance, NUnit, MbUnit and so on.

First, install an adapter via Tools > Extensions and Updates.


Then install the framework via Nuget- and start writing tests!


Test windows

There's now only one, the Test Explorer. Most people only ever used the Test Results before, and Test Lists was unusable. It has a thin red or green bar (at last!), and simple splitting into Failed and Passed tests (now with timings).

There's a search with filters ("FullName: Domain.Tests.MyTest"). For solutions with large numbers of tests this might be awkward- I'd like to see more customization of the result tree (by project/folder or namespace), and more filters.

The results at the bottom of the window are summarized, but you can still click through to the test and stack.

There's a button to make every build (Ctrl-Shift-B) also run the tests - "Run Tests After Build". Builds and tests are run in the background so it doesn't stop you coding (well, my underpowered 32-bit laptop is less responsive, but it's vastly better than previous versions of VS). Not so good if you include some integration tests, but nice nonetheless.


Features

Unit tests support async tests. And code coverage (in Premium and Ultimate only) is much easier- no .testsettings, no having to select the dlls, just right click the tests (or add /enablecodecoverage on the vstest.console.exe command line).
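
For instance, from the command line (the test dll name is illustrative):

vstest.console.exe Domain.Tests.dll /EnableCodeCoverage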


You can't test private methods anymore (didn't use that anyway). You can't Generate Unit Tests from a method either. In VS2010 I used this a fair amount- I got a test project with the correct references, and the correct namespaces as well, even if the stub test it created normally had to be deleted and rewritten straight away. I'll miss that.

Key Mappings

Of course, they broke something. I always use Ctrl+R, T to run the current test - I hold Ctrl and type R then T. That just doesn't work in VS2012: you have to hold Ctrl and type R, then release Ctrl and press T. The combination I used, which turns out to be Ctrl+R, Ctrl+T, isn't mapped in VS2012. You can remap it manually. Very annoying.


Microsoft.Fakes: Stubs and Shims

These are mocking classes, similar to Moq, NMock and RhinoMocks, and derived from the Pex Moles project. It's simpler than Moq: there's no "Setup(" or "Verify". It's also VS Ultimate only (not in Premium or Professional). Personally I much prefer to do simple manual stubs (implement an interface in the test project) than full mocking. Full mocks are powerful but you end up with loads of code setting up the tests (tight coupling), and they make it easy to add too many dependencies (just because you can test it doesn't mean you can forget all about the SRP).

Microsoft.Fakes isn't going to replace mocking frameworks (there's no behaviour verification). The shimming is very powerful (and dangerous), similar to TypeMock Isolator and other expensive tools.

To add Fakes, right-click the reference and choose "Add Fakes Assembly".


You'll get a reference to Microsoft.QualityTools.Testing.Fakes and a project folder called Fakes with an xml file in it. For example, in a web application, you'll probably want to fake System.Web so you can handle all the HttpContext/Request stuff.

Stubs are simple (MSDN). Let's stub our input object and initialize its value. All properties are automatically prepared to return the defaults (0s or nulls). The stub type (for the .Net framework or a local assembly) has a "Stub" prefix.

        [Test]
        public void Test20()
        {
            //arrange
            var entity = new ClassLibrary1.Tasks.Fakes.StubEntity();
            entity.Value = 20;
            var processor = new Processor();

            //act
            var result = processor.Execute(entity);

            //assert
            Assert.That(result.Value, Is.EqualTo(20));
        }

Shims are more difficult and powerful (MSDN).

Here's a method we want to test:

    public class FileReader
    {
        public string ReadAllText(string path)
        {
            return System.IO.File.ReadAllText(path);
        }
    }

Here's how we can test it. Here we have to fake the System reference (which includes mscorlib). Note for shims, we have to have a ShimsContext. The prefix is "Shim" and methods and properties are prepared with lambda functions.

        [Test]
        public void ReadAllTextTest()
        {
            //arrange
            string result;
            var reader = new FileReader();
            using (Microsoft.QualityTools.Testing.Fakes.
                ShimsContext.Create())
            {
                System.IO.Fakes.ShimFile.ReadAllTextString = (arg) => "x";

                //act
                result = reader.ReadAllText(@"X:\doesnotexist\notThere.txt");
            }

            //assert
            Assert.That(result, Is.EqualTo("x"));
        }
posted on Saturday, June 30, 2012 3:57:04 PM (Romance Daylight Time, UTC+02:00)  #    Comments [0]
# Friday, June 29, 2012

The last TechEd, 2010 in Berlin, was a little disappointing. After taking a year off, Microsoft moved this time to Amsterdam.


Ok, I know we're not supposed to be sightseeing, but central Amsterdam is fun and easy to explore on foot. Another incidental but important point: Amsterdam in June is a lot warmer than Berlin in November. The weather was warm and humid under grey skies. The conference air conditioning struggled a little at times, so it was a little uncomfortable.


The venue was a little outside Amsterdam centre, but easily accessible by Metro. Finding rooms was, as always, a bit challenging, but by the middle of the week we had sort of worked it out.

I bet 90% of attendees tried to press the big buttons on the check-in screens. Turns out they weren't touch screens, you had to use the mouse. This at an event promoting Windows 8's touch screen abilities. Even more lame was that Wi-Fi was down for all Tuesday.

The bag was certainly nicer than 2010's. No T-shirt or other swag. This is the week that Google's developer event gave every attendee a new phone, a new tablet, and their new media streamer device. No Surface tablets or Nokia phones here. This is the first place I've seen anyone else with a Windows Phone- there were quite a few around. Three quarters of attendees had iPhones or Androids though. This is about as faithful an audience as Microsoft can get, and Windows Phone is in a minority. That's pretty bad for Microsoft and Nokia.

We'll get TechNet subscriptions, but they don't contain Visual Studio. For developers, who generally feel like second-class citizens at these events, it's disappointing. Give us a cheap tablet with Windows 8 RC to play with, and we might be a little less sceptical about Metro. If we can't get enthusiastic about it, no-one will be. Why not DVDs with the RCs of Windows 8 and Visual Studio 2012, just to save us some bandwidth?

There were competitions to win Lumia 900s in the expo, but otherwise swag was disappointing there too.

Dinners were vast - the scale is always impressive. Good food, too.

The delegate party was at the Amsterdam arena, and that was a pretty good venue. Plenty of beer, cheese, and other nibbles, and huge screens to show the football. The music was way too loud, though (we're old boring gits, not teenagers).


The keynotes on Tuesday and Wednesday heavily promoted Windows Server 2012 (Tues) and Windows 8/Metro (Wed). The key message from the 2nd keynote seems to be that Windows 8 scales uniformly up from phone to tablet to desktop. Which is a different story to Apple, who have a clear distinction between iOS and the OS X versions (and MS's own previous CE/full Windows split). A few glitches when the Metro gestures didn't work properly, which was amusing and disturbing.

My impression is that while it might work well on a small touchscreen, it's not obvious or easily discoverable for a desktop. The demo applications look nice, but most internal and third party business applications look like crap and there's nothing to make the average developer into a decent UX designer. Even properly designed metro apps use lots of whitespace, and have a low information density. Being chromeless actually makes it more difficult to understand what you can do (the Windows Phone IE has tabs, but I still haven't figured out how to switch between them- I think Windows 8 metro IE is the same). The limited choice of full screen or side-by-side docking is just inadequate for a lot of normal PC users.

There were some good sessions. Honestly I'm not sure I learned a lot new, but most sessions were a good review and clarification of what's current (see my Azure posting). All the information is already on the internet, so a few sessions turned out to be quite boring. But it can be hard to keep up when searches bring up old blog posts that are way out of date, and with the sheer range of things that are going on. Best session was Scott Gu's Azure introduction (some of the following Azure sessions repeated the same information, and were quite dull as a result). Mads Kristensen from the asp.net team was great too, showing current Visual Studio work that only existed on his computer.

The last one was disappointing because there wasn't any new stuff coming out. This year there is, so overall it was worthwhile (thanks to my company for paying for it!). I wonder if these conferences will still be relevant much longer. I'm not arguing that we should get lots of free swag, but they do have to give a compelling reason to physically attend. Apart from all the free beer, of course.

posted on Friday, June 29, 2012 10:16:38 PM (Romance Daylight Time, UTC+02:00)  #    Comments [0]