# Tuesday, January 26, 2016

VS 2015 Update 1 had big problems on one of my machines.

This was helpful: How to restore Visual Studio 2015 after the Update 1 (dependency dance)

An MS engineer has a full solution (basically a new devenv.config): here

posted on Tuesday, January 26, 2016 10:13:31 AM (Romance Standard Time, UTC+01:00)  #    Comments [0]
# Tuesday, January 19, 2016

A link dump (slides and source) from NDC London, January 2016

ASP.Net 5

Workshop by David Fowler, Damian Edwards. See docs: https://docs.asp.net/en/latest/

Asp.Net 5 authorization workshop by Barry Dorrans

Asp.net 5 Razor taghelpers
Source code: https://github.com/aspnet/Mvc/tree/dev/src/Microsoft.AspNet.Mvc.TagHelpers

Asp.Net 5 Security - slides, by Barry Dorrans

Dominick Baier on Asp 5/MVC6 security

Brock Allen introduction to Identity Server

Other .net

Jon Skeet on immutability in .net

Ian Cooper on service discovery

https://vimeo.com/144824826 - Video of Jimmy Bogard's "Full Stack REST" talk, from another conference (Nov 2015)


https://speakerdeck.com/tastapod/business-mapping - Dan North on Business Mapping
Agile doesn't scale, so this maps people skills (and aspirations) to business KPIs

http://www.slideshare.net/Kevlin/solid-deconstruction - Kevlin Henney on SOLID (yes, open/closed is confusing, but it really just means: don't publish a public API prematurely)

http://bodil.lol/generators/#0 - Bodil Stokke on Javascript generators

posted on Tuesday, January 19, 2016 1:21:31 PM (Romance Standard Time, UTC+01:00)  #    Comments [0]
# Monday, January 11, 2016

I'm at the NDC London conference, doing the Asp.net 5 introduction by Microsoft's Damian Edwards and David Fowler, who lead the team writing it. I've downloaded a couple of previous milestones, but it was clear the API was changing radically, so I didn't follow too closely. Now we're on release candidate 1, with RC2 due in February and the final release at the end of March 2016.

Workshop files are here: https://github.com/damianedwards/aspnet5-workshop

Asp.Net 5 docs are here: https://docs.asp.net/en/latest/

My first impressions from the first day:

It's still changing a lot - we're working off the CI build nuget feeds, so it's RC1 + update 1 + lots of changes. RC2 has some big changes still to come, including renaming things. It's still too early to start playing in depth, let alone thinking about doing real work in it. There's a lot that isn't there at all. Surprisingly, there's no MsTest - you have to use XUnit.

But it's not long until it will be out. And it has some major changes - as big a shift as MVC was from web forms. Developers will have to spend a lot of time learning a new framework.

Asp MVC and WebApi are pretty stable, simple and powerful, but System.Web is a huge beast with legacy going back to ASP classic compatibility. Asp 5 doesn't use System.Web, although some concepts are copied over (there's still something called HttpContext, it just has a lot less in it). So it's faster, more consistent, more portable - for cloud, and cross-platform.

The main changes (from the first day):

There's a new project structure. The website is built into a wwwroot folder - the old VS publish is redundant. On the solution level, there's a global.json (the project list), although .sln still exists. On the project level, there's a project.json and (not seen in VS) an .xproj file that between them replace the .csproj. You can also add other json files- hosting, appsettings.
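For illustration, an RC1-style global.json might look roughly like this (the folder names and sdk version here are my assumptions, not from the workshop):

```json
{
  "projects": [ "src", "test" ],
  "sdk": { "version": "1.0.0-rc1-final" }
}
```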

It's based on nuget packages, so it's easily deployed side-by-side. The downside is that the packages are very granular and manual to manage. Visual Studio auto-suggests from the nuget feeds while you enter dependencies in project.json. To make it easier, there will be standard packages with simple names that bring in a bunch of other packages (but they aren't there yet). On the MVC level, the Microsoft.AspNet.Mvc package is probably the one you want, unless you want to individually reference Microsoft.AspNet.Mvc.Core and half a dozen other packages for the precise combination of features you're using. Still, we've swapped dll hell for version hell when packages start updating independently.
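As a sketch, a minimal RC1-era project.json pulling in the umbrella MVC package might look like this (the exact package and version names are my best guess for RC1 and may change by RC2):

```json
{
  "version": "1.0.0-*",
  "dependencies": {
    "Microsoft.AspNet.Mvc": "6.0.0-rc1-final",
    "Microsoft.AspNet.Server.Kestrel": "1.0.0-rc1-final"
  },
  "frameworks": {
    "dnx451": { },
    "dnxcore50": { }
  }
}
```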

There is a plan for zero day patching from Windows Update (presumably this is where the GAC comes back). The proposal is that packages are supported for 12 months- so when an update comes out, you have a year to update. In most organizations, applications get released and the team disperses, and no updates are ever done. With Asp.net 5, that could be a problem.

It's bootstrapped just like a classic console app - with a public static void Main(string[] args) method. (This was hidden away in IIS before.) IIS will now use the static void Main, and more things get done in asp and its Kestrel server (Kestrel now sits behind IIS, as well as being able to run standalone; HttpListener sounds like it's gone).

Dependency Injection is built in and pervasive. It's used when configuring your asp.net 5 environment, in preference to base classes or interfaces. In controllers it's not too bad, but in your startup class and middleware you don't have any way to discover what you can do. You have a Configure method which is magically called with the arguments you specify - say, an IApplicationBuilder or an ILoggerFactory. We're going to be relying on the documentation website (and I hope the docs cover updates and are versioned).
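A sketch of an RC1-era Startup class (package and namespace names were still changing, so treat these as assumptions). Note there is no base class or interface: the hosting layer finds these methods by convention and injects whatever parameters you declare.

```csharp
using Microsoft.AspNet.Builder;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;

public class Startup
{
    // Called by convention; services registered here are injectable everywhere.
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
    }

    // The hosting layer inspects the parameter list and injects what you ask for -
    // say, an IApplicationBuilder or an ILoggerFactory.
    public void Configure(IApplicationBuilder app, ILoggerFactory loggerFactory)
    {
        loggerFactory.AddConsole();
        app.UseMvc();
    }
}
```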

posted on Monday, January 11, 2016 10:33:05 PM (Romance Standard Time, UTC+01:00)  #    Comments [0]
# Tuesday, August 04, 2015
One of my 4 machines immediately offered to upgrade to Windows 10. The rest are just sitting there, probably because Microsoft is staggering upgrades over weeks or months.

It's simple to force an update.

Use the Windows 10 media creation tool: http://www.microsoft.com/en-us/software-download/windows10
Download it (it's 18MB), and run it. Select "Upgrade this PC now". It downloads the multiple gigabytes of Windows 10, then installs.

Privacy settings

Near the end of installation, there will be a screen "Get Going Fast" with the default button "Use Express Settings". Instead click the small link "Customise settings" (it's "Customize" in US versions, "Customise" in UK).

On "Customise Settings" first page, turn off "use page prediction" (2nd option), and at least the first 2 "Connectivity" options below that (automatically connect to open hotspots, networks shared by contacts).
"Smartscreen" and "Send error and diagnostic information" are acceptable.

On "Customise Settings" second page ("Personalisation" and "Location"), turn off all of them.

If you use a local account, it will show "Make it yours" which will convert you to a Microsoft (hotmail/outlook.com) account. Click the small link at the bottom "Skip this step".

Edge is the default browser

Settings > System > Default apps
You can change the Browser, Music, Photo and other defaults here.

Chrome and Firefox will both complain they are no longer the default browser; Windows 10 will open the default apps page for you.


Settings is the new control panel. If you followed the defaults during installation, you can fix them here.

If you accidentally switched to a Microsoft account, Accounts > Your account > "Sign in with a local account instead".

In Privacy, fix the options under "General", "Location", and "Speech, inking and typing".

In Network & Internet > Wi-Fi > Manage Wi-Fi Settings > "Wi-Fi Sense" turn off both options (Connect to open hotspots, connect to networks shared by contacts).

In Update & Security > Windows Update > Advanced Options > Choose how updates are delivered > make sure you don't use "PCs on my local network, and PCs on the internet" (the "PCs on my local network" should be okay on a home network).

posted on Tuesday, August 04, 2015 9:17:38 AM (Romance Daylight Time, UTC+02:00)  #    Comments [0]
# Sunday, March 29, 2015

I finally moved my project "database schema reader" from Codeplex to Github.

Github import was simple and quick. I didn't import the historical issues (the project went on Codeplex in 2010, and it was a couple of years old then). I just added an "upstream" remote so I can still push to Codeplex and keep it updated. 

Codeplex comments and discussions are disabled at the moment; "we're anticipating that this functionality will be re-enabled early next week"! The codeplex blog hasn't been updated in 2 years, and the "codeplex" discussions are full of spam (fortunately I've never had spam in the project discussions). There are many third-party services that integrate with Github, but nothing links into Codeplex.

The main advantage of the move to Github was hooking into Appveyor for CI builds. It took a few builds to experiment with things. I'm using database integration tests with SqlServer, SQLite, PostgreSql and MySql. Now that these are working, the tests are more portable - I can easily recreate the databases. The packaging of the builds (dlls into a zip, plus a nuget package) was also easy, and can create releases within Github. The appveyor.yml file isn't the easiest way to configure builds, and I soon figured out it's better to use the website UI.
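For reference, the appveyor.yml equivalent of a simple build looks roughly like this (the keys are standard AppVeyor ones, but the specifics are invented for illustration; as noted, the website UI is easier):

```yaml
version: 1.0.{build}
services:            # database services pre-installed on the build worker
  - mysql
  - postgresql
before_build:
  - nuget restore
build:
  verbosity: minimal
artifacts:
  - path: '**\*.nupkg'
```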

Overall, much better than TFS (ugh!) and TeamCity builds. I played a little with Travis CI (building in Mono), but for now it's not worth pursuing.

I tried Coverity for code analysis, which flagged a few (minor) things, but didn't seem to add a lot more than FxCop.

I looked at Coveralls, a code coverage service. Code coverage is 77%, incidentally, which I think is quite good for something that includes code to support Oracle, DB2, Ingres, 3 different Sybase databases, VistaDb, Firebird and Intersystems Cache (for now, these must continue to be tested on my machine only). I don't believe code coverage is very useful, so for now I won't include this in the Appveyor build.

I'm very impressed, and very happy with Appveyor.

posted on Sunday, March 29, 2015 5:41:03 PM (Romance Daylight Time, UTC+02:00)  #    Comments [0]
# Wednesday, July 23, 2014


The mscorlib System.Version class accepts 2, 3 or 4 integers to represent a version.
var v = new Version(2014010110, 7383783, 38989899, 893839893);
Console.WriteLine(v.ToString()); //shows 2014010110.7383783.38989899.893839893

The values are Major.Minor[.Build[.Revision]]

As we'll see shortly, actual assembly versions are much more limited!


Semantic versioning can be mapped into the .net scheme.

In SemVer, the scheme is Major.Minor.Patch.

  • Major is breaking changes
  • Minor is backwards compatible changes including additions
  • Patch is backwards compatible bug fixes.

So .net Build is equivalent to semver Patch, and revision, which is optional anyway, is disregarded. (The original .net convention was that build used the same source with different symbols).
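The mapping can be seen directly with System.Version: SemVer's Patch lands in Build, and Revision reads as -1 when not specified.

```csharp
using System;

class SemVerMapping
{
    static void Main()
    {
        // A SemVer 2.1.3 release expressed as a .NET version
        var v = Version.Parse("2.1.3");
        Console.WriteLine(v.Major);    // 2  - breaking changes
        Console.WriteLine(v.Minor);    // 1  - compatible additions
        Console.WriteLine(v.Build);    // 3  - bug fixes (SemVer "Patch")
        Console.WriteLine(v.Revision); // -1 - not specified
    }
}
```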

Version attributes

The version attributes are normally in Properties/AssemblyInfo.cs (but could be anywhere).
You can also access AssemblyVersion and AssemblyFileVersion via the project properties - application - [Assembly Information...] button.

There are 3:

//CLR uses this as the version (example values)
[assembly: AssemblyVersion("1.0.0.0")]

//Not used by the CLR; often the specific build
[assembly: AssemblyFileVersion("1.0.0.0")]

//If not present, defaults to AssemblyVersion
[assembly: AssemblyInformationalVersion("v1.0 RC")]

AssemblyInformationalVersion used to error if it wasn't a System.Version (all ints). Since Visual Studio 2010, you can put in free-form strings, which is useful for tags like "RC".

To access these via code:

var executingAssembly = Assembly.GetExecutingAssembly();
var ver = executingAssembly.GetName().Version; //AssemblyVersion
var fv = System.Diagnostics.FileVersionInfo.GetVersionInfo(executingAssembly.Location);
Console.WriteLine(fv.FileVersion); //AssemblyFileVersion
Console.WriteLine(fv.ProductVersion); //AssemblyInformationalVersion

There are also fv.ProductMajorPart and fv.ProductMinorPart, but these aren't populated if the AssemblyInformationalVersion can't be parsed into a System.Version.

The values - major, minor, build, revision - are ints, up to 2,147,483,647. But there's a big gotcha. For operating system reasons, the compiler limits values to 65,534 (one less than UInt16.MaxValue).

For AssemblyVersion, you get a CSC Error: "Error emitting 'System.Reflection.AssemblyVersionAttribute' attribute -- 'The version specified '65536.65535.65535.65535' is invalid'"

For AssemblyFileVersion, you get a CSC warning: "Assembly generation -- The version '65536.65535.65535.65535' specified for the 'file version' is not in the normal 'major.minor.build.revision' format". It will build, at least.

Versioning Strategy

  • For all version types, the major and minor parts should be manually set.
  • To simplify CLR versioning, we don't need to increment the AssemblyVersion except once, manually, at final release. For AssemblyVersion, just set major and minor (and perhaps build for semver). Normally build and revision will always be 0.0.  We don't want any version numbers changing during developer builds, or even continuous integration builds unless they are automatically deployed to test.
  • When a dll is published/deployed, we should increment the AssemblyFileVersion.
  • We should be able to trace back to the build.
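A hypothetical AssemblyInfo.cs following this strategy (the version numbers and build name below are invented for illustration):

```csharp
using System.Reflection;

[assembly: AssemblyVersion("1.2.0.0")]           // fixed; bumped manually at final release
[assembly: AssemblyFileVersion("1.2.119.1321")]  // stamped by the publish build
[assembly: AssemblyInformationalVersion("1.2 beta, MyBuildDef_20160119.3")] // free text: build name or changeset
```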

There are several candidates for traceable build and revision numbers, but none are "semver" (both build and revision are significant).

  • Increment by date, as in wildcards (below): build is days since a specific date, revision is seconds since midnight. But there is no obvious connection between the dll and the build on the build-server.
  • Date, except we can't fit "year-month-day-hour-minute-second" into an int16. You could compress it: build is mmdd, revision is hhmm.
  • Build name. TFS uses a buildDef_yyyymmdd.n format for the build name.
  • Changeset number if numeric, and it is less than 65535.
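The date-compression option above (build = mmdd, revision = hhmm) can be sketched like this; both values stay well under the 65,534 limit, at the cost of losing the year:

```csharp
using System;

static class VersionStamp
{
    // build = month and day packed as mmdd, e.g. 19 Jan -> 119
    public static int Build(DateTime d) { return d.Month * 100 + d.Day; }
    // revision = hour and minute packed as hhmm, e.g. 13:21 -> 1321
    public static int Revision(DateTime d) { return d.Hour * 100 + d.Minute; }
}

class Program
{
    static void Main()
    {
        var d = new DateTime(2016, 1, 19, 13, 21, 0);
        Console.WriteLine("1.2." + VersionStamp.Build(d) + "." + VersionStamp.Revision(d)); // 1.2.119.1321
    }
}
```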

Both build name and changeset number might be better set in AssemblyInformationalVersion.


For AssemblyVersion only, you can use wildcards for build and revision.

If you use it for file version, you get a warning:
CSC : warning CS1607: Assembly generation -- The version '1.2.0.*' specified for the 'file version' is not in the normal 'major.minor.build.revision' format

  • AssemblyVersion build = number of days since 01/01/2000.
  • AssemblyVersion revision = number of seconds since midnight, divided by two.

If you build twice without changing, the revision goes up. If you build the next day without changes, the build goes up.

Wildcards are pretty useless.

Build Tasks

Build tasks run after the source control get-latest and before compilation. They find the AssemblyInfo.cs files, flip the read-only flag, and find and replace the AssemblyFileVersion, then compilation proceeds. The changed AssemblyInfo file should not be checked in. The process is not run in developer builds, only in "publish" builds.
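The find-and-replace step itself is simple; here is a minimal sketch of what such a task does (the file path and version are assumptions; real tasks like TfsVersion handle this for you):

```csharp
using System.IO;
using System.Text.RegularExpressions;

static class VersionStamper
{
    // Swap whatever AssemblyFileVersion is in the file text for the new one
    public static string Replace(string assemblyInfoText, string newVersion)
    {
        return Regex.Replace(assemblyInfoText,
            "AssemblyFileVersion\\(\"[^\"]*\"\\)",
            "AssemblyFileVersion(\"" + newVersion + "\")");
    }

    public static void Stamp(string path, string newVersion)
    {
        File.SetAttributes(path, FileAttributes.Normal); // flip the read-only flag
        File.WriteAllText(path, Replace(File.ReadAllText(path), newVersion));
    }
}
```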

MSBuild Extension Pack is a set of msbuild tasks, which is also available as a Nuget package (MSBuild.Extension.Pack). One task, MSBuild.ExtensionPack.VisualStudio.TfsVersion, edits the AssemblyFileVersion given a date or tfs-format build name.

Another project, Community TFS Build Extensions, made by some of the same people, hooks into TFS 2012/2013 xaml workflows and includes a TfsVersion build activity.

posted on Wednesday, July 23, 2014 8:22:40 PM (Romance Daylight Time, UTC+02:00)  #    Comments [0]
# Friday, March 07, 2014

Forms Authentication in ASP.net is simple, but FormsIdentity and GenericPrincipal/RolePrincipal are a little too simple. All we get are IIdentity.Name and IPrincipal.IsInRole(x).

Most real applications need a bit more, like the user's full name or email address, or domain-specific data.

Custom Principal

The usual way to do this is to combine a custom principal, the UserData field in the forms authentication cookie, and the asp.net pipeline event "PostAuthenticateRequest".

Here's our custom principal:

    public class UserPrincipal : GenericPrincipal
    {
        public UserPrincipal(IIdentity identity, string[] roles)
            : base(identity, roles)
        {
        }

        public string Email { get; set; }
    }

Here's the login action. Instead of the normal FormsAuthentication.SetAuthCookie, we do it manually (see below):

        public ActionResult Login(LoginModel model, string returnUrl)
        {
            if (ModelState.IsValid) //Required, string length etc
            {
                var userStore = new UserRepository();
                var user = userStore.FindUser(model.UserName, model.Password);
                if (user != null)
                {
                    //instead of FormsAuthentication.SetAuthCookie(user.Name, false);
                    SetAuthCookie(user);
                    //redirect to returnUrl
                    if (!string.IsNullOrEmpty(returnUrl) &&
                        Url.IsLocalUrl(returnUrl) &&
                        !returnUrl.Equals("/Error/NotFound", StringComparison.OrdinalIgnoreCase))
                    {
                        return Redirect(returnUrl);
                    }
                    return Redirect("~/");
                }
                ModelState.AddModelError("UserName", "User or password not found");
            }
            return View(model);
        }

And here's where we set the authentication cookie, here putting our user object as Json into the userData field of the cookie.

        private void SetAuthCookie(User user)
        {
            var userData = JsonConvert.SerializeObject(user);
            var authTicket = new FormsAuthenticationTicket(
                  1, //version
                  user.Name, //name
                  DateTime.Now, //issue date
                  DateTime.Now.AddMinutes(30), //expiration
                  false, //isPersistent
                  userData, //our serialized user
                  FormsAuthentication.FormsCookiePath); //cookie path
            var cookie = new HttpCookie(FormsAuthentication.FormsCookieName,
                  FormsAuthentication.Encrypt(authTicket));
            Response.Cookies.Add(cookie);
        }

Finally, we hook up the PostAuthenticateRequest event in Global.asax. Normal forms authentication will have recognised the authentication cookie and created a GenericPrincipal and FormsIdentity. We unpack the userData field and create our custom principal.

        protected void Application_PostAuthenticateRequest(Object sender, EventArgs e)
        {
            var context = HttpContext.Current;
            if (context.User == null || !context.User.Identity.IsAuthenticated)
                return;

            var formsIdentity = context.User.Identity as FormsIdentity;
            if (formsIdentity == null)
                return;

            var ticket = formsIdentity.Ticket;
            var userData = ticket.UserData; //get the stored user-data, in this case our serialized user
            var user = JsonConvert.DeserializeObject<User>(userData);
            var customPrincipal = new UserPrincipal(formsIdentity, user.RolesList.ToArray());
            customPrincipal.Email = user.Email;
            Thread.CurrentPrincipal = context.User = customPrincipal;
        }

The user data is encrypted and safe from tampering, but it can make the cookie rather large.

.Net 4.5 making claims

Now in ASP.net 4.5, we have Windows Identity Foundation (WIF, pronounced "dub-i-f") and claims principals and identities. Usually this is discussed with "federation" and single-sign-on identity providers, but actually claims can be useful in "traditional" stand-alone websites like we've just discussed.

ClaimsPrincipals and Identities have a list of Claims. This can be just a property bag of names and values, but there are many standard claim names, defined by OASIS, in the ClaimTypes class. In addition to ClaimTypes.Name and ClaimTypes.Role, there are Email, GivenName, Surname, DateOfBirth, MobilePhone and so on. These standard defined types mean libraries can discover these claims without defining common interfaces or contracts. But it is also extensible with application-specific claims. The old fixed custom principal is starting to look old-fashioned.

The WIF session authentication module can take over from forms authentication, storing the claims in cookies. You don't need to use the federation aspects. The module handles cookies a little better than forms authentication - if the cookie gets too large, it's chunked over several cookies. There is also a reference mode, which keeps the claims data in server-side memory and only sends a simple key in the cookie (it's obviously not webfarm safe).

FormsAuthentication with Claims

First, the configuration. You'll need to define the configuration sections. We are still using forms authentication, so keep authentication mode="Forms". Add the WIF session authentication module, which will handle the cookie.

    <configSections>
      <section name="system.identityModel"
               type="System.IdentityModel.Configuration.SystemIdentityModelSection, System.IdentityModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" />
      <section name="system.identityModel.services"
               type="System.IdentityModel.Services.Configuration.SystemIdentityModelServicesSection, System.IdentityModel.Services, Version=4.0.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" />
    </configSections>

    <system.web>
      <compilation debug="true" targetFramework="4.5" />
      <httpRuntime targetFramework="4.5" />
      <authentication mode="Forms">
        <forms loginUrl="~/Account/Login" />
      </authentication>
    </system.web>

    <system.webServer>
      <modules>
        <add name="SessionAuthenticationModule"
             type="System.IdentityModel.Services.SessionAuthenticationModule, System.IdentityModel.Services, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"
             preCondition="managedHandler" />
      </modules>
    </system.webServer>

If you aren't using SSL, you need to add the following (inside the system.identityModel.services section declared above):

    <system.identityModel.services>
      <federationConfiguration>
        <cookieHandler requireSsl="false" />
      </federationConfiguration>
    </system.identityModel.services>

In the login page, set the user properties into claims.

        private void SetClaimsCookie(User user)
        {
            var claims = new List<Claim>();
            claims.Add(new Claim(ClaimTypes.Name, user.Name));
            claims.Add(new Claim(ClaimTypes.Email, user.Email));
            foreach (var role in user.RolesList)
            {
                claims.Add(new Claim(ClaimTypes.Role, role));
            }
            //needs an authentication type, otherwise the identity is not authenticated
            var claimsIdentity = new ClaimsIdentity(claims, "Forms");
            var claimsPrincipal = new ClaimsPrincipal(claimsIdentity);
            var token = new SessionSecurityToken(claimsPrincipal);
            var sessionAuthenticationModule = FederatedAuthentication.SessionAuthenticationModule;
            sessionAuthenticationModule.WriteSessionTokenToCookie(token);
        }

You don't need the PostAuthenticateRequest event- the WIF session module is doing that bit.

And that's it! [Authorize(Roles = "Admin")] attributes work as normal. Retrieving the claims is simple.

var cp = (ClaimsPrincipal)User;
var email = cp.FindFirst(ClaimTypes.Email);

Logging out looks like this:

        public ActionResult SignOut()
        {
            var sessionAuthenticationModule = FederatedAuthentication.SessionAuthenticationModule;
            sessionAuthenticationModule.SignOut();
            return Redirect("~/");
        }
posted on Friday, March 07, 2014 8:59:14 PM (Romance Standard Time, UTC+01:00)  #    Comments [0]
# Thursday, February 13, 2014


Using a database as a queue is a common requirement. An example is sending emails from a website - it can be slow and error-prone, and you don't want to delay returning a page to the user. So the server processing just queues a request in the database, and a worker process picks it up and tries to execute it.

The real problem is that there may be more than one worker process, perhaps running on different servers. By treating the table as a queue, they can avoid deadlocks and processing the same record multiple times.

Comprehensive breakdown of queuing including heap queues, FIFO and LIFO.


Let's have a [Created] column, and an [IsProcessed] column. Alternatively we could just delete the rows when they are processed.

CREATE TABLE [dbo].[EmailRequests](
	[Id] [int] IDENTITY(1,1) NOT NULL,
	[EmailAddress] [nvarchar](250) NOT NULL,
	[Subject] [nvarchar](50) NOT NULL,
	[Body] [nvarchar](500) NOT NULL,
	[IsProcessed] [bit] NOT NULL,
	[Created] [datetime] NOT NULL
)

The INSERT is just a normal INSERT.

INSERT INTO [EmailRequests]
           ([EmailAddress], [Subject], [Body], [IsProcessed], [Created])
     VALUES
           ('test@email.com', 'Hello', 'Spam spam spam', 0, CURRENT_TIMESTAMP)

This is a FIFO queue, but the order isn't strict (see this explanation)

  • The lock hints mean lock the row (as normal), but skip any existing locks (so avoiding deadlocks)
  • The OUTPUT clause with the CTE makes it all a single atomic operation
  • The inserted identifier covers UPDATEs and INSERTs; for DELETEs, there is a deleted identifier.
with cte as (
 select top(1) [Id], [IsProcessed], [EmailAddress], [Subject], [Body]
 from [EmailRequests] with (ROWLOCK, READPAST)
 where [IsProcessed] = 0
 order by [Created]
)
update cte
	set [IsProcessed] = 1
	output inserted.[Id], inserted.[EmailAddress], inserted.[Subject], inserted.[Body]


To make this a bit more realistic, you could add an [IsEmailSent] column, updated when the emailing succeeds. Only one worker de-queued the record and has the [Id], so this is straightforward. Then you need a process for dealing with records that are [IsProcessed] but not [IsEmailSent] (dequeued, but the email failed). You might retry (in which case, add a [RetryCount] counter up to a maximum), or raise a manual alert (the email address is bogus, etc).


Remember, resetting [IsProcessed] to 0 or retrying infinitely may poison the queue!
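From the worker side, de-queuing is a single command and a read of the OUTPUT row. A hedged sketch (the connection string and the email-sending step are placeholders):

```csharp
using System;
using System.Data.SqlClient;

static class EmailQueueWorker
{
    // Dequeue and process at most one request; returns false when the queue is empty.
    public static bool ProcessOne(string connectionString)
    {
        const string dequeueSql = @"
with cte as (
 select top(1) [Id], [IsProcessed], [EmailAddress], [Subject], [Body]
 from [EmailRequests] with (ROWLOCK, READPAST)
 where [IsProcessed] = 0
 order by [Created]
)
update cte
 set [IsProcessed] = 1
 output inserted.[Id], inserted.[EmailAddress], inserted.[Subject], inserted.[Body];";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(dequeueSql, connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                if (!reader.Read()) return false; // queue is empty
                var id = reader.GetInt32(0);
                //SendEmail(reader.GetString(1), reader.GetString(2), reader.GetString(3));
                //on failure: bump a [RetryCount] or raise an alert for this [Id],
                //rather than resetting [IsProcessed] and poisoning the queue
                return true;
            }
        }
    }
}
```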

posted on Thursday, February 13, 2014 7:49:22 PM (Romance Standard Time, UTC+01:00)  #    Comments [0]
# Wednesday, February 12, 2014

Sometimes you want to save a record, but it may be an existing record (UPDATE) or a new one (INSERT). The pattern is sometimes called an “upsert” (update/insert).

You could try to do this the standard way you would via an ORM (SELECT to see if it exists; if it does, UPDATE, else INSERT). But if other users are updating at the same time, you will see concurrency errors or deadlocks.

First, let’s look at simple SQL that is vulnerable to concurrency errors, then two ways of doing it safely.

Simple (not concurrent-safe!!)

UPDATE then check @@ROWCOUNT and INSERT if necessary. Only use this when the same record will not be created by two sources.

DECLARE @CategoryName NVARCHAR(15) = 'Dairy';
DECLARE @Description NVARCHAR(MAX) = 'Milk, cheese and yoghurts';
DECLARE @Id int = null;

UPDATE [Categories]
    SET [Description] = @Description,
        @Id = [CategoryID]
    WHERE [CategoryName] = @CategoryName;

--if id is not set in UPDATE, then grab scope identity
IF @@ROWCOUNT = 0
BEGIN
    INSERT INTO [Categories] ([CategoryName], [Description])
        VALUES (@CategoryName, @Description);
    SET @Id = CAST(SCOPE_IDENTITY() AS int);
END

--select it out
SELECT @Id;

This example grabs the affected Id too (whether identity insert or update).


A more conventional IF NOT EXISTS... INSERT - ELSE - UPDATE. The lock hints protect for concurrency. The UPDLOCK and SERIALIZABLE hints are as suggested in this Sam Saffron blog post from 2007.

BEGIN TRAN;
IF NOT EXISTS
    (SELECT * FROM [Categories] WITH (UPDLOCK, SERIALIZABLE)
     WHERE [CategoryName] = @CategoryName )
    BEGIN
        INSERT INTO [Categories] ([CategoryName], [Description])
            VALUES (@CategoryName, @Description);
        SET @Id = CAST(SCOPE_IDENTITY() AS int);
    END
ELSE
    UPDATE [Categories]
        SET [Description] = @Description,
            @Id = [CategoryID]
        WHERE [CategoryName] = @CategoryName;
COMMIT TRAN;



Much the same as before, using the MERGE command. MERGE by itself is not concurrent-safe; you must still use lock hints. (MERGE doesn't allow variable assignment in its SET clause, so the affected id comes back via the OUTPUT clause instead.)

MERGE [Categories] WITH (HOLDLOCK) AS target
    --if/where part
    USING (SELECT @CategoryName, @Description) AS source
        ([CategoryName], [Description])
    ON (target.[CategoryName] = source.[CategoryName])
    --found, so update
    WHEN MATCHED THEN
        UPDATE SET [Description] = source.[Description]
    --not found, so insert
    WHEN NOT MATCHED THEN
        INSERT ([CategoryName], [Description])
        VALUES (source.[CategoryName], source.[Description])
    OUTPUT inserted.[CategoryID];


Other Databases


Oracle has SELECT FOR UPDATE cursors

posted on Wednesday, February 12, 2014 8:32:28 PM (Romance Standard Time, UTC+01:00)  #    Comments [0]
# Thursday, August 08, 2013

I hit this error when a json client was posting JSON to an ASP MVC controller.

Exception Details: System.InvalidOperationException: The JSON request was too large to be deserialized.

The posted JSON is large, but nowhere near the default maximum request size of 4MB.

There is a .Net security fix (http://support.microsoft.com/kb/2661403):

Microsoft security update MS11-100 limits the maximum number of form keys, files, and JSON members to 1000 in an HTTP request.

You can bypass this (at the risk of denial of service) with an appSetting in web.config:

  <appSettings>
    <add key="aspnet:MaxJsonDeserializerMembers" value="150000" />
  </appSettings>

posted on Thursday, August 08, 2013 1:24:22 PM (Romance Daylight Time, UTC+02:00)  #    Comments [0]