Custom Validation Attributes and Client-Side Validation For Idiots Like Me

Just join the what to what?

This is one of those posts that I’ve written more for the benefit of Future Me than anyone else. Future Me is a forgetful fellow, it seems, so this will hopefully save him some wretched googling while trying to recall that one crucial detail that makes this just work. If it helps you in a similar way then that’s nice.

First of all, this assumes that you already have Unobtrusive Javascript set up and working in your project. There are plenty of guides on that and I’m confident even Future Me can handle it. It also assumes you’re already familiar with creating custom validation attributes. What it does concern is writing a jQuery adapter to do the validation client-side. There are plenty of guides on this, but what I’m presenting here is the bare minimum required for simple validators.

Let’s write a tiny amount of code

The first thing to do is modify your existing validation attribute class. This is done in two places:

The class itself needs to implement IClientValidatable:

public class MyLovelyCustomValidator : ValidationAttribute, IClientValidatable

And the GetClientValidationRules method from that interface needs implementing:

public IEnumerable<ModelClientValidationRule> GetClientValidationRules(ModelMetadata metadata, ControllerContext context)
{
    var rule = new ModelClientValidationRule();
    rule.ErrorMessage = FormatErrorMessage(metadata.GetDisplayName());
    rule.ValidationType = "myadaptername";
    yield return rule;
}

The FormatErrorMessage call above comes from the ValidationAttribute base class your custom validator inherits. You may have overridden it, you may not. That’s unimportant here.

More important is the ValidationType being added. I’ve set it to myadaptername in this example because it will be used as the name of your jQuery adapter too.

Now we’ll write the adapter. Create a new .js file somewhere sensible in your project and add this:

$.validator.unobtrusive.adapters.addBool("myadaptername");
There are a number of methods available for adding adapters. There’s a really good summary of them (and a more in-depth examination of custom validation) at Brad Wilson’s excellent if now slightly ancient blog post.

I’ve chosen addBool(adapterName, [ruleName]) because this validator performs very simple validation based solely on the input value. Note that it has an optional ruleName parameter which we’re not using. The rule name could be one of the built-in jQuery validation rules (eg required) or it could be one of your own. If it’s omitted the adapterName is used instead. Since I’m writing my own validation method specifically for this adapter, I’ve omitted it for simplicity’s sake.

Next we add the validator to the same file:

$.validator.addMethod("myadaptername", function (value) {
    return value > 0; //your implementation here
});
Again this is very simple. We pass in the adapter name, and a function to perform the validation. The full signature for the function is function (value, element, params), but as we don’t need element (our validator is only testing the value itself) and we have no additional parameters, we can omit them.

Finally, do your validation within the function. Include the .js file in the page containing your form, and assuming validation was working fine before, it should now validate the field using your custom validator client-side too.

As an afterword, the reason this example is so simple is because I have a validator that tests whether a nullable decimal has a value greater than zero. However hopefully you, or Future Me, will find it a useful basis to build more complex validators from.
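Putting the server side of that afterword into code, a sketch of the attribute might look like this (the class name is invented for illustration; your own attribute will differ):

```csharp
// Hypothetical sketch: passes when a nullable decimal has a value greater than zero.
public class GreaterThanZeroAttribute : ValidationAttribute, IClientValidatable
{
    public override bool IsValid(object value)
    {
        // Null (no value supplied) fails, as does zero or below.
        var number = value as decimal?;
        return number.HasValue && number.Value > 0;
    }

    public IEnumerable<ModelClientValidationRule> GetClientValidationRules(
        ModelMetadata metadata, ControllerContext context)
    {
        var rule = new ModelClientValidationRule();
        rule.ErrorMessage = FormatErrorMessage(metadata.GetDisplayName());
        rule.ValidationType = "myadaptername";
        yield return rule;
    }
}
```

The client-side validator shown earlier mirrors the IsValid logic, which is what keeps the two in step.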

Microsoft Dynamics CRM Integration

Fun for all the family

Gripping headline, I know. My current contract has involved communicating with a third-party Dynamics CRM, and while there is plenty of advice out there on the internet, I didn’t find anything that completely worked. So this, for now at least, will fill that gap.

We’re going to need a couple of things before we start:

  • The service URL and credentials for the Dynamics instance.
  • The MS Dynamics SDK. I’m using the 2011 version; you may need a different one.

The SDK contains the CrmSvcUtil.exe command line tool, which will generate code for early-binding to the CRM objects. Getting the correct parameters for this was the hardest part of the integration, and essentially this is the meat of this blog post.

You’ll find CrmSvcUtil.exe in the bin folder in the SDK. Open a command prompt there with admin privileges, then construct your command using this form:

CrmSvcUtil.exe /codeCustomization:"Microsoft.Xrm.Client.CodeGeneration.CodeCustomization, Microsoft.Xrm.Client.CodeGeneration" /out:Xrm.cs /url:[DYNAMICS SERVICE URL] /username:[DYNAMICS USER NAME] /password:[DYNAMICS PASSWORD] /namespace:Xrm /serviceContextName:XrmServiceContext

The parameters you’ll need to set for your particular instance are:

  • [DYNAMICS SERVICE URL] – You can find this in your Dynamics management site, under Settings > Customization > Developer Resources. You want the SOAP Organization Service. This will be of the form https://[YOUR-CRM-NAME].[LOCATION].dynamics.com/XRMServices/2011/Organization.svc. The URL contains 2011 because I’m using the 2011 version. [YOUR-CRM-NAME] will have been set when your instance was created. [LOCATION] depends on the physical location of your instance and the identity provider being used. For example, mine is crm4.
  • [DYNAMICS USER NAME] – This will be the email address of the account you are using to access Dynamics.
  • [DYNAMICS PASSWORD] – The password for the same account, duh.

Note that I’ve set the out parameter to be Xrm.cs. This will dump all the generated code into that file in the SDK/Bin folder. If you’re going to regenerate the code often you may want to set the output path to somewhere in your project, but I’m trying to keep things simple in this example.

With everything in place, hit return and wait while the utility does its stuff. It will probably sit there looking like it’s doing nothing for a little while. Be patient, and eventually it will start spewing out logs like something from The Matrix. If you’ve got a large organization this could take some time.

At the end of the process you should have a massive pile of autogenerated code. In my case it ran to over 100,000 LOC which Visual Studio choked on when I tried to open it. So if you need to look at it, use something like Notepad++ instead.

If it isn’t already there, copy the generated file to somewhere suitable in the project. We will reference it in the next step.

If you’re still awake

The XRM SDK contains the very useful CrmConnection. This provides a way of getting authentication details from a connection string and reusing this connection throughout the application. This may sound counter-intuitive. Surely it would be better to have multiple connections? The problem the single shared connection solves is that authenticating the connection takes a while. Rather than go through the process for every transaction, the shared connection means subsequent transactions are pre-authenticated. The CrmConnection is thread-safe, and performance issues with multiple transactions are avoided by creating a new XrmServiceContext with the connection for each transaction.

Let’s get started by adding the Microsoft Dynamics CRM 2015 SDK client and portal assemblies via NuGet. You may have noticed that’s the 2015 version. I couldn’t find an exact equivalent for 2011, and my best guess didn’t work. So this may be the part where this blog post falls apart and causes great anguish, but so far I’ve had no issues using the 2015 one.

Now we have the XRM objects we need, let’s add a connection string. Add it to web.config, in the usual place where connection strings live:

<connectionStrings>
  <add name="CrmConnection" connectionString="Url=[DYNAMICS SERVICE URL]; Username=[DYNAMICS USER NAME]; Password=[DYNAMICS PASSWORD]" />
</connectionStrings>
The tokenised parts are exactly the same as described in the CrmSvcUtil parameters above.

To use the connection across your application you’ll most likely want to set it up using your dependency injection weapon of choice. I’m using StructureMap:

container.For<CrmConnection>().Singleton().Use(ctx => new CrmConnection("CrmConnection"));

Note that I’m creating it as a singleton because we want to share it across the application. It’s thread-safe, remember.

Now we’re ready to use it. Here’s an extremely basic example, which retrieves a contact from the ContactSet:

public class XrmExample
{
    private readonly CrmConnection _crmConnection;

    public XrmExample(CrmConnection crmConnection)
    {
        _crmConnection = crmConnection;
    }

    public Contact GetContact(string username)
    {
        using (var xrm = new XrmServiceContext(_crmConnection))
        {
            var contact = xrm.ContactSet.FirstOrDefault(x => x.EMailAddress1 == username.ToLower());
            if (contact != null)
            {
                return contact;
            }
            return null;
        }
    }
}

EMailAddress1 probably doesn’t exist in your instance, but you get the idea. The main points are that we’re getting the CrmConnection singleton and using it to create a new XrmServiceContext for our transaction. Repeat this pattern throughout your application and all will be well.

Creating GUIDs using Resharper

I once embarrassed myself by recommending a GUID generator site to a colleague. “That’s great,” he said, “but I just use the one built into Visual Studio.”


The Visual Studio tool was a bit clunky, but it became part of my regular toolkit. Until I started using the 2013 Community edition, that is. It would seem to have disappeared from there, or at least isn’t on the usual Tools menu. A quick google should find where it’s moved to, I thought, but instead I discovered something even better. You probably know about it already and I’m embarrassing myself all over again, but in case you don’t and on the remote chance that I’m not:

You can create a new GUID using Resharper by typing nguid and pressing tab.

That’s it. So much simpler than the old Visual Studio tool.

Quick Summary of Dojo Mixins in EPiServer

Scratching behind Dojo’s ears

There are some excellent blog posts about EPiServer’s implementation of Dojo Dijits, most of which have been collected on David Knipe’s equally excellent blog.

I’ve been scratching around Dojo a bit myself lately and found the above examples invaluable. I also made my own notes regarding the use of mixins as it wasn’t clear at first where everything was coming from. Essentially all I’ve done is describe the most common mixins used in an EPiServer dijit widget. Hopefully someone will find these in some way useful.

dojo/_base/declare
The essential mixin. Every widget you build will use this unless you’re up to something really quite unusually scary. It provides the base dojo functionality which further mixins and your own code can build on.

dijit/_WidgetBase
In most circumstances you will also use this mixin. This provides the more commonly used base widget functionality, and crucially also provides the lifecycle events you will probably want to subscribe to at some point. These are detailed in the linked reference guide, but of special note is postCreate. This is called after the widget has been rendered, making it a very useful time to introduce your own code.

dijit/_TemplatedMixin
You will probably want to use this too as it saves you from the complexity of implementing Dojo’s buildRendering method yourself. Markup for your widget can be supplied inline or via a file. For the latter option you will need to reference the dojo/text! plugin.

dojo/html
Not essential, but useful. While simple use-cases can be adequately catered for using innerHtml directly, this mixin abstracts some of the fiddlier stuff away to provide easier cross-browser compatibility.

dojo/parser
Used alongside dojo/html, this can be used to convert DOM nodes directly into dijits or widgets. Note that this declarative use isn’t recommended, mostly due to potential performance issues. Generally it’s best to leave it for Dojo to use in the background (it’s a requirement of _TemplatedMixin so the odds are you’ll be using it somewhere even if you don’t realise it.)

epi-cms/_ContentContextMixin
The first non-standard Dojo mixin you’re likely to use with EPiServer. This unsurprisingly provides access to some basic content information, largely encompassing what we’d expect to get from a ContentData object in the back-end code. It also provides some events which are useful for hooking into when the current context changes.

Scratch deeper

EPiServer provides a good selection of mixins to use in your widgets. It’s always worth checking through them before embarking on rolling your own functionality as it may be that the heavy lifting has already been done for you.

SSL Termination Using Application Request Routing and URL Rewrite


It’s common practice to proxy to a group of webservers in a load-balanced production environment. And in such a setup, it’s also common to terminate SSL at the load-balancer. This has two advantages: firstly, it means only one SSL certificate installation is required; secondly, only the load-balancer needs to worry its pretty little head with decrypting incoming requests.

However this can have some disadvantages, chief of which is that any code that relies on the HTTPS server variable, such as checks for IsSecureConnection, will no longer work, as it will never be set for requests proxied to the actual web server node. This is because the web server nodes only ever see HTTP traffic, as SSL is terminated at the load-balancer. I mean, duh. This is commonly worked around by setting a request header (eg SSL) at the load-balancer for any HTTPS traffic received there. The application’s code then checks for the presence of this header instead of the HTTPS server variable.

So What’s the Problem?

The problem is that load-balancing is often only done on production and some staging environments, depending on budget. Developer environments for example typically have a self-signed certificate bound directly to their local site. In this arrangement IsSecureConnection will function as expected. This is great, but it does leave a worrying gap between development and production environments as we now have to check one thing (IsSecureConnection) in some environments and another (our custom header, eg SSL) on production. This in itself wouldn’t be so bad if there was a way of replicating the SSL termination behaviour locally.
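One way to paper over that gap in application code is to hide both checks behind a single helper, so callers never care which environment they are in. A minimal sketch, assuming the custom header is called SSL as in the example above:

```csharp
// Hypothetical helper: a request counts as secure if either the connection
// itself is secure (local dev with a direct HTTPS binding) or the
// load-balancer has stamped the request with the custom SSL header.
public static class RequestExtensions
{
    public static bool IsSecure(this HttpRequestBase request)
    {
        return request.IsSecureConnection
            || request.Headers["SSL"] != null;
    }
}
```

This still leaves the environments behaving differently under the hood, which is exactly what the local SSL termination setup below addresses.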

ARR Jim Lad

Application Request Routing to the rescue! Using this IIS7 feature and a couple of Url Rewrite rules we can create our own proxy with SSL termination and custom headers. Perhaps unsurprisingly, you’ll need Application Request Routing (ARR) and Url Rewrite installed to do this, so fire up Web Platform Installer and sort that out if you haven’t already.

ARR relies on the IIS default website to proxy requests through. This is whichever site has a wildcard host mapping. If you’ve deleted it for some reason, just create another site with a wildcard host mapping. You’ll also want a wildcard host mapping for HTTPS traffic on this site, as it will be the proxy for your actual application. Use the certificate from your application for this.

Your application presumably already has some bindings. Most likely one for HTTP and one for HTTPS, bound to your application’s host name. Delete the HTTPS binding. It is no longer required as all your application’s requests will be over HTTP once SSL termination is in place. You also need to change the HTTP host binding to something else. The actual name doesn’t matter, but for example if the host name was previously mywebsite, change it to something like mywebsiteterminated. Remember to add the new host name to your hosts file!

Configuring ARR

This part is quite easy. Open IIS Manager and click on the server root (typically the computer name). Under IIS, you should see the Application Request Routing icon:

ARR icon

Double click it. You will see what is most likely an empty table of Application Request Routing Cache information. Ignore it, and look at the Actions column on the right-hand side. Under Proxy is Server Proxy Settings. Click this to open them.


The important details here are:

  • Enable proxy. Unsurprisingly (I hope!) this should be checked.
  • Leave Use URL Rewrite to inspect incoming requests unchecked.
  • Enable SSL offloading should be checked.

The final step is to add URL Rewrite rules to the proxy site. Since the proxy site exists solely to handle SSL offloading, its web.config will contain only the rewrite rules:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="Set header for HTTPS traffic" stopProcessing="true">
          <match url="(.*)" />
          <conditions>
            <add input="{HTTPS}" pattern="on" />
          </conditions>
          <serverVariables>
            <set name="HTTP_SSL" value="true" />
          </serverVariables>
          <action type="Rewrite" url="http://mywebsiteterminated/{R:1}" />
        </rule>
        <rule name="HTTP traffic redirect" stopProcessing="true">
          <match url="(.*)" />
          <conditions>
            <add input="{HTTPS}" pattern="off" ignoreCase="true" />
          </conditions>
          <action type="Rewrite" url="http://mywebsiteterminated/{R:1}" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>

As is hopefully clear from their names, these rules set a header server variable and redirect to the actual website over HTTP. One point to note is that the server variable is set to HTTP_SSL. There is a convention whereby header server variables are prefixed with HTTP_. Including this in our rule ensures it can be accessed via the ServerVariables collection on the site proper.


You should now be able to replicate SSL offloading in your local environment. The key steps to this process are:

  • Create a proxy website in IIS
  • Configure ARR to use the proxy website
  • Add rewrite rules to proxy site to add a custom header and redirect to the real website over HTTP

Yet Another SOLID Principles Piece


Barely a day goes by without someone mentioning SOLID principles these days.

“Hello colleagues,” I say, “I am going to the coffee machine to get some coffee. Would you also like some coffee?”
“That depends,” reply my colleagues, “on whether the coffee will be made according to SOLID principles.”
I consider this for a moment before replying. “Don’t be ridiculous. It’s a cup of coffee not a software engineering project.”

And there the matter ends.


As far as acronyms go, SOLID is towards the SPLINK end of the convolution spectrum. In this article I’m going to go through each of the five principles in a way that will hopefully make them more memorable.

S is for Single Responsibility

This one’s pretty straightforward, or at least initially appears to be so. A class should have a single responsibility, or in layman’s terms, should have only one job to do. But wait! Slavishly following the principle as described would lead to lots of classes with single methods inside, which will make your code somewhat obscure. There’s a qualifier to the Single Responsibility Principle, and it is that a class should have only a single reason to change. This is perhaps a little harder to understand, so let’s have an example.


Imagine we have a class called Squirrel. This contains two methods:

  • CountNuts() – Returns the number of nuts the squirrel currently has.
  • ClimbTree(Nuts nuts) – Checks the number of nuts with a call to CountNuts() and only climbs the tree if it is over a given threshold. Climbing trees requires squirrel fuel.

This violates the Single Responsibility Principle because there are two reasons for the Squirrel class to change:

  • The way that nuts are counted might change, eg different nuts may be given different weightings
  • The nut threshold to climb a tree might change

To fix this, as a bare minimum the ClimbTree(Nuts nuts) method should be moved to its own class, and given an additional Squirrel parameter so that the Squirrel object can be passed into it: ClimbTree(Squirrel squirrel, Nuts nuts).
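A sketch of that refactoring (all names invented for illustration):

```csharp
// Before: Squirrel both counts nuts and decides about climbing.
// After: climbing moves to its own class with its own reason to change.
public class Nut { }

public class Squirrel
{
    private readonly List<Nut> _nuts = new List<Nut>();

    public int CountNuts()
    {
        return _nuts.Count; // nut-weighting changes affect only this class
    }
}

public class TreeClimber
{
    private const int NutThreshold = 5; // threshold changes affect only this class

    public bool ClimbTree(Squirrel squirrel, List<Nut> nuts)
    {
        return squirrel.CountNuts() > NutThreshold;
    }
}
```

Each class now has exactly one reason to change, which is the whole point.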

O is for Open / Closed

Which is it, open or closed? Are we talking about a programming principle or a CD drawer here? This is definitely one of the more obscurely named parts of the acronym. It may as well be called Owls.


Owls, being very wise birds, are open to the idea of being extended but closed to modification. Ask a barn owl to change into a tawny owl it will quite rightly tell you to sling your hoo-k. However ask it nicely to hoot like a tawny owl and it will happily oblige.

Let’s say we have an Owl class containing a single method: Hoot(OwlType owlType)

This method takes the type of owl (barn or tawny) and returns the appropriate hoot. Logic within the Hoot method constructs the hoot depending on which type of owl is supplied.

There is a problem with this. If we want our owl to be able to hoot like a snowy owl, we will have to modify the Owl class. This breaks the Open/Closed principle because our class should be closed to modification. Instead, it should be open to being extended. In this case we should do this by changing our Hoot method to accept an interface instead of an owl type: Hoot(IOwl owl).

The interface IOwl has a method called Hoot, and it is this which is called by the Hoot(IOwl owl) method. We then have concrete implementations of IOwl for barn and tawny owls, each of which has its own implementation of Hoot. With this structure in place, adding the ability to hoot like a snowy owl is simply a matter of creating a new SnowyOwl class which implements IOwl.
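In code, the open/closed version might look something like this (a sketch; the hoots are, naturally, assumptions):

```csharp
// Each owl supplies its own hoot; supporting a snowy owl means adding
// a class, not modifying existing code.
public interface IOwl
{
    string Hoot();
}

public class BarnOwl : IOwl
{
    public string Hoot() { return "screech"; }
}

public class TawnyOwl : IOwl
{
    public string Hoot() { return "twit-twoo"; }
}

public class Owl
{
    public string Hoot(IOwl owl)
    {
        return owl.Hoot(); // no owl-type logic in here any more
    }
}
```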

L is for Liskov Substitution

Professor Barbara Liskov is one of the first women in the US to receive a doctorate in computer science. She has been doing this shit since the ’60s and has probably forgotten more than I will ever know. Take a look at her website. Gosh, it looks like the 1990s doesn’t it? Don’t laugh, she has more important things to do than worry about that grey background. Look closer. Look at her CV. It’s 32 pages long but 30 of those pages are publications and academic contributions to computer science. It’s fair to say that she’s pretty awesome.

That’s all great, but L for Liskov doesn’t tell us much about what Liskov Substitution actually is. The paper Prof. Liskov wrote with co-author Jeannette Wing summarises it as:

Let Φ(x) be a property provable about objects x of type T. Then Φ(y) should be true for objects y of type S where S is a subtype of T.

It’s not exactly plain English but this is from an academic paper. Give your head a good scratch and you can make sense of it. In the world of C# Liskov Substitution can be described more plainly like this:

A class derived from a base class, interface or similar structure must be interchangeable with its base class or interface, etc.


Let’s say you have some Llamas. Everyone knows that Llamas love Libraries. They are nature’s most avid readers, with a strong penchant for Literature. Unfortunately, being quite Large creatures, there are only so many llamas that can fit inside a library at once. We can call our collection of Llamas from the Library class and get it to return its count to control the number of llamas per library.

But wait – some alpacas want to join the library. They are nature’s second most avid reader and in many other ways are similar to llamas. In fact the only way in which they differ that matters to us is that they’re about half the size of a llama. This means that more alpacas can fit into a single library.

It’s tempting to simply create a new Alpaca class derived from the existing Llama class, and give it a Size property. This is set to 0.5. However doing so would break the Liskov Substitution Principle. Although our Library class can access the Alpaca’s Size, it cannot do the same for a Llama because the Llama class doesn’t have a Size property. The derived class (Alpaca) cannot be used interchangeably with its base class (Llama).

One way to fix this is to add Size to the Llama class and override it in Alpaca. However this may not be the ideal solution, particularly if the Library one day decides to admit Sheep, nature’s third most avid readers. They could have all sorts of things which are different to Llamas. In that case it makes more sense to make a new base class or interface which includes the Size property, and derive Llama, Alpaca and Sheep from that.
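The new-base-class fix, sketched in code (names and numbers assumed):

```csharp
// The base type carries everything the Library relies on, so any reader
// can be substituted wherever a LibraryReader is expected.
public abstract class LibraryReader
{
    public abstract double Size { get; }
}

public class Llama : LibraryReader
{
    public override double Size { get { return 1.0; } }
}

public class Alpaca : LibraryReader
{
    public override double Size { get { return 0.5; } } // half a llama
}

public class Library
{
    private const double Capacity = 20.0; // measured in llama-units

    public bool CanAdmit(IEnumerable<LibraryReader> current, LibraryReader candidate)
    {
        return current.Sum(r => r.Size) + candidate.Size <= Capacity;
    }
}
```

Sheep, when they arrive, just derive from LibraryReader too.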

I is for Interface Segregation

This one is more straightforward and at its heart is the idea that interfaces should contain the minimum required members. That way anything using that interface doesn’t need to concern itself with members it doesn’t use.


Let’s start with a class called Insect. This contains many members such as NumberOfWings and StingStrength. If we want to Sting() another object using our Insect class, we could just pass the whole Insect in as a parameter, like this: Sting(Insect insect). However, doing so exposes all the other members of the Insect class to the Sting method, when all we need to know about is StingStrength. We can reduce this exposure by creating an IStingable interface with a single member, StingStrength, and implementing it in the Insect class. We can then pass this into Sting like so: Sting(IStingable insect).

Similarly, if we have a Fly(Insect insect) method, we can create an IFlyable interface which contains the single member NumberOfWings, and use this like so: Fly(IFlyable insect)

Now we have two very tight interfaces for our Insect class which segregate its behaviour so that client code is only concerned with the parts it needs access to.
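As code, the segregated interfaces look something like this (a sketch; member types are assumptions):

```csharp
// Clients depend only on the members they actually use.
public interface IStingable
{
    int StingStrength { get; }
}

public interface IFlyable
{
    int NumberOfWings { get; }
}

public class Insect : IStingable, IFlyable
{
    public int StingStrength { get; set; }
    public int NumberOfWings { get; set; }
}

public class InsectActions
{
    public void Sting(IStingable insect) { /* sees only StingStrength */ }
    public void Fly(IFlyable insect) { /* sees only NumberOfWings */ }
}
```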

D is for Dependency Inversion

A common mistake is to construct high-level classes using concrete classes from further down the class hierarchy. What does this mean? Well, let’s look at everyone’s favourite seaborne scamp, the dolphin.


We have a higher-level class called Tricks which at the moment contains a single method, DoTrick(Dolphin dolphin). This will work, but becomes problematic if we want another animal to do a trick. As written, the high-level class is dependent on a low-level class. We need to redesign Tricks so that both it and the low-level class instead depend on an interface, ITrickable.

Now the DoTrick method is defined as DoTrick(ITrickable trick) and the Dolphin class implements ITrickable. Instead of the high-level class depending on a low-level class, both classes now depend on an interface. This is dependency inversion.
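Sketched out (method names assumed):

```csharp
// High-level Tricks no longer knows about Dolphin; both depend on ITrickable.
public interface ITrickable
{
    void PerformTrick();
}

public class Dolphin : ITrickable
{
    public void PerformTrick() { /* backflip through a hoop */ }
}

public class Tricks
{
    public void DoTrick(ITrickable performer)
    {
        performer.PerformTrick(); // works for dolphins, dogs, whatever comes next
    }
}
```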


In conclusion then,

  • Squirrels
  • Owls
  • Llamas
  • Insects
  • Dolphins

I hope you’ve enjoyed this somewhat frivolous look at SOLID Principles. For a more in-depth look at the subject, I recommend these sites:

Bamboo Version Labelling

We’ve recently switched from CruiseControl.NET to Atlassian Bamboo for our CI. This was partly borne out of frustration with CC.NET’s XML-based configuration, but we also use Jira, Confluence and BitBucket, so the integration between these products and Bamboo had some appeal.

Naturally the conversion hasn’t been completely smooth. One thing that initially wasn’t particularly obvious was how to auto-increment a .NET project’s version number when building a deployment package, and how to then label the build in Bamboo with that version number.

Version Increment

The easiest way of adding build tasks to a Bamboo stage is to, well, add an MSBuild task. The clue’s in the name. There are plenty of pre-made build tasks out there. For incrementing the version number we’re going to use the MSBuild.ExtensionPack. This contains a wealth of build targets, including one called VersionNumber.

If all we want is to increment the version number, we just import the VersionNumber targets and set some attributes:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" DefaultTargets="SetAssemblyInfo" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Import Project="..\packages\MSBuild.Extension.Pack.1.5.0\tools\net40\MSBuild.ExtensionPack.VersionNumber.targets"/>
  <Target Name="SetAssemblyInfo">
    <MSBuild.ExtensionPack.Framework.AssemblyInfo AssemblyInfoFiles="..\..\MyProject.Interface.Web\Properties\AssemblyInfo.cs"
                                                  AssemblyFileBuildNumberType="DateString"
                                                  AssemblyFileBuildNumberFormat="ddMM"
                                                  AssemblyFileRevisionType="AutoIncrement"
                                                  AssemblyFileRevisionFormat="00" />
  </Target>
</Project>

The paths above assume I have placed the saved the above as [ROOT]\SolutionFiles\BuildTasks\IncrementAssemblyVersion.csproj, where [ROOT] is the root of the solution. The structure is unimportant. What matters is that the paths to MSBuild.ExtensionPack.VersionNumber.targets and your project’s AssemblyInfo.cs are correct.

The attributes I’ve set will produce version numbers in the following format:

[Major].[Minor].[Date].[Number]
Where Date is the combination of zero-padded day of the month and month, eg 0203 if built on the 2nd of March. There is provision in VersionNumber.targets to format the version number using the year as well. I wouldn’t advise this as it will (dependent on the actual date) overflow the int which backs the version number.

The Number is a simple integer increment for each daily build, eg the first build for that day will be 0, the next 1, and so on.

I like this as it gives some useful information about the build. It should be noted that for this to work you will need to set the masking in AssemblyInfo.cs appropriately. In this case, use “”

To use this file in Bamboo, add a new MSBuild task to the build stage, and supply it with the path to the project file, eg


Labelling in Bamboo

It would also be nice to be able to see the version number for a build from the Bamboo build list. This can be achieved using labels.

Now, before I proceed I have to hold up my hand and say that this feels like a bit of a kludge. However it was the only way I could see of achieving what I wanted, so I present it here in the hope that someone knows a better way and can tell me what it is.

In Bamboo, labels can be created by parsing the build log with regex. Yes, I know. I KNOW.

What I’ve done is add a build target which will write the current version number into the log file, so that it can be parsed as a label. I’ve reused the project file we created for the version increment for this, as it makes sense to write to the log immediately after setting it. The project file now contains:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" DefaultTargets="SetAssemblyInfo;RetrieveIdentities" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Import Project="..\packages\MSBuild.Extension.Pack.1.5.0\tools\net40\MSBuild.ExtensionPack.VersionNumber.targets"/>
  <Target Name="SetAssemblyInfo">
    <MSBuild.ExtensionPack.Framework.AssemblyInfo AssemblyInfoFiles="..\..\MyProject.Interface.Web\Properties\AssemblyInfo.cs"
                                                  AssemblyFileBuildNumberType="DateString"
                                                  AssemblyFileBuildNumberFormat="ddMM"
                                                  AssemblyFileRevisionType="AutoIncrement"
                                                  AssemblyFileRevisionFormat="00" />
  </Target>

  <Target Name="RetrieveIdentities">
    <GetAssemblyIdentity AssemblyFiles="..\..\MyProject.Interface.Web\bin\MyProject.Interface.Web.dll">
      <Output TaskParameter="Assemblies" ItemName="AssemblyInfo"/>
    </GetAssemblyIdentity>
    <Message Text="MyProject.Interface.Web.dll has been set to Version_%(AssemblyInfo.Version)" />
  </Target>
</Project>
It’s pretty straightforward. The RetrieveIdentities target is called after SetAssemblyInfo. It uses the GetAssemblyIdentity task to get the current version number from the specified .dll. It then writes a message to the log containing the version number.

To make use of this message, go to the Miscellaneous tab for your job in Bamboo. At the bottom is a section titled Pattern Match Labelling. This is where we enter a regex pattern to retrieve our version number from the logs. The following pattern will do this:


Finally we just need to tell Bamboo to use the first match as the build label. Do so by entering \1 in the Labels box.

Using local blocks to refine EPiServer’s on-page editing experience

The on-page editing interface introduced with CMS 7 is, on the whole, quite nice.* It allows fast, intuitive editing of properties in most situations. However on busier pages it can all get a bit out of hand. The editor bounding boxes start to overlap and the screen begins to resemble that old Amiga demo that draws rectangles on the screen at (as then) mind-blowing speeds. There are also some properties which don’t lend themselves to on-page editing in isolation. Links, for example, typically use two properties in tandem for the text and the URL. Fortunately there’s a way to tidy things up, and that is to use local blocks.

Local blocks allow you to group properties together so that they can be edited together. As an example, let’s say we have a promotional box on our home page. It shows a title and a link over a background image and has the following properties:

  • Title
  • Image
  • Link URL
  • Link Text

If these are defined in the home page model, then on-page editing becomes tricky. If it’s enabled for the image, its bounding box will cover the title and link, making them uneditable. And even without that, there isn’t a direct way of editing the Link URL. We could just let the editors use the all properties view, but that’s not very friendly.

Instead, we can create a local block, imaginatively called PromoBlock, add the properties there, then add the local block to the home page. Now we can do this in the home page’s view:

<div class="promo" @Html.EditAttributes(x => x.CurrentPage.Promo)>
<div style="background: url(@Model.Promo.Image)">
… other promo markup …

Setting EditAttributes with the PromoBlock property will result in an edit box being drawn around the whole promo. Clicking on the promo will pop out the right-hand edit panel with all four of our PromoBlock properties there, ready to edit.
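For reference, the block and page definitions behind this might look something like the following sketch (class and property names are assumptions based on the example above):

```csharp
// A local block grouping the promo properties together.
// AvailableInEditMode = false stops it appearing as a standalone, creatable block.
[ContentType(AvailableInEditMode = false)]
public class PromoBlock : BlockData
{
    public virtual string Title { get; set; }
    public virtual ContentReference Image { get; set; }
    public virtual Url LinkUrl { get; set; }
    public virtual string LinkText { get; set; }
}

[ContentType]
public class HomePage : PageData
{
    // The local block is added to the page like any other property.
    public virtual PromoBlock Promo { get; set; }
}
```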

That isn’t all we can do though. We can also create a display template for the PromoBlock. To do this, create a new view in your project’s DisplayTemplates folder. This will typically be found at Views\Shared\DisplayTemplates. Give the view a model of type PromoBlock, and move all of our PromoBlock markup over from the home page’s view, like so:

@model PromoBlock

<div class="promo">
<div style="background: url(@Model.Image)">
… other promo markup …

Now we can reduce the promo block code in the home page to this:

@Html.PropertyFor(x => x.CurrentPage.Promo)

Which is rather neat. Another advantage of this is that should the promo block be used elsewhere, there’s no need to duplicate the markup. Just add a local PromoBlock property to the page in question and use PropertyFor with it.

*Let’s gloss over the pain and anguish of Dojo for now.

Conditionally hiding properties from editors in CMS 7

Completely hiding properties from editors is simple enough – just add the [ScaffoldColumn(false)] attribute to the property. However there are times when I want to show the property in some situations, and hide it in others. A typical scenario is sharing a local block between several page types. For example, let’s say we have a local block called PromoBlock. PromoBlock has two properties:

  • Title
  • Image

It is used on two page types, LandingPage and ContentPage.

However it has become vital that the PromoBlock on LandingPage has an optional video. We could add the property VideoUrl to PromoBlock, but then it would be presented to editors on the ContentPage, and we don’t want a video there.
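In code terms, the change under discussion is just one more property on the block (the property types here are assumptions):

```csharp
public class PromoBlock : BlockData
{
    public virtual string Title { get; set; }
    public virtual ContentReference Image { get; set; }

    // Wanted on LandingPage, but unwanted noise on ContentPage.
    public virtual Url VideoUrl { get; set; }
}
```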

One option is to simply make another block for this purpose. In most situations this is the route I’d advise, but it isn’t always the most appropriate. My example is deliberately simplified, and in reality PromoBlock could be used on many pages and have complex behaviour. It could also already be in production and used on many pages, making its replacement a tedious editing task.

Another option is to programmatically hide the VideoUrl property on the ContentPage. In CMS 6 this was easily achieved using the EPiServer.UI.Edit.EditPanel.LoadedPage event. This is no longer available in CMS 7, so we need a different approach. That approach is to use an EditorDescriptor:

[EditorDescriptorRegistration(TargetType = typeof(Url))]
public class HidePromoVideoUrl : EditorDescriptor
{
    public override void ModifyMetadata(
        ExtendedMetadata metadata,
        IEnumerable<Attribute> attributes)
    {
        base.ModifyMetadata(metadata, attributes);

        if (metadata.PropertyName == "VideoUrl" && metadata.Parent.ContainerType == typeof(ContentPage))
        {
            metadata.ShowForEdit = false;
        }
    }
}

The main points of interest here are:

  • The EditorDescriptorRegistration attribute needs its TargetType set to the type of the property we are modifying. In this case it is a Url.
  • metadata.Parent.ContainerType gives the type of the page containing the local block, which in turn contains the property we are modifying. metadata.ContainerType would give the type of the block itself, in this case PromoBlock.
  • Once we’ve determined we’re in the right place, hiding the property is a simple matter of setting metadata.ShowForEdit to false.

Restricting blocks in content areas

EPiServer 7.5 introduces the AllowedTypes attribute. This accepts an array of types, effectively making a whitelist of blocks that can be added to a ContentArea. An editor attempting to drag and drop a block not included in the type array will see the block turn grey and will not be able to place it. Here’s an example of its use:

[Display(
    Name = "My Content Area",
    GroupName = SystemTabNames.Content,
    Order = 100)]
[AllowedTypes(new[] { typeof(AllowedBlock), typeof(AlsoAllowedBlock) })]
public virtual ContentArea MyContentArea { get; set; }

However there is a problem with this attribute. It only restricts block placement when dragging and dropping existing blocks in a content area. An editor can still create a new block directly on the content area, which is frankly a bit of a headache, as we can’t rely on the attribute to enforce block placement rules.

A crude workaround

I’ve worked around this issue by creating a custom validation attribute for content areas. This will prevent content from being saved if a content area contains a disallowed block. Here’s the code:

[AttributeUsage(AttributeTargets.Property, AllowMultiple = false)]
public class AllowedBlocksAttribute : ValidationAttribute
{
    private readonly Type[] _allowedBlocks;

    private List<string> AllowedBlockTypeFullNames
    {
        get { return _allowedBlocks.Select(a => a.FullName).ToList(); }
    }

    public AllowedBlocksAttribute(Type[] allowedBlocks)
    {
        _allowedBlocks = allowedBlocks;
        ErrorMessage = "This content area can only accept the following block types: {0}";
    }

    public override string FormatErrorMessage(string name)
    {
        return string.Format(CultureInfo.CurrentCulture, ErrorMessage, FormattedAllowedBlockTypes);
    }

    public override bool IsValid(object value)
    {
        var contentArea = value as ContentArea;
        if (contentArea == null)
            return true;

        foreach (var item in contentArea.Items)
        {
            if (!AllowedBlockTypeFullNames.Contains(item.GetContent().GetOriginalType().FullName))
                return false;
        }

        return true;
    }

    private string FormattedAllowedBlockTypes
    {
        get { return string.Join(", ", _allowedBlocks.Select(s => s.ToString().Split('.').Last().ToCamelCase())); }
    }
}

This functions like any other validation attribute. If validation fails while saving content, a notification is displayed in the notification area at the top right of the page. It complements the existing AllowedTypes attribute, so ideally it should be used wherever that attribute is placed, e.g.:

[Display(
    Name = "My Content Area",
    GroupName = SystemTabNames.Content,
    Order = 100)]
[AllowedTypes(new[] { typeof(AllowedBlock), typeof(AlsoAllowedBlock) })]
[AllowedBlocks(new[] { typeof(AllowedBlock), typeof(AlsoAllowedBlock) })]
public virtual ContentArea MyContentArea { get; set; }

Suggested improvements

There’s some scope for improvement here. Firstly, having to add two attributes with the same array of allowed types is somewhat clunky. Any suggestions as to how I could combine this with the existing attribute are welcome.

Secondly, although it stops content being saved with unwanted blocks, it doesn’t prevent an editor from creating said blocks. So although it acts as a final gatekeeper, it can lead to a frustrating experience for editors. It would be better if there were a way of preventing the editor from creating the block in the first place. Ideally the disallowed blocks would not appear in the list of available blocks when creating one directly on the content area, but I haven’t figured out a way of doing that yet.

Finally, you may have noticed that I’m crudely constructing a block name for display in the error message from its type name. This is because I couldn’t work out how to get the block name from its type definition. Surely there’s a way of doing that, so I’d be grateful if someone could point me in the right direction. It would also mean I could get rid of this wee beastie:

        /// <summary>
        /// Splits a string on the humps in a camel case word, e.g. camelCaseWord => camel Case Word
        /// </summary>
        /// <param name="input">The camel case string to split</param>
        /// <returns>The string, with a space between every incidence of a lower case letter and an upper case letter</returns>
        public static string ToCamelCase(this string input)
        {
            return System.Text.RegularExpressions.Regex.Replace(input, "(?<=[a-z])([A-Z])", " $1", System.Text.RegularExpressions.RegexOptions.Compiled).Trim();
        }
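For example, used on a type name as FormattedAllowedBlockTypes does above:

```csharp
// Turn a block type name into something readable for the error message.
var display = "AlsoAllowedBlock".ToCamelCase(); // "Also Allowed Block"
```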