Serialisely painful

When reading data from a .NET Core 3.0 API is much harder than it should be…

Towards the end of last year I fired up a new ASP.NET Core 3.0 project for a client of mine as my first foray into building a real full .NET Core business application from end to end rather than simply producing prototypes and making changes to v2.2 projects.

This particular piece of work consists of a .NET Core MVC Website and Web API, and so as part of this it will need to serialise/deserialise data to and from the API side of things.

Whereas traditionally with the .NET Framework and older versions of .NET Core we’d reach for our favourite 3rd party serialisation library, with version 3 we no longer have to, as more and more functionality is being baked into the product. With the introduction of ASP.NET Core 3.0 the default JSON serialiser has been changed from Newtonsoft.Json to the native System.Text.Json serialiser. It therefore made sense for me to employ that instead of installing JSON.NET, especially as the native API reportedly outperforms Newtonsoft.Json in most circumstances.

This sounded like a great idea and I set about building out the comms code between the API and the client. However I immediately encountered a problem with a simple call to one of the API endpoints from my Web site:

The data was being sent back from the API but my model class was empty when deserialised, and since my web page was expecting the model to be populated, the site was erroring. This was very strange since my Models were identical on both the API and MVC Web site ends.

The MVC web site consistently failed to deserialise the API response using a model that was identical on both the API side and the Web app side.

The problem code

Let’s demonstrate the issue I had by looking at some code that exhibits the problem:

The Model POCO class (identical in both API and MVC projects):

public class Client
{
	public Guid ClientId { get; set; }
	public string ClientName { get; set; }
	public string ClientAddress { get; set; }
}

API code:

public async Task<ActionResult> GetClient(Guid clientId)
{ 
	var client = await _repository.GetClient(clientId);
	return Ok(client);
}

MVC code:

// Call API to get Client info
var response = await _httpClient.GetAsync($"Clients/{id}");
using var responseStream = await response.Content.ReadAsStreamAsync();
var client = await JsonSerializer.DeserializeAsync<Client>(responseStream);

In the example above the model fails to deserialise correctly and all properties are simply left at their default values (null or empty).

This was puzzling until I took a closer look at the serialised JSON…

{"clientId":"ea4e2b39-fb3f-4d7a-9329-00fe658e9dca","clientName":"Sherlock Holmes","clientAddress":"221B Baker Street"}

It seems the Web API was returning JSON in Camel case format but the MVC Web application was attempting to deserialise it in Pascal case (or more accurately, the same case as my model). This struck me as odd since I had set no explicit configuration of the serialiser in either project so it should have been using the default settings.

If you’re using Pascal case models, out of the box in .NET Core 3, your Web API will return your model as Camel case JSON, and the JsonSerializer will fail to deserialise it into an instance of the exact same model.

Why is this happening?

Let’s look at the problem in detail, firstly from the (API) serialisation end as it returns data and secondly from the (MVC) client end as it deserialises the API response:

1) API serialisation with Camel case settings

When setting up a new Web API project in .NET Core we typically set up our services in the Startup.cs class and call services.AddControllers(). Amongst other things this sets up the FormatterMappings needed to return data from controllers to their consumers including SystemTextJsonOutputFormatter, which is needed to return JSON string data.

In our earlier example, when we return the Ok() ActionResult from the API Controller Action Method, the SystemTextJsonOutputFormatter is employed to call the JsonSerializer and write out the data to the response. By default in MVC the JsonSerializer uses the Microsoft.AspNetCore.Mvc.JsonOptions class for deciding how to serialise the data back to the client.

The default JsonSerializerOptions in ASP.NET Core MVC 3.0 are set to the following:

PropertyNameCaseInsensitive = true;
PropertyNamingPolicy = JsonNamingPolicy.CamelCase;

Hence when our Controller endpoint returns data, we receive camel case JSON data back from the API.

2) Client deserialisation with case sensitive settings

Now when it comes to deserialising a response from a Web API in MVC, we might typically use an instance of the HttpClient class to make an HTTP GET call to an API endpoint. We would then call the new JsonSerializer.Deserialize() method to convert the response into an instance of our strongly typed Model.

However in .NET Core 3.0, when calling the JsonSerializer manually, by default it is initialised with case sensitive property deserialisation. You can see this for yourself by delving into the System.Text.Json source code. This means that any JSON in the API response has to exactly match the casing of your target model, which is typically Pascal case, or your properties will not be deserialised.
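
To see the default behaviour in isolation, here’s a minimal console repro (a sketch using a cut-down version of the Client model, not code from the actual project):

using System;
using System.Text.Json;

public class Client
{
	public Guid ClientId { get; set; }
	public string ClientName { get; set; }
}

public class Program
{
	public static void Main()
	{
		// Camel case JSON, exactly as the Web API returns it
		var json = "{\"clientId\":\"ea4e2b39-fb3f-4d7a-9329-00fe658e9dca\",\"clientName\":\"Sherlock Holmes\"}";

		// No options passed, so the serialiser's default case sensitive matching is used
		var client = JsonSerializer.Deserialize<Client>(json);

		Console.WriteLine(client.ClientId);             // 00000000-0000-0000-0000-000000000000
		Console.WriteLine(client.ClientName ?? "null"); // null
	}
}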

How do we resolve this?

We have a few potential options to tackle this:

1. Use custom JsonSerializerOptions when deserialising

We can pass in our own JsonSerializerOptions to the JsonSerialiser in the MVC client whenever we carry out deserialisation, to force the routine to recognise Camel case:

var options = new JsonSerializerOptions { PropertyNamingPolicy = JsonNamingPolicy.CamelCase };
var json = "{\"firstName\":\"John\",\"lastName\":\"Smith\"}";
var user = JsonSerializer.Deserialize<User>(json, options);

Alternatively in a similar manner, we can tell the JsonSerialiser that instead of a specific property naming policy, we just want to ignore case completely by passing in options with PropertyNameCaseInsensitive = true:

var options = new JsonSerializerOptions { PropertyNameCaseInsensitive = true };
var json = "{\"firstName\":\"John\",\"lastName\":\"Smith\"}";
var user = JsonSerializer.Deserialize<User>(json, options); 

This approach is obviously a little tedious and error prone as it means you end up having to remember to specify your custom options every time you wish to deserialise some JSON. Not ideal.

You might be thinking “How can I globally set the default options for the JsonSerializer?”

It would be nice if we could set our preferred approach globally for the MVC consumer project. You’d think we could just do something like this in our Startup.cs class ConfigureServices() method:

services.AddControllers()
	.AddJsonOptions(options => {
    	options.JsonSerializerOptions.PropertyNamingPolicy = JsonNamingPolicy.CamelCase;
    });

OR

services.AddControllers()
	.AddJsonOptions(options => {
    	options.JsonSerializerOptions.PropertyNameCaseInsensitive = true;
   	});

But unfortunately not – this code only configures JsonSerializer options for MVC and Web API controllers. This will not configure JSON settings application wide or for any of your own code that calls the JsonSerializer methods.

Since System.Text.Json.JsonSerializer is a static class with no publicly configurable default options, there is no way to set this on a global project basis (unless you want to get dirty and use reflection, which I don’t :P)

2. Set attributes on your model classes

You can decorate your model properties on the client with the JsonPropertyNameAttribute, e.g.:

[JsonPropertyName("clientName")]
public string ClientName { get; set; } 

However rather you than me if you have a large number of Model classes.

3. Change the casing of your models from Pascal case to Camel case

Just no.

4. Use reflection

Double no.

But if you really have to, this code should sort you out.

5. Employ an extension method to deserialize the JSON with your custom settings

Using an extension method to deserialize the JSON with your custom settings is an option but again, this has the problem of having to enforce usage of the extension method which on larger projects with many developers might prove tricky.
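
As a rough illustration, such an extension method might look something like this (a sketch; the method name and options are illustrative, not a prescribed API):

using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

public static class HttpContentJsonExtensions
{
	// Shared options so every call deserialises camel case JSON consistently
	private static readonly JsonSerializerOptions _options = new JsonSerializerOptions
	{
		PropertyNameCaseInsensitive = true
	};

	public static async Task<T> ReadAsJsonAsync<T>(this HttpContent content)
	{
		using var stream = await content.ReadAsStreamAsync();
		return await JsonSerializer.DeserializeAsync<T>(stream, _options);
	}
}

The earlier MVC code then becomes a one-liner: var client = await response.Content.ReadAsJsonAsync<Client>();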

6. Abstract the (de)serialiser behind an interface

We could wrap the JsonSerialiser behind an interface and ask the IoC container for an IJsonSerialiser. Our concrete representation could then set the appropriate JsonSerializerOptions and deserialise the data correctly.

This is not a bad option and it does mean we could more easily change the serialiser should we want to swap out System.Text.Json in future without breaking the consumers. Again this is all good as long as all developers are aware of the interface and don’t go off piste deserialising via JsonSerialiser directly.
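
A minimal sketch of that abstraction might look something like the following (the interface, class and registration names are illustrative, not a prescribed design):

using System.IO;
using System.Text.Json;
using System.Threading.Tasks;

public interface IJsonSerialiser
{
	Task<T> DeserialiseAsync<T>(Stream json);
	string Serialise<T>(T value);
}

public class SystemTextJsonSerialiser : IJsonSerialiser
{
	// Centralised options so every consumer gets the same behaviour
	private static readonly JsonSerializerOptions _options = new JsonSerializerOptions
	{
		PropertyNameCaseInsensitive = true
	};

	public async Task<T> DeserialiseAsync<T>(Stream json) =>
		await JsonSerializer.DeserializeAsync<T>(json, _options);

	public string Serialise<T>(T value) =>
		JsonSerializer.Serialize(value, _options);
}

// In Startup.ConfigureServices():
// services.AddSingleton<IJsonSerialiser, SystemTextJsonSerialiser>();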

7. Replace the Web API defaults for the OutputFormatter

We could replace the default options in the Web API. To do this add the following to your WebAPI Startup class in the ConfigureServices method:

services.AddControllers()
  .AddJsonOptions(o => o.JsonSerializerOptions.PropertyNamingPolicy = null);

This overrides the MVC defaults so that the API serialises JSON with the same casing as your model. This is by far the easiest option as long as you don’t have a requirement for Camel case JSON to be returned from your API.

8. Sledgehammer approach

Go back to Newtonsoft’s JSON.NET
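
If you do go down that route, switching MVC back to JSON.NET is a one-liner once the Microsoft.AspNetCore.Mvc.NewtonsoftJson package is installed (your own manual deserialisation calls would then use JsonConvert rather than JsonSerializer):

// In Startup.ConfigureServices()
services.AddControllers()
	.AddNewtonsoftJson();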

Surely there’s got to be a better way?

There has been some robust discussion around the subject here and whether the ability to set the JsonSerialiser defaults on an application-wide basis should be part of .NET Core going forward. I would personally like to see this feature, but as with everything it has performance trade-offs that Microsoft are keen to stay on top of in their flagship framework.

Notwithstanding the performance question, the ability to set the options globally for the JsonSerialiser has reportedly been added to the .NET 5.0 roadmap, which is expected to be released in November 2020, but I’m yet to see any details on how this change might be implemented.

I hope this goes some way to clearing up why things can go wrong with the new JsonSerializer in .NET Core 3.0. This is a small issue but if this helps one person avoid the time I wasted on it, then it was worth posting 🙂

Useful reading

When Fast query won’t play ball…

A colleague recently asked me for advice on an issue he had whereby a fast query was not returning all the items from a particular folder in the content tree.

Now leave aside for a minute the fact that fast query was being used dubiously (it was an ancient legacy part of the system with no remit given to refactor right now).

The query in question was a simple descendant folder search based on the template GUID:

fast:/sitecore/content/Home/Whatever//*[@@templateid='<guid>']

  • The items in question were all published and present in the web database
  • A manual SQL query of the Items table in the web database brought back the items with no obvious oddities on the records
  • We duplicated the existing missing item, published it, and that too was not brought back in the query

This rang a bell – I remembered reading some time ago about odd behaviour when the Descendants table was corrupted or in an inconsistent state.

According to the documentation, if the FastQueryDescendantsDisabled setting has ever been changed you need to rebuild the Descendants table.

To do this, navigate to the Sitecore Control Panel -> Databases -> Clean Up Database and run the Clean Up Database wizard.

And whadda ya know – all results were now being returned. 😉

Note: This command actually does more than rebuild the Descendants table, it also cleans up invalid language data, orphaned fields and items and more. See this nice StackExchange post for the full list of tasks that get carried out.

Now to convince the powers that be, that fast needs to be killed 😛

Running the Sitecore Publishing Service in Docker

In my recent talk at Leeds Sitecore User Group this month, I demoed the Sitecore Publishing Service running in Docker and discussed how we can harness that for easy roll-out and migration to developers, through test environments and to the Cloud.

What is Docker?

Docker has really taken hold in the last few years with a massive amount of time and money being invested in the technology. I won’t go into detail here as there are hundreds of resources readily available online.

Microsoft are embracing Docker and investing heavily in its future with their .NET Core and Azure implementations. Their Docker documentation is definitely one of the better resources and a worthwhile read.

Not everyone is a fan of Docker and not everything needs ‘Dockerising.’ However Docker containers can be useful when you need: 

  • Consistency between environments with no variations between instances
  • Portability – An ability to share an environment
  • Isolation – You want your environment to be agnostic of, and isolated from the host machine / Operating System
  • Repeatability – When you have a lot of people wanting to set up the same thing

Why use Docker for the Publishing Service?

The Publishing Service is a good candidate for containerisation. For large development teams or teams with masses of infrastructure, individually setting up the SPS can mean a large investment in time and resources. As long as your environment supports it, Docker can reduce that cost significantly.

Building the image

First up: clone the repo here: https://github.com/mjftechnology/Sitecorium.Docker.PublishingService

The instructions are present in the Readme.md file but for your convenience:

  • Download the SPS service zip file version that matches your requirements and place in the assets folder
  • Rename the zip to “Sitecore Publishing Service.zip”
  • Edit the Connection strings in the Docker-compose.yml file to a SQL Server instance the image can access. (It will install the SPS table schema in these DBs)
  • Edit the hostname in the Docker-compose.yml file to a hostname of your choice or leave the default
  • Open a command prompt and type: docker-compose up --build -d. After some time the image should be built and the container will be started with the name publishingservice.
  • Add the hostname sitecore.docker.pubsvc (or your custom host) to your hosts file
  • From here on in you can run or tear down the container with two easy commands: docker-compose up -d and docker-compose down

How does it work?

The example consists of 2 main parts:

  1. The docker-compose YAML file
  2. The Dockerfile

Docker Compose is a handy tool for orchestrating containers. It’s not mandatory in order to use the SPS, but I find it useful for supplying parameters to a Dockerfile and it makes it easier to add companion containers later on. The docker-compose.yml file in the repo should be customised, allowing you to enter your Sitecore database connection strings and desired hostname for the Publishing Service instance.
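
For illustration, a cut-down docker-compose.yml along these lines is the sort of thing being described (the service and build argument names here are purely illustrative – check the repo for the real ones):

version: "3.7"

services:
  publishingservice:
    build:
      context: .
      args:
        # Illustrative argument names only - the repo's Dockerfile defines its own
        SPS_HOSTNAME: "sitecore.docker.pubsvc"
        SPS_CONNECTIONSTRING_CORE: "user id=user;password=password;data source=host.docker.internal;database=SitecoreCore"
        SPS_CONNECTIONSTRING_MASTER: "user id=user;password=password;data source=host.docker.internal;database=SitecoreMaster"
        SPS_CONNECTIONSTRING_WEB: "user id=user;password=password;data source=host.docker.internal;database=SitecoreWeb"
    ports:
      - "80:80"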

The Dockerfile (text based instructions Docker uses to construct an image) used in my demo is where the heavy lifting takes place. It is based on the .NET Framework image from Microsoft which out of the box gives us:

  • Windows Server Core as the base OS
  • IIS 10 as the Web Server
  • .NET Framework
  • .NET Extensibility for IIS

*Note*

As the Publishing Service currently has a dependency on the .NET Framework and is therefore not fully cross platform, we must use a Windows image as a base. There is a plan for version 5 to be built on top of Sitecore Host (Sitecore’s service architecture going forward). In theory this will be fully .NET Core based and thus cross platform. This means we will be able to use a much smaller Linux container as a base image. Until then we’re stuck with bloated Windows!

On top of this base Windows image the Dockerfile then instructs Docker to:

  • Install the .NET Core Hosting Bundle which in turn installs the .NET Core runtime, library and ASP.NET Core module. It also allows us to run the SPS in IIS.
  • Copy the SPS zip file from the assets folder into the image and unzip it.
  • Set the connection strings to the Sitecore databases using the values passed in from the docker-compose file.
  • Upgrade the database schema as per the SPS install instructions 
  • Install the SPS into IIS using the hostname in the compose file.

Running the container

Once the container is up and running, your host entry is set and the Sitecore config updated you should be able to view the container status page in the usual manner by navigating to the status URL in a browser to ensure the service is up and running:

http://<your host>/api/publishing/operations/status 

If the service is running you should see a status response of zero a la:

{ "status": 0 } 

Time to try a publish!

Job completed!

Adding a Preview publishing target to the the Sitecore Publishing Service

Publishing speed out of the box has never been a strong point in Sitecore. A full site publish is akin to watching paint dry if your master database is even lightly used. Not only that, but any subsequent publish will be blocked by the first operation until it completes in its entirety. Not ideal but what can we do?

Step up the Sitecore Publishing Service (SPS), a separate site built on .NET Core designed to rapidly publish items via calls that go directly to the Sitecore databases.

If you’ve already decided that publishing is too slow and gone down the route of employing the Publishing Service, then bravo, good choice. If not what are you waiting for?!

The Publishing Service only ships with a default publish target – “Internet” (the web database). It’s therefore down to you to configure any custom targets such as Preview even if these are already set up in Sitecore.

Assumptions 

I’m going to assume you’ve already:

  • Installed and configured all the prerequisites, the Publishing Service and the Sitecore module package. If you get stuck, Stephen has a very nice guide here http://www.stephenpope.co.uk/publishing) – admittedly for v2.0 but pretty much all of the same info applies to v3 and 4)
  • Installed the .NET Core Windows Hosting  Bundle
  • Set up the IIS Site
  • Added a host entry to your hosts file
  • Set up your core, master and web connection strings in SPS config
  • Successfully published to the web database though the Sitecore UI

Configuring a new publishing target

The default publishing targets are configured in the file below relative to the installation folder:

 /config/sitecore/publishing/sc.publishing.xml

If you open this you’ll see a <Targets> node where the publish target configuration is stored. However the /config/sitecore folder contains the default files provided by Sitecore and as sc.publishing.xml forms part of that it should not be modified.

As with a traditional Sitecore site, rather than edit the default files provided we will patch the configuration we need on top in a separate patch file. 
For our purposes we will call this sc.preview.xml but you could call this sc.targets.xml or anything else you desire as long as it has a “sc” prefix. Where we choose to locate this patch file has an effect on when it is loaded.

Configuration files are loaded from the folder structure in the following order:
1) Files are loaded from /config/sitecore/
2) Files are loaded from /config/global/ (if it exists)
3) Files are loaded from /config/{environment}/ (if it exists)

Using the patch file below as a guide, for now drop the file in the /config/global/ folder. This will ensure it is always loaded in each type of environment:

<Settings>
    <Sitecore>
        <Publishing>
            <Services>
                <DefaultConnectionFactory>
                    <Options>
                        <Connections>
                            <Preview>
                                <Type>Sitecore.Framework.Publishing.Data.AdoNet.SqlDatabaseConnection, Sitecore.Framework.Publishing.Data</Type>
                                <LifeTime>Transient</LifeTime>
                                <Options>
                                    <ConnectionString>${Sitecore:Publishing:ConnectionStrings:Preview}</ConnectionString>
                                    <DefaultCommandTimeout>120</DefaultCommandTimeout>
                                    <Behaviours>
                                        <backend>sql-backend-default</backend>
                                        <api>sql-api-default</api>
                                    </Behaviours>
                                </Options>
                            </Preview>
                        </Connections>
                    </Options>
                </DefaultConnectionFactory>
                <StoreFactory>
                    <Options>
                        <Stores>
                            <Targets>
                                <Preview>
                                    <Type>Sitecore.Framework.Publishing.Data.TargetStore, Sitecore.Framework.Publishing.Data</Type>
                                    <ConnectionName>Preview</ConnectionName>
                                    <FeaturesListName>TargetStoreFeatures</FeaturesListName>
                                    <Id><!-- Your Preview target item Guid goes here--></Id>
                                    <ScDatabase>preview</ScDatabase>
                                </Preview>
                            </Targets>
                        </Stores>
                    </Options>
                </StoreFactory>
            </Services>
        </Publishing>
    </Sitecore>
</Settings>

As you can see from the XML, we set up a Connection element, and a Target element which references this connection and specifies the Publishing target item itself in Sitecore.

Ensure you replace the comment in the XML below:

<Id><!-- Your Preview target item Guid goes here--></Id>

With the appropriate GUID from your Publish target.

Connection strings

As part of setting up the SPS you should find the /config/global/sc.connectionstrings.json file already present and populated. All you need to do is add the preview line below and update the connection string appropriately:

 {
  "Sitecore": {
    "Publishing": {
      "ConnectionStrings": {
        "core": "user id=user;password=password;data source=(local)\SQLEXPRESS;database=SitecoreCore;MultipleActiveResultSets=True;ConnectRetryCount=15;ConnectRetryInterval=1",
        "master": "user id=user;password=password;data source=(local)\SQLEXPRESS;database=SitecoreMaster;MultipleActiveResultSets=True;ConnectRetryCount=15;ConnectRetryInterval=1",
        "web": "user id=user;password=password;data source=(local)\SQLEXPRESS;database=SitecoreWeb;MultipleActiveResultSets=True;ConnectRetryCount=15;ConnectRetryInterval=1",
        "preview": "user id=user;password=password;data source=(local)\SQLEXPRESS;database=SitecorePreview;MultipleActiveResultSets=True;ConnectRetryCount=15;ConnectRetryInterval=1"
      }
    }
  }
} 

Alternatively you can issue a command to the Publishing Host executable which will write the connection string line for you:

Sitecore.Framework.Publishing.Host.exe configuration setconnectionstring preview user id=user;password=password;data source=(local)\SQLEXPRESS;database=SitecorePreview;

If you use a JSON or INI file for your connection string configuration, carry out the equivalent change.

Multiple Active Results Sets (MARS)

Multiple Active Result Sets (MARS) is a feature that works with SQL Server to allow the execution of multiple batches on a single connection. SPS requires the use of MARS – The setting has to form part of your connection strings or you will get an error:

System.InvalidOperationException: The connection does not support MultipleActiveResultSets. 

If you’re adding the Preview connection strings manually, ensure the following is present as part of the connection:

MultipleActiveResultSets=True 

If you’re using the setconnectionstring command line approach it will get added automatically.

Upgrade the schema

Now the connection strings are hooked up we need to ensure the Preview database contains the necessary schema for the Publishing Service to function. Drop to a command line and execute the line:

Sitecore.Framework.Publishing.Host.exe schema upgrade --force 

This should result in something similar to the following output:

Schema Upgrade  
Upgrading all databases to version [ 2 ]  
Database: [ localhost\SitecorePreview ] … COMPLETE [ v0 => v2 ]  
Database: [ localhost\SitecoreCore ] … SKIPPED (Already v2) 
Database: [ localhost\SitecoreMaster ] … SKIPPED (Already v2)   
Database: [ localhost\SitecoreWeb ] … SKIPPED (Already v2)

Testing the service

We can now fire up the Publishing Service from the console which allows us to see debug messages and more verbose logging than in production mode. To run in development mode we run the following command from the installation folder with the environment flag:

Sitecore.Framework.Publishing.Host.exe --environment development

Since we’re using the development environment, our XML config changes must exist in the /config/development or the /config/global folder or they will not be incorporated. When the service loads, the list of registered targets is output to the console. If all is well and the config has patched correctly you should see your new Preview target listed in the output.

Happy days!

If all looks good, now is the time to head back into Sitecore and attempt to publish some content to the Preview database.

Ditch dev and go into prod!

If you managed to successfully publish in development mode, now is the time to ensure you can run in production mode in IIS. Ensure you have the sc.preview.xml config file located in either /config/production or the /config/global subfolder for it to take effect. Since we made changes to the SPS configuration, you must restart the application pool for the modifications to be recognised.

Check the service status

Next up you’ll want to check the service is running properly with your changes. Navigate to the following URL in a browser to ensure the service is up and running:

http://<your host>/api/publishing/operations/status

If all is ok you should see a response similar to this:

{ "status": 0 } 

Repeat the publish you did earlier and check the Publishing Service logs to ensure that the service published the item correctly.

Now bask as the Content Editors buy you many gifts for their new found ability to publish to Preview in super fast time 😉

Troubleshooting

If you encounter problems or the service won’t start, the first thing to check is the logs stored in the \logs subfolder of the application folder.

1. Error: "Could not resolve stores: 'Preview' of type target"

The error message above indicates your configuration has not been patched in correctly; check the file is present in the correct environment folder with the “sc.” prefix. For troubleshooting purposes you could take a backup of /config/sitecore/publishing/sc.publishing.xml and temporarily edit the original file with your configuration to help narrow down the issue. If the config works, it’s probably a patching issue. Don’t forget to restart the service after any configuration change!

2. System.AggregateException: One or more errors occurred. ---> System.ArgumentOutOfRangeException: Specified argument was out of the range of valid values.
 Parameter name: No connection was registered with name 'Preview'

This means that the XML in your config patch is not quite right. The name of the configured target must match the name of the connection element in the XML, so double-check for typos in your patch file.

Reflections on SUGCON 2019

Sitecore User Group Conference

Having missed last year’s SUGCON in Berlin due to family commitments I had no excuse not to visit the conference in London this year. Well, apart from a small family commitment in the form of my son due to arrive in a week! Still, it’s only a 4 hour trip home should he decide to arrive early 😉

I took the opportunity to go in early on the morning of Day 1 and update my Sitecore certification to version 9.1, kindly stewarded by Tamas and the team. Once that was done it was time to grab some food and meet up with friends, old and new, and catch up with what’s going on in the world of Sitecore.

Azure Devops – Donovan Brown

The first talk of the day was from Donovan Brown on Azure DevOps. This was an enlightening talk further strengthening the case for Azure DevOps in the coming months and years when it comes to dev/test workflow. I think it will only be a matter of time before even the most ardent of detractors will see the benefits Cloud technology can bring to repeatability, reliability and cost savings. That is, perhaps, until that fateful day when the providers decide to hit their captive audience with price rises 😛

I attended many other talks of high quality, but the stand out ones for me were:

Sitecore Host – Architecture and plugin design – Kieran Marron and Stephen Pope

https://www.sugcon.eu/wp-content/uploads/2019/04/SUGCON-Europe-2019-Kieran-Marron-Stephen-Pope-Sitecore-Host-Architecture-Plugin-Design.pdf

Sitecore Host runs on .NET Core and acts as a base framework for other services, e.g. Identity Server and Universal Tracker, with the Publishing Service soon to be migrated too. Host takes care of many common concerns such as Dependency Injection, configuration management, route registration and more. The idea behind it is to provide a consistent experience for any application on the Host, from installation to configuration, even allowing command line configuration without the need for a User Interface. This is something we’ve been waiting on for a long time and I find it reassuring and, to some extent, exciting that Sitecore has embraced splitting out the monolith into its constituent parts. For years it felt like Sitecore was gradually falling behind as it remained so tightly coupled to the .NET Framework and had a huge deployment real estate, but over the past couple of years the move to split out parts of the beast has rapidly gained pace and I applaud it. I will be interested to see what happens in the next few years and which candidates are picked to receive the Sitecore Host treatment. My only concern is whether the DevOps guys are sucking through gritted teeth at what will soon be landing on their plate. Perhaps with the adoption of Docker, this side of things might improve, but let’s wait and see…

Taming the Marketing Automation Engine – Nick Hills

https://www.sugcon.eu/wp-content/uploads/2019/04/SUGCON-Europe-2019-Nick-Hills-Taming-the-ma-engine.pdf

This was a timely talk for me as I’m currently looking at harnessing the power of Marketing Automation for my current client. Nick really cut to the heart of the matter and delivered an engaging talk covering pitfalls and common issues.

Even as a Developer with a limited appreciation and grasp of Marketing, I’m always astounded by the potential and power of Marketing Automation in Sitecore, so to Marketers it must be a dream. Such is its power and ease of use, I really do think Sitecore should have a compelling demo video on their homepage; perhaps a whistlestop tour of a campaign that engages customers in a plan, upsells and cross-sells. I think it would really captivate Marketers from the off and let the customers roll in!

Nick covered Automation from a Marketer’s perspective and then from a developer’s viewpoint. For me this is the single area of Sitecore besides personalisation that stands out as having the ability to generate a massive difference in potential revenue for customers. Once you see its ease of use and power, you can imagine the huge possibilities for revenue generation and added business value, e.g. the system could:

  • Send an email reminder to customers if they abandon a basket, potentially capturing some lost revenue.
  • Send post-purchase experience countdowns and attempt cross-sells/ upsells
  • Send a “customers also bought” email or employ Machine Learning to suggest items that match the buying habits and likely interests of the user.
  • Integrate with a weather API for companies that benefit from particular types of weather. It could be used to suggest bringing an umbrella or suncream or upselling a day trip such as a water park on a nice day or an aquarium on a wet day. A car hire company could use weather data to upsell a 4×4 vehicle for wet or snowy weather
  • And so on…

JSS immersion – lessons learned and looking ahead – Anastasiya Flynn

https://www.sugcon.eu/wp-content/uploads/2019/04/SUGCON-Europe-2019-Anastasiya-Flynn-JSS-Immersion.pdf

Anastasiya covered many lessons in her talk from scaffolding out components and structuring your project, to debugging techniques. As a complete n00b to JSS, I was hoping to glean hard won tips and tricks in this session that would kick-start my learning. However on reflection I would really have benefited from studying React and playing about with JSS beforehand as many of the concerns here were difficult to fully grasp without a base grounding in the tech.

She explained how the component factory worked (which went over my head having never used React or JSS). I intend to return to her slides when I get time to delve into JSS with either React or VueJS.

Debugging tips were shared in the form of the Chrome debugging tools and useful extensions for React or Vue, though I was surprised to see console.log being recommended as a fallback option. I know it has always been a quick and easy way to output information, but it does make me wonder whether the stack is now so complex that we are still waiting for the debugging tools to catch up.

I’m looking forward to tinkering with this in my spare time but the big question still lingering in my mind is, what if you don’t want to go completely greenfield? Many clients don’t have the budget to scrap their site and build from scratch and I’ve not read much in the way of blogs on how to piecemeal migrate to JSS or combine JSS with traditional Sitecore sites. It’s an area I’m very keen to investigate…

PaaS it on – Learnings from a year of Sitecore on Azure PaaS – Criss Titschinger

https://www.sugcon.eu/wp-content/uploads/2019/04/SUGCON-Europe-2019-Criss-Titschinger-PaaS-it-On-Learnings-From-a-Year-on-Azure-PaaS.pdf

I enjoyed this talk from Criss; what it brought home was how complicated a full Sitecore 9 deployment to the cloud can be versus Sitecore 8, especially in a Blue/Green setup. Criss brought many of his learning experiences to the table. A few highlights for me were:

  • Recommending the use of TFS 2018 / Azure DevOps and service principals rather than Publish Profiles. In Blue/Green setups it pays to be mindful of the fact that the CD server will be shared between both instances, which obviously puts a bigger strain on the resources of that machine.
  • Diagnosing issues with a specific instance in a cluster has always been a pain. In the past I’ve used a HTTP Header to identify servers. Criss suggested using the ARRAffinity Cookie together with browser plugins like EditThisCookie to allow you to edit the cookie and load a page on each instance.
  • Azure portal gives you Availability and performance stats, CPU, Memory reports. You can even kill the process from here.
  • Application Insights is a great tool for searching logs by time, or for a specific exception but pay attention to the amount of data being harvested, especially from test servers as you will be charged for all the data.
  • Azure Search is very easy to set up but it is limited to 1000 fields per index. This might necessitate excluding unnecessary fields in the ContentSearch configs, otherwise building an index can hang. Azure Search can be expensive so it might be an idea to use Solr for local development purposes.
  • Azure SQL audit / threat detection works very well and detects all failed logins or firewall anomalies.
  • Bandwidth can be pricey for bringing content down – think about media and putting it into a CDN to save costs.
  • If you can’t find what you’re looking for in the Portal, try resources.azure.com. The option to use HTTP/2 was active in Resources two weeks before it was available in the Portal.
  • Autoscaling sounds great in principle, but it means cold starts. And cold starts mean there is a period of time when your site is unavailable to serve requests, which is less than ideal. It is therefore advisable to reduce startup time as much as possible by streamlining Sitecore, e.g. making use of the Prefetch cache and precompiling views.

I must admit it was quite worrying to hear Microsoft’s seeming indifference to swapping out the instances making up part of his cluster. With Sitecore’s cold start time, this was enough to cause downtime and I’m very surprised this is still an issue with Azure at this point in time. Issues like this are always a worry for me: it is all well and good recommending the cloud to a client, but if they end up experiencing downtime and lost revenue through no fault of your own it looks bad. Stakeholders don’t care about the minutiae of machines or containers spinning up and down; they only see lost revenue and bonuses disappearing into thin air, and much shouting will occur. One of the audience did suggest some solutions to this after the talk, but I don’t feel it should be up to the client to have to monitor services with Application Insights just because Microsoft decide to rejig your services on a whim.
Personally, before moving to the cloud I wouldn’t feel comfortable without producing a full end-to-end plan with rollback and fallback options. In my opinion it makes sense to retain your on-prem setup to allow testing in parallel, and it also gives you a fallback option should the cloud setup not fully live up to expectations early on. It takes years to gain a good reputation and a minute to lose it.

10x your Sitecore development – Mark Cassidy

https://www.sugcon.eu/wp-content/uploads/2019/04/SUGCON-Europe-2019-Mark-Cassidy-10x-your-Sitecore-Development.pdf

It was a pleasure to finally hear one of Mark’s talks in person. A lot of what he described really resonated with me with regards to overengineering Sitecore solutions. Although we often try to produce the most flexible design for a client, beneath all the noise it is often time to market which is their number one priority. Get the product out the door then iterate. I am a big fan of this mentality as opposed to taking much longer to get the “perfect” solution before pushing it out the door. What was the perfect solution at the time often ends up changing, and so the time spent up front is often money taken from the budget that would be better spent elsewhere. By taking our ego as architects out of the equation, rather than delivering all-singing, all-dancing solutions, we can deliver a product better suited to our clients’ needs much more rapidly.

Having said all that, although this approach works well for agency clients and small customers, with Enterprise clients and larger Sitecore solutions with a huge real estate it can pay to slow things down. Often I will be tasked to work on a Sitecore site that has had years of development behind it, and although time to market is still important here, other factors such as performance and reliability under stress come to the fore. Nevertheless I usually like to offer clients short, middle and long term solutions to a given problem.

I can appreciate the use of the native Sitecore API as a default approach, especially when you have big teams of .NET Developers or even newly certified developers. I have seen people get lost quite easily when an ORM comes into the equation as it is not something they will find in the Sitecore documentation or training. It also hammers home the point of YAGNI (You aren’t going to need it). Do you actually need an ORM to achieve the MVP of the product or can it be shipped without? Personally I feel the advantages of automapping to strongly typed models and decoupling from the Sitecore API are beneficial on most occasions, but we should still stop and think. Mark put a very valid point across that the Sitecore API is very stable and hasn’t significantly changed in around 10 years. This is a big point for me that many architects seem to ignore. Take Glass for example: it’s a fantastic product provided free of charge and I use it regularly. However updating to version 5 from 4 does mean there is work to do from a development perspective and hence additional testing too. Can your client absorb the cost of the inevitable refactor? It’s something that we rarely talk about but I feel it should be a consideration when choosing an ORM. We should set the expectation that it won’t be maintenance free forever.

Using xDB at scale – Mike Edwards

This talk was a refreshing take from Mike on how to apply presentation and profile cards when dealing with a massive number of content pages (think thousands). Solving this maintenance nightmare involved creating a centralised config item in Sitecore with a rules field. The rules engine is run against this field on every request and if the predicate evaluates to true, the profile card is assigned.

Personalising content en masse also presents problems, and Mike talked through using a centralised item with personalisation applied which is accessible to the Content Editors. These renderings get injected into the layout XML at run time via the mvc.getxmlbasedlayoutdefinition pipeline, and I’ve seen this approach used to good effect by clients before. Although you take the hit on every request, with judicious caching you save the Content Editors weeks of time navigating through dialogs in the Content Editor.

In my experience making Content Editors’ lives easier is often not the number one priority when developing solutions. Developers are often far removed and don’t meet the editors, when they really should interact closely. Often the closest they get is a ticket in their sprint backlog or kanban board from the Content Editors requesting a feature or some additional functionality. Putting ourselves in the Content Editor’s shoes can help us deal with their frustrations. I try to remind myself that a CMS is designed to be used by non-technical users and I should try to empower people to do their jobs without constant developer support.

Sitecore 9 Architecture 101 – Thomas Eldblom

https://www.sugcon.eu/wp-content/uploads/2019/04/SUGCON-Europe-2019-Thomas-Eldblom-Sitecore-9-Architecture-101.pdf

Thomas started his talk with a cracking trip down memory lane showing how the Sitecore user interface has evolved over the years. I’d love to hear more from people about what it was like working with the early versions – Site Core, and SiteCore.

Thomas showed how the Sitecore real estate has expanded from a single role in version 4 to over 50 roles in 9.1. There are now 18 databases, 12 indexes and over 20 services. The advent of microservices has obviously caused this to balloon in recent years, but it’s starting to become a challenge to stay on top of both development and architectural concerns. We can mitigate this by investing in continuous learning, and Martina Wehlander has done a staggering amount of work in making massive improvements to the Sitecore documentation over the past few years. It is much easier now to locate official documentation and you can also easily target the version of Sitecore you are interested in to get version-specific information. The Master Sitecore YouTube channel is also a valuable resource.

The talk progressed through how the 9.1 architecture works at a high level in terms of CM, CD, Authentication, Publishing, Processing and Tracking. Thomas discussed Sitecore Omni, released as part of Sitecore 9.1, which is a range of products supporting headless scenarios where you want to decouple the delivery of the content from its rendering. This allows front end developers to build out a site and interface while still being able to harness the power of Sitecore, versus the traditional approach which is heavily dependent on .NET developer resources.

I must say I reflected a lot on this presentation and although the rate of progress is rapid and exciting in the Sitecore space, I can’t help but feel a little sad that the combined Sitecore Solution Architect / Developer role feels like it’s going to go the way of the dinosaur. There is simply too much to the platform to be an expert in everything. Perhaps over time, it will become necessary to specialise in the same way that a Full Stack Developer can’t reasonably be an expert in the entire full stack nowadays. They can be excellent in some technologies and competent in others, able to quickly adapt, as they should, but the estate is simply too large to have an up to date, intricate knowledge of all aspects. On the other hand I do love that there is so much to learn with this platform; it is constantly evolving and definitely not a dull technology stack from that point of view.

I struggled with Sitecore on Docker so you don’t have to – Sean Holmesby

Having recently got into tinkering with Docker and Sitecore I found myself nodding along with Sean’s talk, having gone through all the stages he did. I can really see the advantage of using Docker in Sitecore to spin containers and environments up and down quickly. Ideal for demos or sharing between developers, it cuts down on the time spent setting up environments. I can see Docker coming into its own in the near future as more and more Sitecore services are migrated to .NET Core. Linux containers have a small footprint so with .NET Core being cross platform it sounds like the perfect pairing. Many times I have known developers take a week to set up their local environment (no joke). With Docker in the picture there is no longer any excuse – with a “docker-compose up” you’re ready to go.

Measure if you want to go faster – Jeremy Davis

https://www.sugcon.eu/wp-content/uploads/2019/04/SUGCON-Europe-2019-Jeremy-Davis-Measure-if-you-want-to-go-faster.pdf

I love a good talk on performance and metrics, but I realised shortly after Jeremy started his talk that I’d already watched the video last year! Doh! It was a nice recap anyway, delivered succinctly by Jeremy, and I liked his judicious use of the VS debugging and profiling tools in VS 2017. I wholeheartedly agree with Jeremy when it comes to measuring the impact of our changes. How can you be confident your code performs if you don’t measure it? Historically I’ve tended to use JMeter/Gatling/DotTrace and a scattering of Sitecore tools to identify bottlenecks and the potential impact of code changes. I will be adding the Visual Studio Profiler to that (VS2019 caveat notwithstanding).

Sitecore 9.2 The Hidden Bits – Pieter Brinkman

Pieter rounded off the conference with a brief chat on the upcoming features in 9.2 which will be released this quarter. A few highlights for me were:

  • Horizon – the replacement for the editing experience in Sitecore. This will be tweaked based on feedback from MVPs and the community. I’m very much looking forward to seeing this in action.
  • Sitecore Host – this will be upgraded to the latest version of .NET Core
  • JSS SXA and Sitecore Forms integration as mentioned by Adam Weber and Kam Figy in their presentation on JSS.
  • Further work on Helix principles with new samples and designs with talk of a simpler structure for projects in Visual Studio.
  • Rainbow serialisation – merging changes with the current serialisation format, which contains a file length, has always been a pain as it necessitates updating the file length manually post merge.
  • Personalisation report – This looks like it will provide an easy way of displaying details of the personalisation currently in effect. I have a feeling this will not only be useful for Marketers but also useful to Developers for making performance optimisations.
  • Sitecore Install Assistant (SIA) – This is effectively a GUI wrapper for SIF providing easy installations for developers and non-technical users, and I’m looking forward to trying it out. Of all the things Pieter talked about, I have to say I am most looking forward to SIA for reasons I’ve talked about here. SIA will initially be available for XP0 only but I feel that is where it is needed most and I will welcome it with open arms 🙂

Missed sessions
Unfortunately as many of the sessions ran in parallel I missed out on some interesting talks. However the slides are currently online, hopefully with accompanying videos shortly:  https://www.sugcon.eu/video-downloads/

I’m looking forward to watching some of the great talks I missed including:

Summing up

Having had time to reflect on the conference, the future of Sitecore looks very bright indeed. The massive strides in architecture and feature set demonstrated at SUGCON show just how much effort is being put into the platform.

On a personal level I enjoyed visiting the Big Smoke. I am a big fan of architecture, so coming from my small backward town up north I am always amazed at some of the buildings.

It was also great to catch up with friends and see what they’ve been up to personally and professionally, as well as meet new ones in a very positive environment. The coffee and food were decent and the sessions ran like clockwork thanks to the efficient team of volunteers.

Looking forward to next year, wherever that may be!

Installing Sitecore? How hard can it be?

The sheer amount of work that has gone into creating the Sitecore Installation Framework (SIF) to date, and its power and flexibility, is mega impressive. It is a masterclass in PowerShell, and kudos to the team for the amount of effort and testing that must have gone into it.

However compared to the old days of spinning up a Sitecore instance in Sitecore Instance Manager (SIM) and having an installation up and running in minutes, the process has become fairly complex and laborious.

A local installation can often consist of executing the installer, Googling the error message then figuring out whether you need to manually roll back parts of the installation and rerun the whole installation, or issue a command to skip steps.

I must admit it does make me slightly uncomfortable to have to add the password for a SQL Server login with sysadmin privileges into a config file in plain text. These files can often end up not getting deleted when they should, and can have security implications depending on the environment. Small niggles such as using a SQL user with a dollar character as part of their password can cause problems, as this has to be escaped in the SIF script. Sounds trivial, but it’s another annoyance for users. Similarly, at the time of writing (v2.1) there seems to be no way of automatically creating a certificate and binding it to the Sitecore instance, or of removing it on uninstall.

Now it is very understandable that with the Sitecore real estate widening as it becomes more and more service oriented, there needed to be changes to the way the product is installed, but for simple installations and demo purposes I’m unsure if SIF is the right way to go. The fact SIF has its own channel in Sitecore Slack and a not insignificant number of questions on Stack Exchange suggests that perhaps the way we approach installation might need to change in future.

The people I’m particularly thinking of here, are those who want to investigate and assess the product for evaluation purposes. If they can’t get it installed quickly and smoothly Sitecore risk alienating potential customers and hence losing out on valuable revenue.

In the short term the community (specifically Rob Ahnemann) has kindly stepped in to bridge the gap with a GUI wrapper in the form of SIF-less(!). This is obviously very useful for developers and demo purposes, but not for the aforementioned people, who will probably want to follow the officially documented way of installing.

Thinking longer term, if Sitecore continue down their pursuit of breaking out and abstracting parts of the system, many parts will be able to run in lightweight Linux Docker containers. Docker is not currently supported by Sitecore but I hope they will embrace it in the near future as people like Per Manniche Bering have made some great strides with it and I personally feel that this will be the direction developer, test, and even production setups go in the next few years.
