Beware a bug in the latest Visual Studio – You may be debugging production

A small tale of caution when using Visual Studio 2019 and .NET Core – There is a bug present in the latest version whereby Visual Studio can unexpectedly ignore your environment variables with knock-on consequences.

I recently upgraded my copy of Visual Studio 2019 to the latest version (16.6.2) in order to resolve an issue with it repeatedly adding development environment variables to the web.config when debugging (and hence breaking production servers).

Unfortunately that seems to have uncovered another bug 😞

This cropped up on a .NET Core project I’m currently working on, whereby debugging an MVC solution on my machine would show me data from the production system.

Why is this happening?

Typically when working with .NET Core in Visual Studio you will use a launchSettings.json file to set the ASPNETCORE_ENVIRONMENT variable to “Development” so when running in IIS Express for example, your settings all use development parameters.

A typical launchSettings.json file:

  {
    "profiles": {
      "IIS Express": {
        "commandName": "IISExpress",
        "launchBrowser": true,
        "environmentVariables": {
          "ASPNETCORE_ENVIRONMENT": "Development"
        }
      }
    }
  }

Appsettings.json transforms are a common way of separating settings for different environments e.g. Development, Staging, Production. We typically do this by having a default appsettings.json file together with a bunch of environment specific appsettings.{Environment}.json files that will override the defaults depending on the current environment set.

Environment specific appsettings.json files are a convenient feature
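As a sketch of the layering (file names are real conventions, the setting values here are purely illustrative), with ASPNETCORE_ENVIRONMENT set to Development the value in appsettings.Development.json wins over the default. Note the .NET Core configuration reader tolerates comments in these files:

```json
// appsettings.json (defaults – may well contain production values)
{
  "ApiBaseUrl": "https://api.example.com"
}
```

```json
// appsettings.Development.json (overrides when the environment is Development)
{
  "ApiBaseUrl": "https://localhost:5001"
}
```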

Unfortunately in the latest version of Visual Studio 2019, if you have a web.config file present in your project then the environment variables set in launchSettings.json will be ignored and your environment will not be set correctly.

This means your environment specific appsettings.json will not be used and .NET core will use the settings in the default appsettings.json. In some instances this can include production settings. In Startup.cs env.IsDevelopment() will also incorrectly evaluate to false which could affect the middleware that gets run.
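To make the knock-on effect concrete, here's a minimal sketch of the sort of standard Startup.Configure branching that goes wrong – with the environment wrongly reported as Production, the development branch below never runs:

```csharp
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    if (env.IsDevelopment())
    {
        // Never reached when the web.config bug swallows ASPNETCORE_ENVIRONMENT
        app.UseDeveloperExceptionPage();
    }
    else
    {
        // Production-style error handling runs on your dev machine instead
        app.UseExceptionHandler("/Home/Error");
        app.UseHsts();
    }
}
```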

Steps to reproduce:

  1. dotnet new mvc
  2. dotnet new webconfig
  3. uncomment the <location> section in web.config
  4. debug the application using IISExpress

If like me, you’re working in a situation where you have network access to production servers from your development machine (probably not a good idea but possible with cloud servers and such like) you could inadvertently be connecting to production even though you are debugging your local solution.

This is obviously less than ideal – a tooling upgrade should not suddenly and unexpectedly run the risk of developers accidentally affecting production data.

Workarounds that don’t work

  1. Set the ASPNETCORE_ENVIRONMENT variable through a Windows environment variable to “Development” 
  2. Add the following code before the HostBuilder code: 
	Environment.SetEnvironmentVariable("ASPNETCORE_ENVIRONMENT", "Development");

Unfortunately these workarounds don’t work as they set the environment to “Development” but the correct environment specific appsettings.json file is still not used.

Workarounds that do work

  1. Remove the web.config if not needed
  2. Move all your production settings out to their own appsettings.Production.json file for safety reasons and put your development settings in the appsettings.json file.

If you know a better and more sensible workaround until Microsoft release a bug fix please let me know!

Case Study – Upgrading Sitecore 9.0.1 to 9.3 on an enterprise site

A few months ago a client asked if I could upgrade their Sitecore 9.0 Update 1 real estate to the latest version 9.3. Sitecore upgrades always require a fair amount of due diligence, planning and attention to detail in execution. Unfortunately talk to anyone who has carried out a Sitecore upgrade before (especially one that skips intermediate releases) and they will tell you the process is often not as simple as it sounds. If you or your company have the budget I would definitely recommend employing someone who has planned and executed an upgrade before, as experience can count for a lot of time saved in planning, efficiency and debugging issues later on. 

In this blog I won’t be discussing every step necessary to upgrade Sitecore but rather pointing out problem areas and things to consider before you start. The Sitecore upgrade guide is very comprehensive and I heartily recommend reading it thoroughly beforehand, as well as keeping it close to hand during the upgrade.

As always in every upgrade I offered my advice and expertise, but the client had the final say over how they wanted the solution to look/behave and the way the platform was to be distributed. Sounds fair enough to me as I won’t be supporting it 😉.

Obviously a lot of this post will contain information pertaining to the client’s individual circumstances and setup, but you may find a useful nugget or two to help with your own upgrade 🙂

The planning stage

Scope of the upgrade

The Solution Architect’s preferred approach for this upgrade was as follows:

  • Employ Docker to host the Sitecore ancillary services in Linux containers.
  • Move the Sitecore platform out from custom NuGet packages and employ a custom PowerShell script to install it.
  • Deprioritise any issues that go beyond the ability to run up the site and load test it.
  • Minimal impact approach – upgrade as few components as possible in the first iteration. If it doesn’t need to change then don’t change it. Because of the size of the solution there were a large number of opportunities to improve the codebase, but with limited resources this would prolong the project and possibly lose focus, given I was the only developer on the project.
  • Keep the in-webroot approach as-is – Again, change as little as possible to reach a working solution. Since the solution works inside the hosting folder, keep this approach for now – the merits and drawbacks of that approach are a debate for another day.
  • Time and resources allocated to the upgrade project amounted to around 1.5 people, if that.
  • Ongoing development from 10 other teams meant the work would need to be integrated into source control with care.

Success criteria

  • Make it as easy as possible for the 50 or so developers working day to day on the solution to continue their work seamlessly.
  • Improve the laborious process new developers to the team have to endure when setting up the solution using a set of documented manual steps.
  • Ensure the site was at least as performant on version 9.3 as it was pre-upgrade on 9.0.1.
  • Gather some metrics to gauge if possible whether 9.3 could allow analytics data collection to be switched on – currently due to the vast amount of traffic on the production site, the Sitecore 9.0.1 site struggles to run with analytics turned on.

The existing Sitecore real estate

This Sitecore solution in question was fairly large and established and ran the Visual Studio project inside the webroot. I’m not fully aware of the historical reasons for this, but I’m speculating that they preferred changes made to be reflected instantly in the website. I believe they also disliked the fact that the publish approach can leave a legacy of old files hanging about e.g. renamed assemblies or configuration files. Obviously this is solvable but the availability of Sitecore SMEs was limited and as in many businesses, most of the teams’ efforts were focused on revenue generating tasks, sometimes at the expense of Sitecore best practice.

For the current 9.0 setup the client used custom NuGet packages to distribute the Sitecore platform – that is, the Sitecore DLLs and files all wrapped up in NuGet packages. A PowerShell script inside each NuGet package was employed to drop files in the correct location on installation. However, these PowerShell scripts are no longer executed on install when using PackageReference and are not supported going forward. The Solution Architect’s plan was therefore to reference Sitecore’s own set of NuGet packages.

The proposed platform provisioning and developer experience

The proposal from the SA was for developers to deploy the Sitecore platform into their working folder via a custom PowerShell script and check out the Sitecore solution from source control, allowing them to continue working on the solution in the manner they were accustomed to.

The Sitecore Experience Platform has a vast number of logical roles now in version 9.3, and as part of the upgrade the SA prepared the required Sitecore ancillary services in Linux Docker containers, including the following:

  • xConnect Collection
  • xConnect Collection Search
  • xConnect Search Indexer
  • xDB Processing
  • xDB Reporting
  • Marketing Automation Engine
  • Marketing Automation Operations
  • Marketing Automation Reporting
  • Reference Data
  • Solr

These ancillary containers would be installed on developers’ machines by PowerShell scripts as part of a workstation installation script.

Taking this approach was a step on the journey toward bringing the developers’ workstations closer to the production real estate, and it makes it much easier to set up a new developer workstation. It would provide a repeatable set of steps that developers could use to rebuild their local installations and get back to a solid starting point – something sorely lacking for years with the manual Sitecore set-up detailed in a long list of workstation steps.

Encapsulating the Sitecore services in Linux containers was fantastic, though I had my misgivings about the approach to platforming developer machines, primarily because in my experience a Visual Studio Clean command will clear out of the bin folder not just the solution binaries but the platform binaries as well, requiring a redeploy of the Sitecore binaries. The .gitignore file also needs to be set up perfectly to ignore all the platform files. I would have preferred to use Docker containers for Sitecore itself, but unfortunately there was some resistance from the client to this approach, and Windows/Linux containers may not have played nicely together.

My preferred approach is as per Sitecore’s guidance, which is for developers to publish their changes to the site as and when necessary. This avoids unnecessary app pool recycles when saving code files in the solution, and saves time spent waiting for Sitecore to reload all its configuration – multiply this by 50 developers and that’s a lot of wasted time! This approach also more closely resembles the CI/CD environment, which means you can fail faster while still in development by spotting potential issues earlier. Keeping separation between your solution files and the Sitecore platform files also makes upgrades easier by maintaining a clear boundary between the installation and the solution.

I was surprised moving to the publish-outside-the-webroot approach had not been made a higher priority before now, as anecdotally the amount of complaining I heard from developers about Sitecore’s slow startup time meant a lot of time was being wasted on the shop floor in app pool restarts. However, in my experience in larger firms there’s all too often a higher priority / not enough time / not enough money to address such concerns…

The upgrade

So after establishing a plan of action it was time to crack on with the upgrade!

Undoing anomalies

As anyone that has done a Sitecore upgrade before can tell you, there are often many unexpected time sinks that crop up when you get stuck into the detail. To help mitigate this I dug around looking for any unwanted anomalies in the solution or pain points that might crop up later. One such anomaly (probably a result of running the solution in the webroot) was that the platform Sitecore.config file had been checked into source control at some point in the past and had been modified – obviously a no-no. So I set about comparing it to the stock config file and moving any alterations out to patch files. The Sitecore.config file could then be deleted from source control and remain part of the platform installation.

Upgrading the .NET Framework version

The first step was to upgrade the .NET Framework to support Sitecore 9.3. The new version supports .NET 4.7 upward, as shown in the compatibility table, so I decided to go with .NET Framework 4.8. However, with almost 50 projects in the solution this was a tedious prospect.

Enter the Target Framework Migrator. This little gem takes the time and hassle out of upgrading each project by hand via the project properties. I then went through and carried out a global find and replace on any assembly bindings making reference to .NET 4.6.2.
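For illustration, these are the kinds of places a 4.6.2 reference tends to linger (a sketch – the exact entries will vary by solution, so check what your own find and replace turns up):

```xml
<!-- In each .csproj: -->
<TargetFrameworkVersion>v4.8</TargetFrameworkVersion>

<!-- In web.config, the compilation/runtime targets also need bumping: -->
<compilation debug="true" targetFramework="4.8" />
<httpRuntime targetFramework="4.8" />
```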

Support packages

This client’s way of dealing with Sitecore Support patches was to wrap them up into custom NuGet packages and install them in the main Sitecore web project. Since 9.3 included all of these legacy fixes, they were now obsolete and we could safely remove them from the project.

Installing the new Sitecore 9.3 Nuget packages

By now, attempting to build the solution resulted in a few errors 😉 :

Since Sitecore no longer publishes the NoReferences NuGet packages, installing the 9.3 packages will pull in a lot of dependencies. For some projects in your solution, e.g. a Foundation project, you may not want or need to bring in all and sundry. The way around this is to use the dependency behaviour option of the NuGet Package Manager: if we set the dependency behaviour to “IgnoreDependencies” and install the package, the dependencies will not be installed.
The Package Manager Console command to use is:

Install-Package -Id <package name> -IgnoreDependencies -ProjectName <project name>

Then it is a case of installing the packages you need, until your compiler errors start to diminish. 🙂

Dependent Nuget package version upgrades

Some of the Nuget packages in use in the solution were fairly old and the Sitecore 9.3 binaries were compiled against later versions than we were currently using. Therefore many of these needed to be upgraded to be at least the same version. Cases in point were:

  • Castle Windsor which we upgraded to version 4.0.0
  • Castle Core upgraded to 4.2.1
  • Microsoft.Aspnet.MVC upgraded to 5.2.4

Again I needed to globally search the assembly bindings here to make sure no bindings were redirecting to old versions of these binaries.
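For illustration, the resulting web.config binding redirect for Castle.Core would end up looking something like the sketch below. The publicKeyToken is Castle’s published token; the newVersion must match the assembly version of the Castle.Core.dll actually shipped in the package you installed, so verify it against the DLL in your bin folder rather than trusting the package version number:

```xml
<dependentAssembly>
  <assemblyIdentity name="Castle.Core" publicKeyToken="407dd0808d44fbdc" culture="neutral" />
  <!-- Check newVersion against the assembly version of Castle.Core.dll in bin -->
  <bindingRedirect oldVersion="0.0.0.0-4.0.0.0" newVersion="4.0.0.0" />
</dependentAssembly>
```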

Breaking API changes

Something to be aware of when upgrading Sitecore is that there are often breaking changes to public APIs even between point releases which can make the upgrade a challenge. Usually these methods are marked as Obsolete in the previous version which should flag a compiler warning, however this doesn’t help you much if you are skipping point releases during an upgrade.
A few breaking changes I experienced to the Sitecore API that have occurred since version 9.0.1:

  • The Sitecore.Jobs.Job and Sitecore.Jobs.JobOptions classes have been removed in Sitecore 9.2. In their place you now have BaseJob and BaseJobOptions abstractions with DefaultJob and DefaultJobOptions concrete classes
  • We had some custom code that wrapped the static class Sitecore.Jobs.JobManager for unit testing purposes. This code required updating to accommodate the changes to BaseJob and BaseJobOptions above and match the new method signatures. This will probably be a common problem across the board as Sitecore refactors more and more of the codebase, introducing abstractions in favour of static classes. Ideally we would rewrite the unit testing classes to use the abstractions at this point, but we don’t always have the luxury of time or budget to do so.
  • Many obsolete methods have been removed such as the static Sitecore.Caching.CacheManager.FindCacheByName(string name) and Sitecore.Eventing.EventManager.QueueEvent<TEvent>() methods. This meant I ended up having to rewrite someone else’s code which inevitably didn’t have any unit tests. Not fun. 😞
  • The Sitecore.Pipelines.HttpRequest.HttpRequestArgs.Context property has been removed in favour of HttpRequestArgs.HttpContext
  • The Solr ServiceBaseAddress configuration setting has been removed as of version 9.0 Update 2 and is now set as a connection string. Therefore any patch file present to override the setting ContentSearch.Solr.ServiceBaseAddress in Sitecore.ContentSearch.Solr.DefaultIndexConfiguration.config can be deleted. The new connection string can be added to your ConnectionStrings.config file like so:
<add name="solr.search" connectionString="https://localhost:8994/solr" />
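As a rough sketch of the Job API change in the first bullet above – assuming the DefaultJobOptions constructor mirrors the old JobOptions signature, which you should verify against the 9.3 assemblies – starting a background job now looks something like this (RebuildTask is a hypothetical worker class):

```csharp
using Sitecore.Jobs;

// Hypothetical worker class – the method name is passed to the job as a string
public class RebuildTask
{
    public void Run() { /* long-running work */ }
}

// Pre-9.2 this would have been new JobOptions(...); the concrete
// DefaultJobOptions now stands in for the removed JobOptions class
var options = new DefaultJobOptions(
    "RebuildTask",      // job name
    "CustomJobs",       // category
    "shell",            // site name
    new RebuildTask(),  // instance to invoke
    "Run");             // method to call

var job = JobManager.Start(options);
```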

Pipeline processor adjustments

If you have custom pipeline processors that override some of the defaults in Sitecore you might be faced with some compiler errors: since some of the built-in base classes no longer have parameterless constructors, you may have to call the base class constructor with parameters from your custom processor classes or your code will no longer compile.
In addition you may get the following error at runtime and you will need to adjust the configuration:

Could not create instance of type: MyProcessor. No matching constructor was found.

Some of the custom processors that were patched into pipelines such as <mvc.getPageItem> and <mvc.renderRendering> now require an extra attribute of resolve="true" adding to the XML config to allow the dependency injection mechanism in Sitecore to work in the base classes.
For example, for a custom cache key generation method the config patching had to change from:

  <processor type="MyProcessor, MyDLL" patch:instead="processor[@type='Sitecore.Mvc.Pipelines.Response.RenderRendering.GenerateCacheKey, Sitecore.Mvc']" />

to:

  <processor type="MyProcessor, MyDLL" patch:instead="processor[@type='Sitecore.Mvc.Pipelines.Response.RenderRendering.GenerateCacheKey, Sitecore.Mvc']" resolve="true" />

Get the solution compiling

My next aim was to get the solution compiling with the new DLLs. I find this is a tedious but necessary cycle of compiling the project and watching the error count slowly reduce as you tick off problem after problem. When this step is complete you are well on your way and your solution may run to some extent, but don’t crack open the champagne yet – there’s plenty left to do.

Caching changes

There has been a change in version 9.3 which automatically clears the rendering cache on publish for sites that use it; it requires no additional configuration as long as cacheHtml="true" is set on your site in the Sites configuration. The publish:end event handler is configured like this in a standard installation:

<event name="publish:end">
  <handler type="Sitecore.Publishing.SmartHtmlCacheClearer, Sitecore.Kernel" method="ClearCache" resolve="true" />
</event>

If you don’t want the cache for a site to be cleared when you publish you can add the preventHtmlCacheClear attribute to the site definition like this:

<site name="custom_website" cacheHtml="true" preventHtmlCacheClear="true" … />

LinkManager changes

There have been changes in 9.3 to how the LinkManager operates, with Sitecore.Links.UrlOptions becoming obsolete and new configuration added. I wrote about this, but since then Volodymyr Hil has summed up the changes far better than I could have in his blog here.

Upgrading Glass Mapper

If you’re using Glass like this client does, it will require an update to version 5. Glass has many changes in v5, including a new way of accessing content from Sitecore using the new IMvcContext, IRequestContext and IWebFormsContext abstractions. I had to make changes in this solution around IsLazy attributes on model classes as Glass lazy loads by default now.

I won’t detail all the changes here as they are well documented.

What I will say is, if time and budget are against you as they were for me (being the only developer assigned to this upgrade for a few weeks), you can in fact stick with the old ISitecoreContext class. It will still work, despite being made obsolete in favour of the new contexts such as IMvcContext. Obviously this is not recommended as it will almost certainly be removed in the next version and you are just kicking the can down the road, but in my case the time and budget did not allow me to rewrite every class in the system to use the new abstractions.

The main problem I encountered after upgrading Glass was a bunch of runtime errors due to missing virtual property modifiers in model classes. It appears that in old versions of Glass Mapper, model properties still map correctly even without the virtual modifier, but not any longer. This only manifests itself when the model is hydrated (i.e. at runtime) so it’s tricky to locate if you have a lot of models. Which this client does! I found as many as I could by carrying out all the common journeys on the site. I’m sure there is a clever regex you could use to do a global search in Visual Studio. Alternatively you could harness an automated tool to send an HTTP request to every page of the site looking for 500 response errors.
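In the spirit of that “clever regex”, here is a rough, self-contained sketch (not a tool we actually used) that scans .cs files for public auto-properties missing the virtual modifier. It will throw up false positives – not every model is a Glass model – so treat the output as a starting point for review rather than a definitive list:

```csharp
using System;
using System.IO;
using System.Text.RegularExpressions;

class FindNonVirtualProperties
{
    static void Main(string[] args)
    {
        // Matches e.g. "public string Title { get; set; }"
        // but not "public virtual string Title { get; set; }"
        var pattern = new Regex(
            @"public\s+(?!virtual\b)(?!static\b)[\w<>\?\[\]]+\s+\w+\s*\{\s*get;",
            RegexOptions.Compiled);

        // args[0] is the root folder of the solution to scan
        foreach (var file in Directory.EnumerateFiles(args[0], "*.cs", SearchOption.AllDirectories))
        {
            foreach (var line in File.ReadLines(file))
            {
                if (pattern.IsMatch(line))
                    Console.WriteLine($"{file}: {line.Trim()}");
            }
        }
    }
}
```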

Upgrading Unicorn

Unicorn required upgrading as 4.1.1 is the minimum version that supports Sitecore 9.3. This was very easy to upgrade as it formed part of a Serialisation Foundation module.

Analyse Web.config for changes

It is a good idea when upgrading to compare the current web.config to a standard OOTB web.config for the current version. So I did this for v9.0.1 and noted that there were many customisations present. I then compared the new 9.3 web.config with the 9.0.1 web.config and brought across the necessary changes. For example, in the 9.1 release a change was made to the <authentication> node for the Identity Server changes, and that needed to be carefully merged in as:

<authentication mode="None"></authentication>

Upgrade the Databases

I recommend taking a copy of your current Sitecore databases and running the appropriate SQL scripts on the databases as specified in the installation guide. This is usually one of the least problematic steps of an upgrade. But don’t get complacent yet 😛

Install the Sitecore upgrade package

Once the site builds and you can run it up without error, you can install the content update package provided in the platform installation files. In my case this failed to complete without error as the base templates had been customised.

This is the kind of problem that is hard to anticipate in any Sitecore upgrade, and is a reason contingency time needs to be allocated for unforeseen events and hidden issues. So at this point I had to branch off, investigate these templates and find out what had been changed. Of course there was no documentation on why these changes had been made, so it required carefully deducing whether the modifications were still needed.

Checking the assembly versions

Sitecore provides an Assembly list (See Release information -> Assembly list) of all the Sitecore binaries and their respective versions. I compared each binary to the Sitecore Assembly list to ensure the versions were correct.
There is a small mistake in this document – it contains 2 rows for PDFSharp DLLs that have an erroneous “resource” string in the filename, so just ignore that part.

Run it up!

With any luck, by now your solution will be working enough to navigate around the system and note down any problems that need resolving.

Check the Error logs

Once the site is up and running it’s time to delve straight into the Sitecore logs to clear up any errors that are being logged. If you’re not using it already Sitecore Log Analyzer is an old but gold tool for analysing the Sitecore error logs.

Solr Indexing Issue #1 – Missing index errors

If you’re not using SIF to install Solr (which you most likely won’t on a larger team – we’re using a Docker container here, remember) then the standard Sitecore config files in v9.3 will leave you with missing index errors in the log due to their configuration out of the box for the following indexes:

  • sitecore_testing_index
  • sitecore_suggested_test_index

The core name OOTB for sitecore_testing_index is set as follows:

<param desc="core">Sitecore-sitecore_testing_index</param>

The core name OOTB for sitecore_suggested_test_index is as follows:

<param desc="core">Sitecore-sitecore_suggested_test_index</param>

These names do not match the core names in SOLR so I had to create a patch for this to use the index id attribute as the core name as per the other indexes:

<?xml version="1.0" encoding="utf-8"?>
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/" xmlns:role="http://www.sitecore.net/xmlconfig/role/" xmlns:search="http://www.sitecore.net/xmlconfig/search/">
  <sitecore role:require="Standalone or ContentManagement" search:require="Solr">
    <contentSearch>
      <configuration>
        <indexes>
          <!-- The OOTB Sitecore config has the incorrect core name set for the sitecore_testing_index and sitecore_suggested_test_index -->
          <index id="sitecore_testing_index">
            <param desc="core">$(id)</param>
          </index>
          <index id="sitecore_suggested_test_index">
            <param desc="core">$(id)</param>
          </index>
        </indexes>
      </configuration>
    </contentSearch>
  </sitecore>
</configuration>
This eliminated the index errors as the names now matched the SOLR cores.

Solr indexing issue #2 – ComputedIndexField error

SOLR was failing to index some items as it was generating an error when trying to return a URL as the result of a Computed Field:

207140 15:41:35 WARN  Could not compute value for ComputedIndexField: item_url
Exception: System.NullReferenceException
Message: Object reference not set to an instance of an object.
Source: Sitecore.Kernel
   at Sitecore.Links.LinkProvider.GetDefaultUrlBuilderOptions()
   at CustomLinkProvider.GetItemUrl(Item item)

The client was using a link provider switcher which chooses a custom LinkProvider class at run time to generate URLs for Sitecore items. This is a very common approach used by many users of Sitecore (SXA has made this much easier). Many implementations generate URLs differently depending on the context site; this one generated different URLs based on the item’s location within the content tree.
As discussed earlier there have been changes to the LinkManager, and the LinkProvider base class has changed internally since 9.0.1. The LinkProvider.GetItemUrl method now uses an instance of the Sitecore.Links.UrlBuilders.ItemUrlBuilder class to build the URL. Internally the LinkProvider class has a new public Initialize method which is used to create an instance of the ItemUrlBuilder populated with the configuration from the <links><itemUrlBuilder> node in the Sitecore config. This new Initialize() method usually gets called by Sitecore via the LinkManager, but not in our case: unfortunately the computed field code was manually instantiating one of the custom provider implementations, so the provider was never initialised and it threw the error above.
There are 2 approaches to resolve this:

A light touch approach is to simply initialise the LinkProvider (lame, but the code works):

var customLinkProvider = new CustomLinkProvider();
return customLinkProvider.GetItemUrl(item);

Or go via the standard LinkManager, which does initialise the ItemUrlBuilder correctly:

return Sitecore.Links.LinkManager.GetItemUrl(item);
After a change like this I always compare the URLs generated pre- and post-upgrade to ensure they match.

Solr Indexing issue #3 – Hexadecimal value is an invalid character

Another series of errors encountered in the Crawling log when reindexing the master core were:

Exception: System.ArgumentException
Message: '.', hexadecimal value 0x00, is an invalid character.
Message: '', hexadecimal value 0x01, is an invalid character
Message: '', hexadecimal value 0x08, is an invalid character.
Message: '', hexadecimal value 0x1D, is an invalid character.
Message: '', hexadecimal value 0x1F, is an invalid character.

There is some detailed discussion about this issue on Stack Exchange. Using that post as a guide, I ran the following SQL to identify problem records:

    SELECT ItemId, FieldId, Value FROM (
        SELECT ItemId, FieldId, Value FROM [dbo].[SharedFields]
        UNION ALL
        SELECT ItemId, FieldId, Value FROM [dbo].[UnversionedFields]
        UNION ALL
        SELECT ItemId, FieldId, Value FROM [dbo].[VersionedFields]
    ) A
    WHERE Value LIKE '%' + CHAR(0x00) + '%'
    OR Value LIKE '%' + CHAR(0x01) + '%'
    OR Value LIKE '%' + CHAR(0x08) + '%'
    OR Value LIKE '%' + CHAR(0x1D) + '%'
    OR Value LIKE '%' + CHAR(0x1F) + '%'

However that brought back over half a million records.

I tried disabling the SOLR indexing for some file types by removing the <extension>pdf</extension> and <extension>doc</extension> inclusions inside the <mediaIndexing> configuration in Sitecore.ContentSearch.Solr.DefaultIndexConfiguration.config, but still experienced the errors when indexing.

Ultimately Sitecore Support recommended patching out all the <extension> inclusions which resolved the error.

And finally

Unfortunately I didn’t get to see this project through to completion due to Covid-19 bringing the contract to an end, but hopefully there are one or two things in here that can help you should you be about to attempt a similar upgrade. As always with Sitecore, YMMV, so do your own due diligence.

Finally to leave you with a couple of tips:

  • Have a bottle of red wine close to hand
  • Keep the Sitecore upgrade guide close to hand – multiscreen setups are very useful here! 
  • Occasionally close Visual Studio and reopen it to get rid of transient errors where the IDE gets itself in a bit of a twist.
  • Sitecore Slack is a useful resource to have to hand – feel free to message me @sitecorium if you get stuck, and I’ll do my best to help.

Good luck!

Serialisely painful

When reading data from a .NET Core 3.0 API is much harder than it should be…

Towards the end of last year I fired up a new ASP.NET Core 3.0 project for a client of mine as my first foray into building a real full .NET Core business application from end to end rather than simply producing prototypes and making changes to v2.2 projects.

This particular piece of work consists of a .NET Core MVC Website and Web API, and so as part of this it will need to serialise/deserialise data to and from the API side of things.

Whereas traditionally with the .NET Framework and older versions of .NET Core we’d reach for our favourite 3rd party serialisation library, with version 3 we no longer have to, as more and more functionality is being baked into the product. With the introduction of ASP.NET Core 3.0 the default JSON serialiser has been changed from Newtonsoft.Json to the native System.Text.Json serialiser. It therefore made sense for me to employ that instead of installing JSON.NET, especially as the native API reportedly outperforms it in most circumstances.

This sounded like a great idea and I set about building out the comms code between the API and the client. However I immediately encountered a problem with a simple call to one of the API endpoints from my Web site:

The data was being sent back from the API but my model class was empty when deserialised, and since my web page was expecting the model to be populated, the site was erroring. This was very strange since my Models were identical on both the API and MVC Web site ends.

The MVC web site consistently failed to deserialise the API response using a model that was identical on both the API side and the Web app side.

The problem code

Let’s demonstrate the issue I had by looking at some code that exhibits the problem:

The Model POCO class (identical in both API and MVC projects):

public class Client
{
    public Guid ClientId { get; set; }
    public string ClientName { get; set; }
    public string ClientAddress { get; set; }
}

API code:

public async Task<ActionResult> GetClient(Guid clientId)
{
    var client = await _repository.GetClient(clientId);
    return Ok(client);
}

MVC code:

// Call API to get Client info
var response = await _httpClient.GetAsync($"Clients/{id}");
using var responseStream = await response.Content.ReadAsStreamAsync();
var client = await JsonSerializer.DeserializeAsync<Client>(responseStream);

In the example above the model will fail to deserialise correctly and all properties will simply be left at their defaults (null for the strings, Guid.Empty for the ClientId):

This was puzzling until I took a closer look at the serialised JSON…

{"clientId":"ea4e2b39-fb3f-4d7a-9329-00fe658e9dca","clientName":"Sherlock Holmes","clientAddress":"221B Baker Street"}

It seems the Web API was returning JSON in Camel case format but the MVC Web application was attempting to deserialise it in Pascal case (or more accurately, the same case as my model). This struck me as odd since I had set no explicit configuration of the serialiser in either project, so both should have been using the default settings.

If you’re using Pascal case models, out of the box in .NET Core 3, your Web API will return your model as Camel case JSON, and the JsonSerializer will fail to deserialise it into an instance of the exact same model.

Why is this happening?

Let’s look at the problem in detail, firstly from the (API) serialisation end as it returns data and secondly from the (MVC) Client end as it deserialises the API response:

1. API serialisation with Camel case settings

When setting up a new Web API project in .NET Core we typically set up our services in the Startup.cs class and call services.AddControllers(). Amongst other things this sets up the FormatterMappings needed to return data from controllers to their consumers including SystemTextJsonOutputFormatter, which is needed to return JSON string data.

In our earlier example, when we return the Ok() ActionResult from the API Controller Action Method, the SystemTextJsonOutputFormatter is employed to call the JsonSerializer and write out the data to the response. By default in MVC the JsonSerializer uses the Microsoft.AspNetCore.Mvc.JsonOptions class for deciding how to serialise the data back to the client.

The default JsonSerializerOptions in ASP.NET Core MVC 3.0 are set to the following:

PropertyNameCaseInsensitive = true;
PropertyNamingPolicy = JsonNamingPolicy.CamelCase;

Hence when our Controller endpoint returns data, we receive camel case JSON data back from the API.

2. Client deserialisation with case sensitive settings

Now when it comes to deserialising a response from a Web API in MVC, we might typically use an instance of the HttpClient class to make an HTTP GET call to an API endpoint. We would then call the new JsonSerializer.Deserialize() method to convert the response into an instance of our strongly typed Model.

However in .NET Core 3.0, when calling the JsonSerializer manually, by default it is initialised with case sensitive property deserialisation. You can see this for yourself by delving into the System.Text.Json source code. This means that any JSON in the API response has to exactly match the casing of your target model, which is typically Pascal case, or your properties will not be deserialised.
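You can reproduce this behaviour outside of MVC entirely with the Client model from earlier and a few lines of code (the demo class wrapper here is just for illustration):

```csharp
using System;
using System.Text.Json;

public class Client
{
    public Guid ClientId { get; set; }
    public string ClientName { get; set; }
    public string ClientAddress { get; set; }
}

public static class CaseSensitivityDemo
{
    public static void Run()
    {
        // Camel case JSON, exactly as the Web API returns it
        var json = "{\"clientId\":\"ea4e2b39-fb3f-4d7a-9329-00fe658e9dca\",\"clientName\":\"Sherlock Holmes\"}";

        // Default options are case sensitive: nothing matches our Pascal case properties
        var client = JsonSerializer.Deserialize<Client>(json);
        Console.WriteLine(client.ClientName == null);   // True

        // Opting in to case insensitivity fixes the problem
        var options = new JsonSerializerOptions { PropertyNameCaseInsensitive = true };
        client = JsonSerializer.Deserialize<Client>(json, options);
        Console.WriteLine(client.ClientName);           // Sherlock Holmes
    }
}
```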

How do we resolve this?

We have a few potential options to tackle this:

1. Use custom JsonSerializerOptions when deserialising

We can pass in our own JsonSerializerOptions to the JsonSerialiser in the MVC client whenever we carry out deserialisation, to force the routine to recognise Camel case:

var options = new JsonSerializerOptions { PropertyNamingPolicy = JsonNamingPolicy.CamelCase };
var json = "{\"firstName\":\"John\",\"lastName\":\"Smith\"}";
var user = JsonSerializer.Deserialize<User>(json, options);

Alternatively in a similar manner, we can tell the JsonSerialiser that instead of a specific property naming policy, we just want to ignore case completely by passing in options with PropertyNameCaseInsensitive = true:

var options = new JsonSerializerOptions { PropertyNameCaseInsensitive = true };
var json = "{\"firstName\":\"John\",\"lastName\":\"Smith\"}";
var user = JsonSerializer.Deserialize<User>(json, options); 

This approach is obviously a little tedious and error prone, as it means you have to remember to specify your custom options every time you wish to deserialise some JSON. Not ideal.

You might be thinking “How can I globally set the default options for the JsonSerializer?”

It would be nice if we could set our preferred approach globally for the MVC consumer project. You’d think we could just do something like this in our Startup.cs class ConfigureServices() method:

services.AddControllersWithViews()
    .AddJsonOptions(options =>
    {
        options.JsonSerializerOptions.PropertyNamingPolicy = JsonNamingPolicy.CamelCase;
    });

services.AddControllersWithViews()
    .AddJsonOptions(options =>
    {
        options.JsonSerializerOptions.PropertyNameCaseInsensitive = true;
    });

But unfortunately not – this code only configures JsonSerializer options for MVC and Web API controllers. This will not configure JSON settings application wide or for any of your own code that calls the JsonSerializer methods.

Since System.Text.Json.JsonSerializer is a static class with no global configuration hook, there is no way to set default options on a project-wide basis (unless you want to get dirty and use reflection, which I don’t :P)

2. Set attributes on your model classes

You can decorate your model properties on the client with the JsonPropertyNameAttribute, e.g.:

[JsonPropertyName("clientName")]
public string ClientName { get; set; }

However rather you than me if you have a large number of Model classes.

3. Change the casing of your models from Pascal case to Camel case

Just no.

4. Use reflection

Double no.

But if you really have to, this code should sort you out.

5. Employ an extension method to deserialize the JSON with your custom settings

Using an extension method to deserialize the JSON with your custom settings is an option but again, this has the problem of having to enforce usage of the extension method which on larger projects with many developers might prove tricky.
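As a sketch of the idea (HttpContentJsonExtensions and ReadAsAsync are my own names here, not a framework API):

```csharp
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

public static class HttpContentJsonExtensions
{
    // Shared options so every call site deserialises consistently
    private static readonly JsonSerializerOptions Options = new JsonSerializerOptions
    {
        PropertyNameCaseInsensitive = true
    };

    public static async Task<T> ReadAsAsync<T>(this HttpContent content)
    {
        using var stream = await content.ReadAsStreamAsync();
        return await JsonSerializer.DeserializeAsync<T>(stream, Options);
    }
}
```

Callers then write `var client = await response.Content.ReadAsAsync<Client>();` instead of invoking the JsonSerializer directly.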

6. Abstract the (de)serialiser behind an interface

We could wrap the JsonSerialiser behind an interface and ask the IoC container for an IJsonSerialiser. Our concrete representation could then set the appropriate JsonSerializerOptions and deserialise the data correctly.

This is not a bad option and it does mean we could more easily change the serialiser should we want to swap out System.Text.Json in future without breaking the consumers. Again this is all good as long as all developers are aware of the interface and don’t go off piste deserialising via JsonSerialiser directly.
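A minimal sketch of what that could look like (the interface and class names are illustrative, not an existing API):

```csharp
using System.Text.Json;

public interface IJsonSerialiser
{
    string Serialise<T>(T value);
    T Deserialise<T>(string json);
}

public class SystemTextJsonSerialiser : IJsonSerialiser
{
    // The concrete implementation owns the options, so consumers can't get them wrong
    private static readonly JsonSerializerOptions Options = new JsonSerializerOptions
    {
        PropertyNameCaseInsensitive = true
    };

    public string Serialise<T>(T value) => JsonSerializer.Serialize(value, Options);

    public T Deserialise<T>(string json) => JsonSerializer.Deserialize<T>(json, Options);
}
```

Registered once in ConfigureServices (e.g. `services.AddSingleton<IJsonSerialiser, SystemTextJsonSerialiser>();`), consumers then depend on the interface rather than the static JsonSerializer.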

7. Replace the Web API defaults for the OutputFormatter

We could replace the default options in the Web API. To do this add the following to your Web API Startup class in the ConfigureServices method:

services.AddControllers()
    .AddJsonOptions(o => o.JsonSerializerOptions.PropertyNamingPolicy = null);

This will override the defaults and set them back to the .NET Core default of serialising out the JSON with the same casing as your model. This is by far the easiest option as long as you don’t have a requirement for Camel case JSON to be returned from your API.

8. Sledgehammer approach

Go back to Newtonsoft’s JSON.NET

Surely there’s got to be a better way?

There has been some robust discussion around the subject here and whether the ability to set the JsonSerialiser defaults on an application wide basis should be part of .NET Core going forward. I would personally like to see this feature, but as with everything it has performance trade-offs that Microsoft are keen to stay on top of in their flagship framework.

Notwithstanding the performance question, the ability to set the options globally for the JsonSerialiser has reportedly been added to the .NET 5.0 roadmap, which is expected to be released in November 2020, but I’m yet to see any details on how this change might be implemented.

I hope this goes some way to clearing up why things can go wrong with the new JsonSerializer in .NET Core 3.0. This is a small issue but if this helps one person avoid the time I wasted on it, then it was worth posting 🙂


When Fast query won’t play ball…

A colleague recently asked me for advice on an issue he had whereby a fast query was not returning all the items from a particular folder in the content tree.

Now leave aside for a minute the fact that fast query was being used dubiously (it was an ancient legacy part of the system with no remit given to refactor right now).

The query in question was a simple descendant folder search based on the template GUID:
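For illustration, that kind of descendant search on a template GUID looks something like this (the path and GUID here are placeholders rather than the actual query):

```
fast:/sitecore/content/Home//*[@@templateid = '{00000000-0000-0000-0000-000000000000}']
```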


  • The items in question were all published and present in the web database
  • A manual SQL query of the Items table in the web database brought back the items with no obvious oddities on the records
  • We duplicated one of the missing items, published it, and the copy too was not returned by the query

This rang a bell – I remembered reading some time ago about odd behaviour when the Descendants table was corrupted or in an inconsistent state.

According to the documentation if the FastQueryDescendantsDisabled setting has ever been changed you need to rebuild the Descendants table.

To do this, navigate to the Sitecore Control Panel -> Databases -> Clean Up Database and run the Clean Up Database wizard.

And whadda ya know – all results were now being returned. 😉

Note: This command actually does more than rebuild the Descendants table, it also cleans up invalid language data, orphaned fields and items and more. See this nice StackExchange post for the full list of tasks that get carried out.

Now to convince the powers that be, that fast needs to be killed 😛

Running the Sitecore Publishing Service in Docker

In my recent talk at Leeds Sitecore User Group this month, I demoed the Sitecore Publishing Service running in Docker and discussed how we can harness that for easy roll-out and migration to developers, through test environments and to the Cloud.

What is Docker?

Docker has really taken hold in the last few years with a massive amount of time and money being invested in the technology. I won’t go into detail here as there are hundreds of resources readily available online.

Microsoft are embracing Docker and investing heavily in its future with their .NET Core and Azure implementations. Their Docker documentation is definitely one of the better resources and a worthwhile read:

Not everyone is a fan of Docker and not everything needs ‘Dockerising.’ However Docker containers can be useful when you need: 

  • Consistency between environments with no variations between instances
  • Portability – An ability to share an environment
  • Isolation – You want your environment to be agnostic of, and isolated from the host machine / Operating System
  • Repeatability – When you have a lot of people wanting to set up the same thing

Why use Docker for the Publishing Service?

The Publishing Service is a good candidate for containerisation. For large development teams or teams with masses of infrastructure, individually setting up the SPS can mean a large investment in time and resources. As long as your environment supports it, Docker can reduce that cost significantly.

Building the image

First up: clone the repo here:

The instructions are present in the file but for your convenience:

  • Download the SPS service zip file version that matches your requirements and place in the assets folder
  • Rename the zip to “Sitecore Publishing”
  • Edit the connection strings in the docker-compose.yml file to point at a SQL Server instance the image can access. (It will install the SPS table schema in these DBs)
  • Edit the hostname in the docker-compose.yml file to a hostname of your choice or leave the default
  • Open a command prompt and type: docker-compose up --build -d. After some time the image should be built and the container will be started with the name publishingservice.
  • Add the hostname sitecore.docker.pubsvc (or your custom host) to your hosts file
  • From here on in you can run or tear down the container with two easy commands: docker-compose up -d and docker-compose down

How does it work?

The example consists of 2 main parts:

  1. The docker-compose YAML file
  2. The Dockerfile

Docker Compose is a handy tool for orchestrating containers. It’s not mandatory in order to use the SPS, but I find it useful for supplying parameters to a Dockerfile, and it makes it easier to add companion containers later on. The docker-compose.yml file in the repo should be customised, allowing you to enter your Sitecore database connection strings and desired hostname for the Publishing Service instance.
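For illustration, the shape of such a compose file might be as follows (the service name, build argument names and values are placeholders, not the repo’s exact file):

```yaml
version: '3'
services:
  publishingservice:
    build:
      context: .
      args:
        CORE_CONNECTION: "user id=user;password=password;data source=sql;database=SitecoreCore"
        MASTER_CONNECTION: "user id=user;password=password;data source=sql;database=SitecoreMaster"
        WEB_CONNECTION: "user id=user;password=password;data source=sql;database=SitecoreWeb"
        HOST_NAME: "sitecore.docker.pubsvc"
    ports:
      - "80:80"
```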

The Dockerfile (the text-based instructions Docker uses to construct an image) used in my demo is where the heavy lifting takes place. It is based on the .NET Framework image from Microsoft which out of the box gives us:

  • Windows Server Core as the base OS
  • IIS 10 as the Web Server
  • .NET Framework
  • .NET Extensibility for IIS


As the Publishing Service currently has a dependency on the .NET Framework and is therefore not fully cross platform, we must use a Windows image as a base. There is a plan for version 5 to be built on top of Sitecore Host (Sitecore’s service architecture going forward). In theory this will be fully .NET Core based and thus cross platform, meaning we will be able to use a much smaller Linux container as a base image. Until then we’re stuck with bloated Windows!

On top of this base Windows image the Dockerfile then instructs Docker to:

  • Install the .NET Core Hosting Bundle which in turn installs the .NET Core runtime, library and ASP.NET Core module. It also allows us to run the SPS in IIS.
  • Copy the SPS zip file from the assets folder into the image and unzip it.
  • Set the connection strings to the Sitecore databases using the values passed in from the docker-compose file.
  • Upgrade the database schema as per the SPS install instructions 
  • Install the SPS into IIS using the hostname in the compose file.
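As a rough illustration of those steps (the base image tag, download URL, paths and argument names below are placeholders, not the repo’s actual Dockerfile):

```dockerfile
# escape=`
FROM mcr.microsoft.com/dotnet/framework/aspnet:4.8
SHELL ["powershell", "-Command"]

# Install the .NET Core Hosting Bundle (the URL is passed in as a build argument)
ARG HOSTING_BUNDLE_URL
RUN Invoke-WebRequest -Uri $env:HOSTING_BUNDLE_URL -OutFile hosting.exe; `
    Start-Process .\hosting.exe -ArgumentList '/quiet' -Wait; `
    Remove-Item .\hosting.exe

# Copy the SPS zip from the assets folder into the image and unzip it
COPY assets/ C:/install/
RUN Expand-Archive -Path 'C:/install/Sitecore Publishing Service.zip' -DestinationPath 'C:/inetpub/publishing'

# Set connection strings and upgrade the schema with values from docker-compose
ARG CORE_CONNECTION
RUN C:/inetpub/publishing/Sitecore.Framework.Publishing.Host.exe configuration setconnectionstring core $env:CORE_CONNECTION; `
    C:/inetpub/publishing/Sitecore.Framework.Publishing.Host.exe schema upgrade --force

# Create the IIS site bound to the hostname from the compose file
ARG HOST_NAME
RUN Import-Module WebAdministration; `
    New-Website -Name 'PublishingService' -PhysicalPath 'C:/inetpub/publishing' -Port 80 -HostHeader $env:HOST_NAME
```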

Running the container

Once the container is up and running, your host entry is set, and the Sitecore config is updated, you can check the service is up by navigating to the status URL in a browser:

http://<your host>/api/publishing/operations/status 

If the service is running you should see a status response of zero a la:

{ "status": 0 } 

Time to try a publish!

Job completed!

Adding a Preview publishing target to the Sitecore Publishing Service

Publishing speed out of the box has never been a strong point in Sitecore. A full site publish is akin to watching paint dry if your master database is even lightly used. Not only that, but any subsequent publish will be blocked by the first operation until it completes in its entirety. Not ideal but what can we do?

Step up the Sitecore Publishing Service (SPS), a separate site built on .NET Core designed to rapidly publish items via calls that go directly to the Sitecore databases.

If you’ve already decided that publishing is too slow and gone down the route of employing the Publishing Service, then bravo, good choice. If not what are you waiting for?!

The Publishing Service only ships with a default publish target – “Internet” (the web database). It’s therefore down to you to configure any custom targets such as Preview even if these are already set up in Sitecore.


I’m going to assume you’ve already:

  • Installed and configured all the prerequisites, the Publishing Service and the Sitecore module package. If you get stuck, Stephen has a very nice guide here (admittedly for v2.0, but pretty much all of the same info applies to v3 and v4)
  • Installed the .NET Core Windows Hosting Bundle
  • Set up the IIS Site
  • Added a host entry to your hosts file
  • Set up your core, master and web connection strings in SPS config
  • Successfully published to the web database through the Sitecore UI

Configuring a new publishing target

The default publishing targets are configured in the following file, relative to the installation folder:

/config/sitecore/publishing/sc.publishing.xml

If you open this you’ll see a <Targets> node where the publish target configuration is stored. However the /config/sitecore folder contains the default files provided by Sitecore and as sc.publishing.xml forms part of that it should not be modified.

As with a traditional Sitecore site, rather than edit the default files provided we will patch the configuration we need on top in a separate patch file. 
For our purposes we will call this sc.preview.xml but you could call this sc.targets.xml or anything else you desire as long as it has a “sc” prefix. Where we choose to locate this patch file has an effect on when it is loaded.

Configuration files are loaded from the folder structure in the following order:
1) Files are loaded from /config/sitecore/
2) Files are loaded from /config/global/ (if it exists)
3) Files are loaded from /config/{environment}/ (if it exists)
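Put visually, the layout looks like this (the environment folder matches whatever environment name the service runs under):

```
<SPS installation folder>
└── config
    ├── sitecore        (defaults shipped by Sitecore – do not modify)
    ├── global          (your patches, loaded in every environment)
    └── {environment}   (e.g. development, production – loaded last)
```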

Using the patch file below as a guide, for now drop the file in the /config/global/ folder. This will ensure it is always loaded in each type of environment:

<!-- Abridged: the nesting follows the structure of the default sc.publishing.xml -->
<Settings>
    <Sitecore>
        <Publishing>
            <Services>
                <DefaultConnectionFactory>
                    <Options>
                        <Connections>
                            <Preview>
                                <Type>Sitecore.Framework.Publishing.Data.AdoNet.SqlDatabaseConnection, Sitecore.Framework.Publishing.Data</Type>
                                <LifeTime>Transient</LifeTime>
                                <Options>
                                    <ConnectionString>${Sitecore:Publishing:ConnectionStrings:Preview}</ConnectionString>
                                </Options>
                            </Preview>
                        </Connections>
                    </Options>
                </DefaultConnectionFactory>
                <StoreFactory>
                    <Options>
                        <Stores>
                            <Targets>
                                <Preview>
                                    <Type>Sitecore.Framework.Publishing.Data.TargetStore, Sitecore.Framework.Publishing.Data</Type>
                                    <ConnectionName>Preview</ConnectionName>
                                    <Id><!-- Your Preview target item Guid goes here--></Id>
                                    <ScDatabase>preview</ScDatabase>
                                </Preview>
                            </Targets>
                        </Stores>
                    </Options>
                </StoreFactory>
            </Services>
        </Publishing>
    </Sitecore>
</Settings>

As you can see from the XML, we set up a Connection element and a Target element which references that connection and specifies the publishing target item itself in Sitecore.

Ensure you replace the comment in the XML below:

<Id><!-- Your Preview target item Guid goes here--></Id>

With the appropriate GUID from your Publish target:

Connection strings

As part of setting up the SPS you should find the /config/global/sc.connectionstrings.json file already present and populated. All you need to do is add the preview line shown below and update the connection string appropriately:

{
  "Sitecore": {
    "Publishing": {
      "ConnectionStrings": {
        "core": "user id=user;password=password;data source=(local)\\SQLEXPRESS;database=SitecoreCore;MultipleActiveResultSets=True;ConnectRetryCount=15;ConnectRetryInterval=1",
        "master": "user id=user;password=password;data source=(local)\\SQLEXPRESS;database=SitecoreMaster;MultipleActiveResultSets=True;ConnectRetryCount=15;ConnectRetryInterval=1",
        "web": "user id=user;password=password;data source=(local)\\SQLEXPRESS;database=SitecoreWeb;MultipleActiveResultSets=True;ConnectRetryCount=15;ConnectRetryInterval=1",
        "preview": "user id=user;password=password;data source=(local)\\SQLEXPRESS;database=SitecorePreview;MultipleActiveResultSets=True;ConnectRetryCount=15;ConnectRetryInterval=1"
      }
    }
  }
}

Alternatively you can issue a command to the Publishing Host executable which will write the connection string line for you:

Sitecore.Framework.Publishing.Host.exe configuration setconnectionstring preview "user id=user;password=password;data source=(local)\SQLEXPRESS;database=SitecorePreview;"

If you use a Json or INI file for your connection string configuration, carry out the equivalent change.

Multiple Active Results Sets (MARS)

Multiple Active Result Sets (MARS) is a feature that works with SQL Server to allow the execution of multiple batches on a single connection. SPS requires the use of MARS – The setting has to form part of your connection strings or you will get an error:

System.InvalidOperationException: The connection does not support MultipleActiveResultSets. 

If you’re adding the Preview connection strings manually, ensure the following is present as part of the connection string:

MultipleActiveResultSets=True

If you’re using the setconnectionstring command line approach it will get added automatically.

Upgrade the schema

Now the connection strings are hooked up we need to ensure the Preview database contains the necessary schema for the Publishing Service to function. Drop to a command line and execute the line:

Sitecore.Framework.Publishing.Host.exe schema upgrade --force 

This should result in something similar to the following output:

Schema Upgrade  
Upgrading all databases to version [ 2 ]  
Database: [ localhost\SitecorePreview ] … COMPLETE [ v0 => v2 ]  
Database: [ localhost\SitecoreCore ] … SKIPPED (Already v2) 
Database: [ localhost\SitecoreMaster ] … SKIPPED (Already v2)   
Database: [ localhost\SitecoreWeb ] … SKIPPED (Already v2)

Testing the service

We can now fire up the Publishing Service from the console which allows us to see debug messages and more verbose logging than in production mode. To run in development mode we run the following command from the installation folder with the environment flag:

Sitecore.Framework.Publishing.Host.exe --environment development

Since we’re using the development environment, our XML config changes must exist in the /config/development or the /config/global folder or they will not be incorporated. When the service loads, the list of registered targets is output to the console. If all is well and the config has patched correctly you should see your new Preview target output as per the screenshot:

Happy days!

If all looks good, now is the time to head back into Sitecore and attempt to publish some content to the Preview database.

Ditch dev and go into prod!

If you managed to successfully publish in development mode, now is the time to ensure you can run in production mode in IIS. Ensure you have the sc.preview.xml config file located in either /config/production or the /config/global subfolder for it to take effect. Since we made changes to the SPS configuration, you must restart the application pool for the modifications to be recognised.

Check the service status

Next up you’ll want to check the service is running properly with your changes. Navigate to the following URL in a browser to ensure the service is up and running:

http://<your host>/api/publishing/operations/status

If all is ok you should see a response similar to this:

{ "status": 0 } 

Repeat the publish you did earlier and check the Publishing Service logs to ensure that the service published the item correctly.

Now bask as the Content Editors buy you many gifts for their new found ability to publish to Preview in super fast time 😉


If you encounter problems or the service won’t start, the first thing to check is the logs stored in the \logs subfolder of the application folder.

1. Error: "Could not resolve stores: 'Preview' of type target"

The error message above indicates your configuration has not been patched in correctly; check the file is present in the correct environment folder with the “sc.” prefix. For troubleshooting purposes you could take a backup of /config/sitecore/publishing/sc.publishing.xml and temporarily edit the original file with your configuration to help narrow down the issue. If the config works, it’s probably a patching issue. Don’t forget to restart the service after any configuration change!

2. System.AggregateException: One or more errors occurred. ---> System.ArgumentOutOfRangeException: Specified argument was out of the range of valid values.
 Parameter name: No connection was registered with name 'Preview'

This means that the XML in your config patch is not quite right. The target configured must match the name of the connection XML so double check for typos in your patch file.

This blog uses images designed by Freepik