When Fast query won’t play ball…

A colleague recently asked me for advice on an issue he had whereby a fast query was not returning all the items from a particular folder in the content tree.

Now leave aside for a minute the fact that fast query was being used dubiously (it was an ancient legacy part of the system with no remit given to refactor right now).

The query in question was a simple descendant folder search based on the template GUID:

fast:/sitecore/content/Home/Whatever//*[@@templateid='<guid>']

  • The items in question were all published and present in the web database.
  • A manual SQL query of the Items table in the web database brought back the items, with no obvious oddities in the records.
  • We duplicated an existing missing item and published it; that too was not brought back by the query.

This rang a bell – I remembered reading some time ago about odd behaviour when the Descendants table was corrupted or in an inconsistent state.

According to the documentation, if the FastQueryDescendantsDisabled setting has ever been changed, you need to rebuild the Descendants table.

To do this, navigate to the Sitecore Control Panel -> Databases -> Clean Up Database and run the Clean Up Database wizard.

And whadda ya know – all results were now being returned. 😉

Note: This command actually does more than rebuild the Descendants table, it also cleans up invalid language data, orphaned fields and items and more. See this nice StackExchange post for the full list of tasks that get carried out.

Now to convince the powers that be, that fast needs to be killed 😛

Running the Sitecore Publishing Service in Docker

In my talk at the Leeds Sitecore User Group this month, I demoed the Sitecore Publishing Service running in Docker and discussed how we can harness that for easy roll-out and migration to developers, through test environments and to the Cloud.

What is Docker?

Docker has really taken hold in the last few years with a massive amount of time and money being invested in the technology. I won’t go into detail here as there are hundreds of resources readily available online.

Microsoft are embracing Docker and investing heavily in its future with their .NET Core and Azure implementations. Their Docker documentation is definitely one of the better resources and a worthwhile read.

Not everyone is a fan of Docker and not everything needs ‘Dockerising’. However, Docker containers can be useful when you need:

  • Consistency between environments with no variations between instances
  • Portability – An ability to share an environment
  • Isolation – You want your environment to be agnostic of, and isolated from the host machine / Operating System
  • Repeatability – When you have a lot of people wanting to set up the same thing

Why use Docker for the Publishing Service?

The Publishing Service is a good candidate for containerisation. For large development teams or teams with masses of infrastructure, individually setting up the SPS can mean a large investment in time and resources. As long as your environment supports it, Docker can reduce that cost significantly.

Building the image

First up: clone the repo here: https://github.com/mjftechnology/Sitecorium.Docker.PublishingService

The instructions are present in the Readme.md file but for your convenience:

  • Download the SPS service zip file version that matches your requirements and place in the assets folder
  • Rename the zip to “Sitecore Publishing Service.zip”
  • Edit the connection strings in the docker-compose.yml file to point at a SQL Server instance the image can access (it will install the SPS table schema in these DBs)
  • Edit the hostname in the docker-compose.yml file to a hostname of your choice, or leave the default
  • Open a command prompt and type: docker-compose up --build -d. After some time the image should be built and the container will be started with the name publishingservice.
  • Add the hostname sitecore.docker.pubsvc (or your custom host) to your hosts file
  • From here on in you can run or tear down the container with two easy commands: docker-compose up -d and docker-compose down
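For reference, the compose file looks something along these lines. The service name, build args and connection-string values here are illustrative placeholders, so treat the docker-compose.yml in the repo as the source of truth:

```yaml
# Illustrative sketch only – the docker-compose.yml in the repo is authoritative
version: '3'
services:
  publishingservice:
    build:
      context: .
      args:
        HOSTNAME: sitecore.docker.pubsvc
    image: sitecore-publishing-service
    ports:
      - "80:80"
    environment:
      # Point these at a SQL Server instance the container can reach
      CORE_CONNECTION: "user id=user;password=password;data source=sqlserver;database=SitecoreCore;MultipleActiveResultSets=True"
      MASTER_CONNECTION: "user id=user;password=password;data source=sqlserver;database=SitecoreMaster;MultipleActiveResultSets=True"
      WEB_CONNECTION: "user id=user;password=password;data source=sqlserver;database=SitecoreWeb;MultipleActiveResultSets=True"
```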

How does it work?

The example consists of 2 main parts:

  1. The docker-compose YAML file
  2. The Dockerfile

Docker Compose is a handy tool for orchestrating containers. It’s not mandatory in order to use the SPS, but I find it useful for supplying parameters to a Dockerfile, and it makes it easier to add companion containers later on. The docker-compose.yml file in the repo should be customised, allowing you to enter your Sitecore database connection strings and desired hostname for the Publishing Service instance.

The Dockerfile (the text-based instructions Docker uses to construct an image) used in my demo is where the heavy lifting takes place. It is based on the .NET Framework image from Microsoft, which out of the box gives us:

  • Windows Server Core as the base OS
  • IIS 10 as the Web Server
  • .NET Framework
  • .NET Extensibility for IIS

*Note*

As the Publishing Service currently has a dependency on the .NET Framework and is therefore not fully cross platform, we must use a Windows image as a base. There is a plan for version 5 to be built on top of Sitecore Host (Sitecore’s service architecture going forward). In theory this will be fully .NET Core based and thus cross platform, meaning we will be able to use a much smaller Linux container as a base image. Until then we’re stuck with bloated Windows!

On top of this base Windows image the Dockerfile then instructs Docker to:

  • Install the .NET Core Hosting Bundle which in turn installs the .NET Core runtime, library and ASP.NET Core module. It also allows us to run the SPS in IIS.
  • Copy the SPS zip file from the assets folder into the image and unzip it.
  • Set the connection strings to the Sitecore databases using the values passed in from the docker-compose file.
  • Upgrade the database schema as per the SPS install instructions 
  • Install the SPS into IIS using the hostname in the compose file.
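Sketching those steps as a Dockerfile gives something like the following. This is a heavily simplified illustration, not the actual Dockerfile from the repo – the base image tag, installer file name and paths are all assumptions:

```dockerfile
# Simplified sketch – see the repo for the real Dockerfile
FROM mcr.microsoft.com/dotnet/framework/aspnet:4.7.2
SHELL ["powershell", "-Command"]

ARG HOSTNAME=sitecore.docker.pubsvc

# 1. Install the .NET Core Hosting Bundle (installer assumed present in assets)
COPY assets/dotnet-hosting.exe C:/install/
RUN Start-Process C:/install/dotnet-hosting.exe -ArgumentList '/quiet','/norestart' -Wait

# 2. Copy in and unpack the Publishing Service package
COPY ["assets/Sitecore Publishing Service.zip", "C:/install/sps.zip"]
RUN Expand-Archive C:/install/sps.zip -DestinationPath C:/publishing

# 3 & 4. Set connection strings and upgrade the schema via the Host executable, e.g.:
# RUN C:/publishing/Sitecore.Framework.Publishing.Host.exe schema upgrade --force

# 5. Register the site in IIS under the chosen hostname
RUN Import-Module WebAdministration; New-Website -Name 'PublishingService' -Port 80 -HostHeader $env:HOSTNAME -PhysicalPath 'C:\publishing'
```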

Running the container

Once the container is up and running, your host entry is set and the Sitecore config updated, you should be able to check that the service is healthy by navigating to the status URL in a browser:

http://<your host>/api/publishing/operations/status 

If the service is running you should see a status response of zero:

{ "status": 0 } 
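If you want to script this check (for example as a simple readiness probe), a small sketch along these lines works – the host name is whatever you configured earlier:

```python
import json
import urllib.request

def is_service_ready(payload: str) -> bool:
    """Parse the status endpoint response; a status of 0 means healthy."""
    return json.loads(payload).get("status") == 0

def check_status(host: str) -> bool:
    """Call the SPS status endpoint and report whether the service is up."""
    url = f"http://{host}/api/publishing/operations/status"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return is_service_ready(resp.read().decode("utf-8"))

# Example with the documented healthy response body:
print(is_service_ready('{ "status": 0 }'))  # True
```

Calling `check_status("sitecore.docker.pubsvc")` (or your custom host) performs the same check the browser does.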

Time to try a publish!

Job completed!

Adding a Preview publishing target to the Sitecore Publishing Service

Publishing speed out of the box has never been a strong point in Sitecore. A full site publish is akin to watching paint dry if your master database is even lightly used. Not only that, but any subsequent publish will be blocked by the first operation until it completes in its entirety. Not ideal but what can we do?

Step up the Sitecore Publishing Service (SPS), a separate site built on .NET Core designed to rapidly publish items via calls that go directly to the Sitecore databases.

If you’ve already decided that publishing is too slow and gone down the route of employing the Publishing Service, then bravo, good choice. If not what are you waiting for?!

The Publishing Service only ships with a default publish target – “Internet” (the web database). It’s therefore down to you to configure any custom targets such as Preview even if these are already set up in Sitecore.

Assumptions 

I’m going to assume you’ve already:

  • Installed and configured all the prerequisites, the Publishing Service and the Sitecore module package. If you get stuck, Stephen has a very nice guide here (http://www.stephenpope.co.uk/publishing) – admittedly for v2.0, but pretty much all of the same info applies to v3 and v4
  • Installed the .NET Core Windows Hosting Bundle
  • Set up the IIS Site
  • Added a host entry to your hosts file
  • Set up your core, master and web connection strings in SPS config
  • Successfully published to the web database through the Sitecore UI

Configuring a new publishing target

The default publishing targets are configured in the file below relative to the installation folder:

 /config/sitecore/publishing/sc.publishing.xml

If you open this you’ll see a <Targets> node where the publish target configuration is stored. However, the /config/sitecore folder contains the default files provided by Sitecore, and as sc.publishing.xml forms part of that it should not be modified.

As with a traditional Sitecore site, rather than edit the default files provided we will patch the configuration we need on top in a separate patch file. 
For our purposes we will call this sc.preview.xml but you could call this sc.targets.xml or anything else you desire as long as it has a “sc” prefix. Where we choose to locate this patch file has an effect on when it is loaded.

Configuration files are loaded from the folder structure in the following order:
1) Files are loaded from /config/sitecore/
2) Files are loaded from /config/global/ (if it exists)
3) Files are loaded from /config/{environment}/ (if it exists)

Using the patch file below as a guide, for now drop the file in the /config/global/ folder. This will ensure it is always loaded in each type of environment:

<Settings>
    <Sitecore>
        <Publishing>
            <Services>
                <DefaultConnectionFactory>
                    <Options>
                        <Connections>
                            <Preview>
                                <Type>Sitecore.Framework.Publishing.Data.AdoNet.SqlDatabaseConnection, Sitecore.Framework.Publishing.Data</Type>
                                <LifeTime>Transient</LifeTime>
                                <Options>
                                    <ConnectionString>${Sitecore:Publishing:ConnectionStrings:Preview}</ConnectionString>
                                    <DefaultCommandTimeout>120</DefaultCommandTimeout>
                                    <Behaviours>
                                        <backend>sql-backend-default</backend>
                                        <api>sql-api-default</api>
                                    </Behaviours>
                                </Options>
                            </Preview>
                        </Connections>
                    </Options>
                </DefaultConnectionFactory>
                <StoreFactory>
                    <Options>
                        <Stores>
                            <Targets>
                                <Preview>
                                    <Type>Sitecore.Framework.Publishing.Data.TargetStore, Sitecore.Framework.Publishing.Data</Type>
                                    <ConnectionName>Preview</ConnectionName>
                                    <FeaturesListName>TargetStoreFeatures</FeaturesListName>
                                    <Id><!-- Your Preview target item Guid goes here--></Id>
                                    <ScDatabase>preview</ScDatabase>
                                </Preview>
                            </Targets>
                        </Stores>
                    </Options>
                </StoreFactory>
            </Services>
        </Publishing>
    </Sitecore>
</Settings>

As you can see from the XML, we set up a Connection element and a Target element that references this connection and specifies the publishing target item itself in Sitecore.

Ensure you replace the comment in the XML below:

<Id><!-- Your Preview target item Guid goes here--></Id>

With the appropriate GUID from your publish target item.

Connection strings

As part of setting up the SPS you should find the /config/global/sc.connectionstrings.json file already present and populated. All you need to do is add the preview line below and update the connection string appropriately (note that backslashes must be escaped in JSON):

{
  "Sitecore": {
    "Publishing": {
      "ConnectionStrings": {
        "core": "user id=user;password=password;data source=(local)\\SQLEXPRESS;database=SitecoreCore;MultipleActiveResultSets=True;ConnectRetryCount=15;ConnectRetryInterval=1",
        "master": "user id=user;password=password;data source=(local)\\SQLEXPRESS;database=SitecoreMaster;MultipleActiveResultSets=True;ConnectRetryCount=15;ConnectRetryInterval=1",
        "web": "user id=user;password=password;data source=(local)\\SQLEXPRESS;database=SitecoreWeb;MultipleActiveResultSets=True;ConnectRetryCount=15;ConnectRetryInterval=1",
        "preview": "user id=user;password=password;data source=(local)\\SQLEXPRESS;database=SitecorePreview;MultipleActiveResultSets=True;ConnectRetryCount=15;ConnectRetryInterval=1"
      }
    }
  }
}

Alternatively you can issue a command to the Publishing Host executable which will write the connection string line for you:

Sitecore.Framework.Publishing.Host configuration setconnectionstring preview "user id=user;password=password;data source=(local)\SQLEXPRESS;database=SitecorePreview;"

If you use a JSON or INI file for your connection string configuration, carry out the equivalent change.

Multiple Active Result Sets (MARS)

Multiple Active Result Sets (MARS) is a SQL Server feature that allows the execution of multiple batches on a single connection. SPS requires MARS – the setting has to form part of your connection strings or you will get an error:

System.InvalidOperationException: The connection does not support MultipleActiveResultSets. 

If you’re adding the Preview connection strings manually, ensure the following is present as part of the connection:

MultipleActiveResultSets=True 

If you’re using the setconnectionstring command line approach it will get added automatically.
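If you’re templating connection strings in a script of your own, a tiny helper can guard against forgetting the flag. This is a hypothetical convenience, not part of SPS – the service only cares that the flag is present:

```python
def ensure_mars(connection_string: str) -> str:
    """Append MultipleActiveResultSets=True if the setting is missing.

    Hypothetical helper for build/deploy scripts.
    """
    cs = connection_string.rstrip(";")
    if "multipleactiveresultsets" in cs.lower():
        return connection_string  # already set, leave untouched
    return cs + ";MultipleActiveResultSets=True"

print(ensure_mars("user id=user;data source=(local);database=SitecorePreview"))
# user id=user;data source=(local);database=SitecorePreview;MultipleActiveResultSets=True
```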

Upgrade the schema

Now the connection strings are hooked up we need to ensure the Preview database contains the necessary schema for the Publishing Service to function. Drop to a command line and execute the line:

Sitecore.Framework.Publishing.Host.exe schema upgrade --force 

This should result in something similar to the following output:

Schema Upgrade  
Upgrading all databases to version [ 2 ]  
Database: [ localhost\SitecorePreview ] … COMPLETE [ v0 => v2 ]  
Database: [ localhost\SitecoreCore ] … SKIPPED (Already v2) 
Database: [ localhost\SitecoreMaster ] … SKIPPED (Already v2)   
Database: [ localhost\SitecoreWeb ] … SKIPPED (Already v2)

Testing the service

We can now fire up the Publishing Service from the console which allows us to see debug messages and more verbose logging than in production mode. To run in development mode we run the following command from the installation folder with the environment flag:

Sitecore.Framework.Publishing.Host.exe --environment development

Since we’re using the development environment, our XML config changes must exist in the /config/development or /config/global folder or they will not be incorporated. When the service loads, the list of registered targets is output to the console. If all is well and the config has patched correctly, you should see your new Preview target in that output.

Happy days!

If all looks good, now is the time to head back into Sitecore and attempt to publish some content to the Preview database.

Ditch dev and go into prod!

If you managed to successfully publish in development mode, now is the time to ensure you can run in production mode in IIS. Ensure you have the sc.preview.xml config file located in either the /config/production or /config/global subfolder for it to take effect. Since we made changes to the SPS configuration, you must restart the application pool for the modifications to be recognised.

Check the service status

Next up you’ll want to check the service is running properly with your changes. Navigate to the following URL in a browser to ensure the service is up and running:

http://<your host>/api/publishing/operations/status

If all is ok you should see a response similar to this:

{ "status": 0 } 

Repeat the publish you did earlier and check the Publishing Service logs to ensure that the service published the item correctly.

Now bask as the Content Editors buy you many gifts for their new found ability to publish to Preview in super fast time 😉

Troubleshooting

If you encounter problems or the service won’t start, the first thing to check is the logs stored in the \logs subfolder of the application folder.

1. Error: "Could not resolve stores: 'Preview' of type target"

The error message above indicates your configuration has not been patched in correctly; check the file is present in the correct environment folder with the “sc.” prefix. For troubleshooting purposes you could take a backup of /config/sitecore/publishing/sc.publishing.xml and temporarily edit the original file with your configuration to help narrow down the issue. If that config works, it’s probably a patching issue. Don’t forget to restart the service after any configuration change!

2. System.AggregateException: One or more errors occurred. ---> System.ArgumentOutOfRangeException: Specified argument was out of the range of valid values.
 Parameter name: No connection was registered with name 'Preview'

This means that the XML in your config patch is not quite right. The target’s ConnectionName must match the name of the connection element, so double-check for typos in your patch file.

Reflections on SUGCON 2019

Sitecore User Group Conference

Having missed last year’s SUGCON in Berlin due to family commitments, I had no excuse not to visit the conference in London this year. Well, apart from a small family commitment in the form of my son, due to arrive in a week! Still, it’s only a four-hour trip home should he decide to arrive early 😉

I took the opportunity to go in early on the morning of Day 1 and update my Sitecore certification to version 9.1, kindly stewarded by Tamas and the team. Once that was done it was time to grab some food, meet up with friends old and new, and catch up with what’s going on in the world of Sitecore.

Azure Devops – Donovan Brown

The first talk of the day was from Donovan Brown on Azure DevOps. This was an enlightening talk, further strengthening the case for Azure DevOps in the coming months and years when it comes to dev/test workflow. I think it will only be a matter of time before even the most ardent of detractors see the benefits Cloud technology can bring to repeatability, reliability and cost savings. That is, perhaps, until that fateful day when the providers decide to hit their captive audience with price rises 😛

I attended many other talks of high quality, but the stand out ones for me were:

Sitecore Host – Architecture and Plugin Design – Kieran Marron and Stephen Pope

https://www.sugcon.eu/wp-content/uploads/2019/04/SUGCON-Europe-2019-Kieran-Marron-Stephen-Pope-Sitecore-Host-Architecture-Plugin-Design.pdf

Sitecore Host runs on .NET Core and acts as a base framework for other services, e.g. Identity Server and Universal Tracker, with the Publishing Service soon to be migrated too. Host takes care of many common concerns such as dependency injection, configuration management, route registration and more. The idea behind it is to provide a consistent experience for any application on the Host, from installation to configuration, even allowing command line configuration without the need for a user interface. This is what we’ve been wanting for a long time, and I find it reassuring and exciting that Sitecore has embraced splitting the monolith into its constituent parts. For years it felt like Sitecore was gradually falling behind as it remained so tightly coupled to the .NET Framework and had a huge deployment real estate, but over the past couple of years the move to split out parts of the beast has rapidly gained pace, and I applaud it. I will be interested to see what happens in the next few years and which candidates are picked to receive the Sitecore Host treatment. My only concern is whether the DevOps guys are sucking through gritted teeth at what will soon be landing on their plate. Perhaps with the adoption of Docker, this side of things might improve, but let’s wait and see…

Taming the Marketing Automation Engine – Nick Hills

https://www.sugcon.eu/wp-content/uploads/2019/04/SUGCON-Europe-2019-Nick-Hills-Taming-the-ma-engine.pdf

This was a timely talk for me as I’m currently looking at harnessing the power of Marketing Automation for my current client. Nick really cut to the heart of the matter and delivered an engaging talk covering pitfalls and common issues.

Even as a developer with a limited appreciation and grasp of marketing, I’m always astounded by the potential and power of Marketing Automation in Sitecore, so to marketers it must be a dream. Such is its power and ease of use, I really do think Sitecore should have a compelling demo video on their homepage; perhaps a whistlestop tour employing a campaign to engage customers in a plan, upsell and cross-sell. I think it would really captivate marketers from the off and let the customers roll in!

Nick covered Automation from a marketer’s perspective and then from a developer’s viewpoint. For me, besides personalisation, this is the single area of Sitecore that stands out as having the ability to make a massive difference in potential revenue for customers. Once you see its ease of use and power, you can imagine the huge possibilities for revenue generation and the addition of business value, e.g. the system could:

  • Send an email reminder to customers if they abandon a basket, potentially capturing some lost revenue.
  • Send post-purchase experience countdowns and attempt cross-sells/upsells
  • Send a “customers also bought” email or employ Machine Learning to suggest items that match the buying habits and likely interests of the user.
  • Integrate with a weather API for companies that benefit from particular types of weather. It could be used to suggest bringing an umbrella or suncream or upselling a day trip such as a water park on a nice day or an aquarium on a wet day. A car hire company could use weather data to upsell a 4×4 vehicle for wet or snowy weather
  • And so on…

JSS immersion – lessons learned and looking ahead – Anastasiya Flynn

https://www.sugcon.eu/wp-content/uploads/2019/04/SUGCON-Europe-2019-Anastasiya-Flynn-JSS-Immersion.pdf

Anastasiya covered many lessons in her talk, from scaffolding out components and structuring your project, to debugging techniques. As a complete n00b to JSS, I was hoping to glean hard-won tips and tricks in this session that would kick-start my learning. However, on reflection I would really have benefited from studying React and playing about with JSS beforehand, as many of the concerns here were difficult to fully grasp without a base grounding in the tech.

She explained how the component factory worked (which went over my head having never used React or JSS). I intend to return to her slides when I get time to delve into JSS with either React or VueJS.

Debugging tips were shared in the form of the Chrome debugging tools and useful extensions for React or Vue, though I was surprised to see console.log being recommended as a fallback option. I know it has always been a quick and easy way to output information, but it does make me wonder whether the stack is now so complex that we are still waiting for the debugging tools to catch up.

I’m looking forward to tinkering with this in my spare time but the big question still lingering in my mind is, what if you don’t want to go completely greenfield? Many clients don’t have the budget to scrap their site and build from scratch and I’ve not read much in the way of blogs on how to piecemeal migrate to JSS or combine JSS with traditional Sitecore sites. It’s an area I’m very keen to investigate…

PaaS it on – Learnings from a year of Sitecore on Azure PaaS – Criss Titschinger

https://www.sugcon.eu/wp-content/uploads/2019/04/SUGCON-Europe-2019-Criss-Titschinger-PaaS-it-On-Learnings-From-a-Year-on-Azure-PaaS.pdf

I enjoyed this talk from Criss; what it brought home was how complicated a full Sitecore 9 deployment to the cloud can be versus Sitecore 8, especially in a Blue/Green setup. Criss brought many of his learning experiences to the table. A few highlights for me were:

  • Recommending the use of TFS 2018/Azure DevOps and service principals rather than publish profiles. In Blue/Green setups it pays to be mindful of the fact that the CD server will be shared between both instances, which obviously puts a bigger strain on the resources of that machine.
  • Diagnosing issues with a specific instance in a cluster has always been a pain. In the past I’ve used a HTTP Header to identify servers. Criss suggested using the ARRAffinity Cookie together with browser plugins like EditThisCookie to allow you to edit the cookie and load a page on each instance.
  • The Azure portal gives you availability and performance stats, plus CPU and memory reports. You can even kill the process from here.
  • Application Insights is a great tool for searching logs by time, or for a specific exception but pay attention to the amount of data being harvested, especially from test servers as you will be charged for all the data.
  • Azure Search is very easy to set up but it is limited to 1000 fields per index. This might necessitate excluding unnecessary fields in the ContentSearch configs, otherwise building an index can hang. Azure Search can be expensive, so it might be an idea to use Solr for local development purposes.
  • Azure SQL audit / threat detection works very well and detects all failed logins or firewall anomalies.
  • Bandwidth can be pricey for bringing content down – think about media and putting it into a CDN to save costs.
  • If you can’t find what you’re looking for in the Portal, try resources.azure.com. The option to use HTTP/2 was active in Resources two weeks before it was available in the Portal.
  • Autoscaling sounds great in principle, but it means cold starts. And cold starts mean there is a period of time when your site is unavailable to serve requests, which is less than ideal. It is therefore advisable to reduce startup time as much as possible by streamlining Sitecore, e.g. making use of the Prefetch cache and precompiling views.

I must admit it was quite worrying to hear of Microsoft’s seeming indifference to swapping out the instances making up part of his cluster. With Sitecore’s cold start time, this was enough to cause downtime, and I’m very surprised this is still an issue with Azure at this point in time. Issues like this are always a worry for me: it is all well and good recommending the cloud to a client, but if they end up experiencing downtime and lost revenue through no fault of your own, it looks bad. Stakeholders don’t care about the minutiae of machines or containers spinning up and down; they only see lost revenue and bonuses disappearing into thin air, and much shouting will occur. One of the audience did suggest some solutions after the talk, but I don’t feel it should be up to the client to have to monitor services with Application Insights just because Microsoft decide to rejig your services on a whim.
Personally, before moving to the cloud I wouldn’t feel comfortable without producing a full end-to-end plan with rollback and fallback options. In my opinion it makes sense to retain your on-prem setup to allow testing in parallel, and it also provides a fallback should the cloud setup not fully fulfil expectations early on. It takes years to gain a good reputation and a minute to lose it.

10x your Sitecore development – Mark Cassidy

https://www.sugcon.eu/wp-content/uploads/2019/04/SUGCON-Europe-2019-Mark-Cassidy-10x-your-Sitecore-Development.pdf

It was a pleasure to finally hear one of Mark’s talks in person. A lot of what he described really resonated with me with regards to overengineering Sitecore solutions. Although we often try to produce the most flexible design for a client, beneath all the noise it is often time to market which is their number one priority. Get the product out of the door, then iterate. I am a big fan of this mentality, as opposed to taking much longer to build the “perfect” solution before shipping. What was the perfect solution at the time often ends up changing, and so the time spent up front is often money taken from the budget that would be better spent elsewhere. By taking our ego as architects out of the equation when delivering all-singing, all-dancing solutions, we can deliver a product better suited to our clients’ needs much more rapidly.

Having said all that, although this approach works well for agency clients and small customers, with enterprise clients and larger Sitecore solutions with a huge real estate it can pay to slow things down. Often I will be tasked to work on a Sitecore site that has had years of development behind it, and although time to market is still important here, other factors such as performance and reliability under stress come to the fore. Nevertheless, I usually like to offer clients short, medium and long term solutions to a given problem.

I can appreciate the use of the native Sitecore API as a default approach, especially when you have big teams of .NET developers or even newly certified developers. I have seen people get lost quite easily when an ORM comes into the equation, as it is not something they will find in the Sitecore documentation or training. It also hammers home the point of YAGNI (You Aren’t Gonna Need It): do you actually need an ORM to achieve the MVP of the product, or can it be shipped without? Personally I feel the advantages of automapping to strongly typed models and decoupling from the Sitecore API are beneficial on most occasions, but we should still stop and think. Mark put across a very valid point that the Sitecore API is very stable and hasn’t significantly changed in around 10 years. This is a big point that many architects seem to ignore. Take Glass for example: it’s a fantastic product, provided free of charge, and I use it regularly. However, updating from version 4 to 5 does mean there is work to do from a development perspective, and hence additional testing too. Can your client absorb the cost of the inevitable refactor? It’s something we rarely talk about, but I feel it should be a consideration when choosing an ORM. We should set the expectation that it won’t be maintenance-free forever.

Using xDB at scale – Mike Edwards

This talk was a refreshing take from Mike on how to apply presentation and profile cards when dealing with a massive amount of content pages (think thousands). Solving this maintenance nightmare involved creating a centralised config item in Sitecore with a rules field. The rules engine is run against this field on every request and if the predicate evaluates to true, the profile card is assigned. 

Personalising content en masse also presents problems, and Mike talked through using a centralised item with personalisation applied which is accessible to the Content Editors. These renderings get injected into the layout XML at run time via the mvc.getXmlBasedLayoutDefinition pipeline, and I’ve seen this approach used to good effect by clients before. Although you take the hit on every request, with judicious caching you save the Content Editors weeks of time navigating through dialogs in the Content Editor.

In my experience, making Content Editors’ lives easier is often not the number one priority when developing solutions. Developers are often far removed from, and often never meet, the editors, when they really should interact closely. Often the closest they get is a ticket in their sprint backlog or kanban board from the Content Editors requesting a feature or some additional functionality. By putting ourselves in the Content Editors’ shoes we can better understand their frustrations. I try to remind myself that a CMS is designed to be used by non-technical users, and I should try to empower people to do their jobs without constant developer support.

Sitecore 9 Architecture 101 – Thomas Eldblom

https://www.sugcon.eu/wp-content/uploads/2019/04/SUGCON-Europe-2019-Thomas-Eldblom-Sitecore-9-Architecture-101.pdf

Thomas started his talk with a cracking trip down memory lane, showing how the Sitecore user interface has evolved over the years. I’d love to hear more from people about what it was like working with the early versions – Site Core, and SiteCore.

Thomas showed how the Sitecore real estate has expanded from a single role in version 4 to over 50 roles in 9.1. There are now 18 databases, 12 indexes and over 20 services. The advent of microservices has obviously caused this to balloon in recent years, but it’s becoming a challenge to stay on top of both development and architectural concerns. We can mitigate this by investing in continuous learning, and Martina Welander has done a staggering amount of work making massive improvements to the Sitecore documentation over the past few years. It is much easier now to locate official documentation, and you can easily target the version of Sitecore you are interested in to get version-specific information. The Master Sitecore YouTube channel is also a valuable resource.

The talk progressed through how the 9.1 architecture works at a high level in terms of CM, CD, authentication, publishing, processing and tracking. Thomas discussed Sitecore Omni, released as part of Sitecore 9.1, which is a range of products supporting headless scenarios where you want to decouple the delivery of content from its rendering. This allows front-end developers to build out a site and interface while still being able to harness the power of Sitecore, versus the traditional approach that is heavily dependent on .NET developer resources.

I must say I reflected a lot on this presentation, and although the rate of progress is rapid and exciting in the Sitecore space, I can’t help but feel a little sad that the combined Sitecore Solution Architect / Developer role feels like it’s going to go the way of the dinosaur. There is simply too much to the platform to be an expert in everything. Perhaps over time it will become necessary to specialise, in the same way that a Full Stack Developer can’t reasonably be an expert in the entire stack nowadays. They can be excellent in some technologies and competent in others, able to quickly adapt, as they should, but the estate is simply too large to maintain an up-to-date, intricate knowledge of all aspects. On the other hand, I do love that there is so much to learn with this platform; it is constantly evolving and definitely not a dull technology stack from that point of view.

I struggled with Sitecore on Docker so you don’t have to – Sean Holmesby

Having recently got into tinkering with Docker and Sitecore, I found myself nodding along with Sean’s talk, having gone through all the stages he did. I can really see the advantage of using Docker with Sitecore to spin containers and environments up and down quickly. Ideal for demos or sharing between developers, it cuts down on the time spent setting up environments. I can see Docker coming into its own in the near future as more and more Sitecore services are migrated to .NET Core. Linux containers have a small footprint, so with .NET Core being cross platform it sounds like the perfect pairing. Many times I have known developers take a week to set up their local environment (no joke). With Docker in the picture there is no longer any excuse – with a “docker compose up” you’re ready to go.

Measure if you want to go faster – Jeremy Davis

https://www.sugcon.eu/wp-content/uploads/2019/04/SUGCON-Europe-2019-Jeremy-Davis-Measure-if-you-want-to-go-faster.pdf

I love a good talk on performance and metrics, but I realised shortly after Jeremy started his talk that I’d already watched the video last year! Doh! It was a nice recap anyway, delivered succinctly, and I liked his judicious use of the debugging and profiling tools in VS 2017. I wholeheartedly agree with Jeremy when it comes to measuring the impact of our changes. How can you be confident your code performs if you don’t measure it? Historically I’ve tended to use JMeter/Gatling/dotTrace and a scattering of Sitecore tools to identify bottlenecks and the potential impact of code changes. I will be adding the Visual Studio Profiler to that list (VS2019 caveat notwithstanding).

Sitecore 9.2: The Hidden Bits – Pieter Brinkman

Pieter rounded off the conference with a brief chat about the upcoming features in 9.2, which will be released this quarter. A few highlights for me were:

  • Horizon – the replacement for the editing experience in Sitecore. This will be tweaked based on feedback from MVPs and the community. I’m very much looking forward to seeing this in action.
  • Sitecore Host – this will be upgraded to the latest version of .NET Core
  • JSS, SXA and Sitecore Forms integration, as mentioned by Adam Weber and Kam Figy in their presentation on JSS.
  • Further work on Helix principles with new samples and designs with talk of a simpler structure for projects in Visual Studio.
  • Rainbow serialisation – merging changes with the current serialisation format, which contains a file length, has always been a pain as it requires the file length to be updated manually post-merge.
  • Personalisation report – this looks like it will provide an easy way of displaying details of the personalisation currently in effect. I have a feeling this will be useful not only for Marketers but also for Developers making performance optimisations.
  • Sitecore Install Assistant (SIA) – this is effectively a GUI wrapper for SIF, providing easy installations for developers and non-technical users, and I’m looking forward to trying it out. Of all the things Pieter talked about, I have to say I am most looking forward to SIA, for reasons I’ve talked about here. SIA will initially be available for XP0 only, but I feel that is where it is needed most and I will welcome it with open arms 🙂

Missed sessions
Unfortunately, as many of the sessions ran in parallel, I missed out on some interesting talks. However, the slides are currently online, hopefully with accompanying videos to follow shortly: https://www.sugcon.eu/video-downloads/

I’m looking forward to watching some of the great talks I missed.

Summing up

Having had time to reflect on the conference, the future of Sitecore looks very bright indeed. The massive strides in architecture and feature set demonstrated at SUGCON show just how much effort is being put into the platform.

On a personal level I enjoyed visiting the Big Smoke. I am a big fan of architecture so I am always amazed at some of the buildings coming from my small backward town up north.

It was also great to catch up with friends and see what they’ve been up to personally and professionally, as well as meet new ones in a very positive environment. The coffee and food were decent and the sessions ran like clockwork thanks to the efficient team of volunteers.

Looking forward to next year, wherever that may be!

Installing Sitecore? How hard can it be?

The sheer amount of work that has gone into creating the Sitecore Installation Framework (SIF) to date, and its power and flexibility, is mega impressive. It is a masterclass in PowerShell, and kudos to the team for the amount of effort and testing that must have gone into it.

However compared to the old days of spinning up a Sitecore instance in Sitecore Instance Manager (SIM) and having an installation up and running in minutes, the process has become fairly complex and laborious.

A local installation can often consist of executing the installer, Googling the error message then figuring out whether you need to manually roll back parts of the installation and rerun the whole installation, or issue a command to skip steps.

I must admit it does make me slightly uncomfortable to have to add the password for a SQL Server login with sysadmin privileges into a config file in plain text. These files often end up not getting deleted when they should, and can have security implications depending on the environment. Small niggles such as a SQL user with a dollar character in their password can cause problems, as this has to be escaped in the SIF script. It sounds trivial, but it’s another annoyance for users. Similarly, at the time of writing (SIF 2.1) there seems to be no way of automatically creating a certificate and binding it to the Sitecore instance, or of removing it on uninstall.

Now it is very understandable that, with the Sitecore real estate widening as it becomes more and more service oriented, there needed to be changes to the way the product is installed, but for simple installations and demo purposes I’m unsure if SIF is the right way to go. The fact that SIF has its own channel in Sitecore Slack, and a not insignificant number of questions on Stack Exchange, suggests that perhaps the way we approach installation might need to change in future.

The people I’m particularly thinking of here are those who want to investigate and assess the product for evaluation purposes. If they can’t get it installed quickly and smoothly, Sitecore risks alienating potential customers and hence losing out on valuable revenue.

In the short term the community (specifically Rob Ahnemann) has kindly stepped in to bridge the gap with a GUI wrapper in the form of SIF-less(!). This is obviously very useful for developers and demo purposes, but not for the aforementioned people, who will probably want to follow the officially documented way of installing.

Thinking longer term, if Sitecore continue their pursuit of breaking out and abstracting parts of the system, many parts will be able to run in lightweight Linux Docker containers. Docker is not currently supported by Sitecore, but I hope they will embrace it in the near future, as people like Per Manniche Bering have made some great strides with it, and I personally feel this is the direction developer, test, and even production setups will go in the next few years.

Git gotcha

A little Git gotcha that I’ve seen crop up from time to time is an obscure error encountered when trying to pull a branch or carry out a git fetch:

  • error: cannot lock ref ‘refs/remotes/origin/feature/xyz’: is at aff0047acaf8f324f70b9d7f71279f9ed5efb66d but expected 4b3db3223c87a8b0a06ed0f731aa15b5d8f6dfc8
  • ! 4b3db3223..e7a86e61a feature/xyz -> origin/feature/xyz (unable to update local ref)

This can affect developers during their day-to-day workflow using Git, but where it’s most commonly noticed is on the build server, when all the CI builds start failing, emails get sent, alarms go off and people get shouty. 😉

It’s not particularly obvious, but this can occur if there are two branches on origin with the same name but different capitalisation e.g.

  • feature/xyz
  • feature/Xyz

This only occurs on operating systems with case-insensitive file systems, such as Windows.

To resolve the issue rename or delete one of the branches and you’re good to go again.
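The clash and the fix can be sketched end-to-end with plain git commands in a throwaway repository (all paths and branch names here are illustrative, not from a real project):

```shell
# Reproduce the case-clash in a throwaway repo (paths/names are illustrative)
set -e
rm -rf /tmp/caseclash && mkdir /tmp/caseclash && cd /tmp/caseclash

git init -q --bare origin.git
git clone -q origin.git work 2>/dev/null && cd work
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "init"

# Create two branches on origin whose names differ only by case
git push -q origin HEAD:refs/heads/feature/xyz
git push -q origin HEAD:refs/heads/feature/Xyz

# On a case-insensitive filesystem a subsequent fetch can fail with
# "cannot lock ref". Confirm the clashing refs exist on the remote:
git ls-remote --heads origin | grep -i 'feature/xyz'

# The fix: delete one of the offending branches on the remote...
git push -q origin --delete feature/Xyz

# ...then prune any stale remote-tracking refs locally
git fetch -q --prune origin
```

If the branch contains work you need, push it under a new (differently spelled) name first and then delete the old ref, rather than deleting outright.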

Sitecore Virtual Developer Day 2019

Sitecore hosted their second Virtual Developer Day on 7th March. This was a free online event offering a series of webcasts on Sitecore-related topics by a variety of speakers from the community. After filling in a simple registration form, the presentation link is sent to your inbox and you can log in.

As a freelancer I personally find it worthwhile every so often to down tools and take a day out to watch and read up on some of the interesting content produced by the Sitecore community. Virtual Developer Day makes this easy by bringing together a bunch of presentations which were recorded as part of Sitecore Symposium in Florida last year.

Each video was scheduled to start at a preset time in one of four timezones. Once a video had started, it immediately switched into an “on demand” mode, which meant it could be streamed at your leisure thereafter.

If you value your time, (and you should!) it’s important to make the videos work for you. I recommend picking the topics/speakers you’re interested in and having something to do in between the videos, or potentially end up procrastinating and wasting large parts of the day. I had other client work to do alongside/in between the videos so it ended up being a productive “day off”. 

It didn’t take me long to figure out that rather than wait for the videos I was interested in to start I could switch the selected time zone to Australia time and watch any of the videos on demand rather than wait for them to be broadcast at a preset time.

The On24 webinar interface made for a nice experience – the video and accompanying slides were presented in resizable windows, together with a built-in Twitter window for social interaction. The format worked well.

Picks of the day for me were:

  • Martina Welander‘s guide to privacy and GDPR in Sitecore, delivered enthusiastically on what might otherwise be a dry topic
  • Raul Jimenez presenting a great video on harnessing the power of Docker for Sitecore development
  • Sheetal Jain‘s engaging presentation on Salesforce integration with Sitecore.
  • Kelly Rusk‘s dulcet tones on his SIF deep dive
  • Martin Davies‘ Helix Smells – detailing solutions to issues that often crop up when implementing Helix patterns in development

At the time of writing you can still register to view the videos and therefore stream them at your leisure.

Gems from the Marketplace – Sitecore Security Rights Reporting

In this series I’ll be mentioning some of those slightly lesser known modules that exist in the Sitecore Marketplace, which I’ve found particularly useful. There really is a plethora of useful modules on there, some quite old, some bang up to date, but hopefully you might find one or two as useful as I have.

The first one up is called the Sitecore Security Rights Reporting module by Jan Bluemink.

I found this module very helpful when trying to observe the current state of the Roles and Users in a particular Sitecore instance, with a view to rationalising the list of people with admin access and reorganising the role membership.

Accessible from the Security Tools menu, it allows you to easily view user role membership in a matrix format that you can drill down into, and saves a lot of time spent clicking into and out of the standard Sitecore dialog boxes.

The overview can be particularly useful when reorganising or pruning a large number of roles and users

As a footnote Jan advises that the tool makes use of database queries that may put load on the server, so bear that in mind when running it anywhere other than locally. 

Jan has also recently updated the module for Sitecore 9.1.

Beware of View Compilation with Helix Features!

Totally sensationalist headline for a small issue but now I’ve got your attention… 😛

View precompilation is cool: it lets us catch errors at compile time, helping us fail faster, and we avoid the performance hit of compilation at runtime since the views are already compiled. We no longer need to ship the .cshtml files, resulting in smaller deployments, and it can also help solve issues with large numbers of physical view files.

However, I’ve noticed an interesting issue when creating Helix features, whereby in some situations the wrong view will be rendered for a component.

What?

Imagine we have two Helix Features, which output as assemblies FeatureA.dll and FeatureB.dll. Both have components (View or Controller rendering, it doesn’t matter) that reference the same view name in their respective projects, e.g. ~/Views/Test.cshtml.
Despite the fact they are in two different assemblies, with view precompilation enabled, Feature B’s view will always be rendered to screen when asking for Feature A’s:

Controller Action method asks for View A and gets View B

This is obviously less than ideal, as when adding Feature B a developer would not envisage Feature A being affected by their change. The issue would not be caught unless we explicitly test that Feature A is still working correctly when deploying Feature B (a full regression test, perhaps).

This occurs with the latest versions (at the time of writing) of two popular precompilation libraries available on NuGet: StackExchange.Precompilation and Razor Generator.

Why?

Diving deep into the StackExchange.Precompilation library… 

When called upon to render a view, the view engine in the StackExchange.Precompilation library loops through all the types in each loaded assembly and checks whether any of them inherit from or implement WebPageRenderingBase (which your compiled views will).

It retrieves the [CompiledFromFileAttribute] from the compiled view class:

namespace ASP
{
    [CompiledFromFile("E:\\dev\\Sandbox\\Sitecore\\Components\\Test\\TestLayout\\Test.cshtml")]
    public class _Page_Components_Test_TestLayout_Test_cshtml : WebViewPage<TestGlassModel>
    {
        protected HttpApplication ApplicationInstance
        // ...
    }
}

This absolute path retrieved from the attribute is clearly unique between assemblies, but unfortunately it is converted into a virtual path, and the uniqueness is lost.

This virtual path is added as the key to a _views dictionary together with the compiled view type as per below:

private readonly Dictionary<string, Type> _views;

foreach (var view in viewTypes)
{
    var attr = view.GetCustomAttribute<CompiledFromFileAttribute>();
    if (attr != null)
    {
        // The dictionary value gets overwritten here when MakeVirtualPath
        // generates a key that is already present in the dictionary.
        _views[MakeVirtualPath(attr.SourceFile, sourceDirectory)] = view;
    }
}

The loop then continues through the list of assemblies.

The problem becomes apparent when two assemblies generate the same virtual view path: the assembly that comes alphabetically last wins, overwriting the value for that virtual path key in the _views dictionary.

The Razor Generator code works in a similar way and uses a _mappings dictionary for the views also keyed on the virtual view path and hence exhibits the same problem.

By decompiling the views we can immediately see the issue with the Razor Generator approach: they have identical virtual view paths stored in the [PageVirtualPath] attribute, so they are indistinguishable:

Now, having two Helix Features with the same virtual view paths is likely to be a rare occurrence, but it’s an interesting observation nonetheless. It’s slightly more likely to crop up in solutions where view compilation has existed for a long period of time: since copying the physical views is not necessary, developers will never experience physical view copy clashes in local builds when adding new Features. It’s also worth being observant when attempting to modularise vertically sliced Sitecore solutions where views form part of a Component/Feature folder together with concerns such as the Controller and ViewModel. Controllers returning a local view may end up with the same virtual path, though this is still highly unlikely, as hopefully the folder structure will be distinct enough.

The workaround is to ensure you have different folder paths and/or view names for each Feature when using view compilation.

Sample demo

In order to demonstrate the issue with a code sample, I’ve pushed a demo to GitHub which you can download and run with IIS Express. I’ve removed the dependency on Sitecore for demo purposes, since it is not required to convey the problem.

How to verify what’s inside your Sitecore Rendering Cache

I recently found myself doing some performance optimisations around renderings in Sitecore, one of which involved caching. Having spent time cache tuning some renderings, I wanted to prove the cache contained the HTML markup I expected. Unfortunately, Sitecore doesn’t let you view the contents of the HTML cache without writing some custom code.

Sitecore Rocks to the rescue! To view the keys in Sitecore Rocks, select the Caches tab from the Sitecore Explorer, scroll down to “yoursitename[html]” and double click. The “Explore cache” window will appear and you should then be able to see the keys currently in the cache. Sitecore Rocks will display the cache keys, but unfortunately not the cached HTML stored against them.

Sitecore Rocks allows you to see the cache keys in use but not the corresponding markup

So I decided to knock up a (very) quick and dirty admin page in the same vein as the standard Sitecore admin pages. It can simply be dropped into the Sitecore admin folder and, when browsed to, will display a list of cache keys and the respective cached HTML for a given site. I’ve submitted this page as a module on the Sitecore Marketplace that is easy to download and use. It’s also on GitHub.

Custom Cache Viewer allows you to view the markup stored against each cache key

To use:

  • Download the module from the Marketplace when approved or from Github
  • Copy the aspx file to the /sitecore/admin folder in your installation
  • Browse to /sitecore/admin/htmlcacheviewer.aspx
  • Select a cache from the list of site caches in the drop down list. The page will automatically update to show the current cache keys and values in the selected HTML cache.
  • To order the cache keys alphabetically ascending or descending click the cache key column header
  • For those cache keys that include a Sitecore GUID, such as when varying by Datasource, a clickable link is generated that will take you into the Content Editor and select the item in question.
  • To copy the HTML markup to the clipboard for further analysis click the copy button for the row you wish to copy.
  • The list of cache keys is retrieved when the cache is first selected, so to refresh and show the current state, click the Refresh link under the drop down list of caches.

Let me know if you find it useful (or not!)