SharePoint Dragons

Nikander & Margriet on SharePoint

Issue loading the correct Microsoft.SharePoint.Client assembly from a PowerShell script

We experienced an issue with a PowerShell script that loaded several SharePoint assemblies, such as Microsoft.SharePoint.Client.dll, that were shipped along with the script in the same folder. The PS script loaded the assemblies, but on some environments it loaded the WRONG version. We needed Microsoft.SharePoint.Client.dll version 15.0.4797.1000 or higher, but what we got was an older version. This happened on environments that didn't contain the latest CU update and so still had the older version in the GAC. Since we were in a situation where we were not allowed to install the CU update on the machine, but still wanted to run code with the newer assembly version, we tried to find a way to specify the exact assembly that we wanted to load from within a PS script.

A quick tip: we quickly became annoyed with the fact that we needed to close the PS cmd prompt after loading assemblies, because otherwise the loaded DLLs are not unloaded from the app domain. Instead, we did the following:

1 – Open a PS cmd prompt

2 – Type 'powershell', which starts a new PowerShell session.

3 – Then, execute the code that loads the assembly and test it.

4 – Type 'exit' to close the PS session and unload the DLLs.

Repeat this procedure as often as needed. This way, you don't have to close the PS cmd prompt itself anymore, which is a time saver.

We came up with the following ways to load assemblies within a PS script:

– Via Add-Type -Path, which allows you to specify the path to assembly DLL files that contain the needed types.

Example: Add-Type -Path 'D:\LCTest\Microsoft.SharePoint.Client.dll'

– Via Add-Type -LiteralPath, which also allows you to specify the path to assembly files. The difference with the Path parameter, according to the documentation, is that the value of the LiteralPath parameter is used exactly as typed and no characters are interpreted as wildcards.

Example: Add-Type -LiteralPath 'D:\LCTest\LoisAndClark.Microsoft.SharePoint.Client.dll'

Please note: we altered the name of the SharePoint client assembly to LoisAndClark.Microsoft.SharePoint.Client.dll to make absolutely sure that we were loading the assembly we intended.

– Via reflection and the LoadFile method, which loads the contents of an assembly file.

Example: [Reflection.Assembly]::LoadFile('D:\LCTest\Microsoft.SharePoint.Client.dll')

– Via reflection and the LoadFrom method, which loads an assembly given its file name or path.
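Example (a minimal sketch, assuming the same D:\LCTest folder as the previous examples):

[Reflection.Assembly]::LoadFrom('D:\LCTest\Microsoft.SharePoint.Client.dll')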


– Via reflection and the Load method, which loads an assembly based on its fully qualified name:

[Reflection.Assembly]::Load('Microsoft.SharePoint.Client, Version=15.0.4797.1000, Culture=neutral, PublicKeyToken=71e9bce111e9429c')

By the way, we found an easy way to check if the assembly was recent enough for our purposes by loading the assembly and executing the following PS code:

[Microsoft.SharePoint.Client.AuditMaskType] $test = 0

If that line of code worked, the assembly was new enough for us.
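By the way, a quick way to inspect the version without loading the assembly at all is reading the file version info straight from disk; a minimal sketch, assuming the same D:\LCTest path as before:

(Get-Item 'D:\LCTest\Microsoft.SharePoint.Client.dll').VersionInfo.FileVersion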

Although we now had various ways to load assemblies within a PS script, none of them worked correctly, because they all loaded the old assembly version. The thing is that the various assembly loading methods imply that you can specify a specific assembly location when, in fact, you cannot. As soon as the code tries to load the assembly, the CLR checks if there is an assembly in the GAC with the same strong name. If there is, the CLR loads that one instead. We came up with a couple of ideas to try and circumvent this (a quick snippet demonstrating the redirection follows the list):

– Configuration via app.config

– Use ReflectionOnlyLoadFrom

– Remove strong name

– Uninstall SharePoint DLLs from GAC

– Use DevPath
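As mentioned above, you can see the GAC redirection in action by checking where an assembly was actually loaded from; a minimal sketch, assuming the DLL also exists in the GAC:

[Reflection.Assembly]::LoadFrom('D:\LCTest\Microsoft.SharePoint.Client.dll').Location

If the strong name matches an assembly in the GAC, the Location property points at the GAC path instead of D:\LCTest.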

Configuration via app.config file
The CLR assembly probing mechanism can be influenced via an app.config file. Since we're dealing with a PS script that loads assemblies, the next question is how you should load an app.config file. As it turns out, the app.config file can be loaded in the app domain that executes the PS code. Unfortunately, the app.config file cannot influence the behavior of the assembly resolving process in our scenario: using the app.config would only have had a chance of succeeding if the assembly version (and not just the assembly file version) had been different. But that was not the case, so we had to abandon this idea.
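For reference, pointing the current app domain at a config file from within PS can be done like this; a minimal sketch (the config file name is just a placeholder), and note that it has to run before the configuration system is used for the first time in that app domain:

[AppDomain]::CurrentDomain.SetData('APP_CONFIG_FILE', 'D:\LCTest\myscript.config')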

Use the ReflectionOnlyLoadFrom method via reflection
As opposed to the other reflection methods discussed previously, reflection does offer a method that allows you to load an assembly from a specific location. That method is called ReflectionOnlyLoadFrom and can be used like this:
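A minimal sketch, assuming the same D:\LCTest folder as before:

[Reflection.Assembly]::ReflectionOnlyLoadFrom('D:\LCTest\Microsoft.SharePoint.Client.dll')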


However, this method was useless to us, because assemblies that are loaded this way are not executable. You can use this method to find out information about a specific assembly, but we were not interested in that.

Remove strong name
By definition, assemblies that are NOT strong named don't match assemblies in the GAC. One approach would be to remove the strong name from the required SharePoint assemblies using dedicated tooling. This way, it should be possible to prevent the GAC version of the assembly from being loaded. Although this should work and is valid as a train of thought in a brainstorm, we didn't pursue this approach because it is silly.

Remove SharePoint DLLs from the GAC
We had a test environment that shouldn't contain SharePoint assemblies but did nonetheless. It was certain that those DLLs were not required on that machine; it was unclear how they got there, and they couldn't be updated to a newer version because of an obscure error. So we tried what would happen if we removed the SharePoint DLLs from the GAC directly, assuming that if we succeeded the CLR would have no choice but to load the intended assembly version. Alas, every time we did that, after a short while the removed DLLs were restored. Allegedly, Windows Installer is responsible for this: it keeps track of a reference count, and if the count is > 0 there are still applications depending on a DLL. You can only really remove the assembly from the GAC once the reference count has reached 0. This is made clearer by removing a DLL via a Visual Studio command prompt, like so:

gacutil /u "Microsoft.SharePoint.Client, Version=, Culture=neutral, PublicKeyToken=71e9bce111e9429c"

This fails, and the error message states that the DLL cannot be removed because there are applications that depend on it. Since we were unsure which applications were still dependent on the DLL (which should be deducible by exploring the registry), we had to abandon this approach.

Use DevPath
The DevPath setting is a bit esoteric, but proved useful in our case. You can use it to indicate that a specific machine is a development machine, allowing you to choose a specific path where assemblies should be loaded from, thereby bypassing the GAC. Doing this also means that assembly version numbers are not taken into account anymore; the .NET assembly resolver just loads the first assembly it finds. You can set the DevPath setting by opening the machine.config file (C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Config\machine.config) and replacing the <runtime /> element with:


<runtime>
  <developmentMode developerInstallation="true" />
</runtime>


Once you've done that, the .NET assembly resolver checks if there is a DEVPATH system environment variable and uses that path to load assemblies (e.g. you can set the DEVPATH environment variable to D:\LCTest\).
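Setting the DEVPATH system environment variable from PS could look like this; a minimal sketch using our test folder as the value (requires an elevated prompt):

[Environment]::SetEnvironmentVariable('DEVPATH', 'D:\LCTest\', 'Machine')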

This approach works, but it also means you have designated a machine as being a DEV machine. That would only rarely be acceptable.

So: you can force the correct assembly to be loaded from within a PS script, bypassing the one located in the GAC, but you'll have to jump through some hoops and live with concessions. In our case, we decided the only valid way forward was to create a clean machine without any SharePoint DLLs in the GAC, thereby ensuring that the wrong assembly version can never be loaded in the first place.

SharePoint 2013 on demand loading pattern

SharePoint has a JavaScript on-demand loading library: the SP.SOD library. We find the following pattern useful to ensure that a custom JavaScript library called MyCustomLib.js is loaded only once and on demand. In the pattern below, MyCustomLib.js is only loaded when SP.SOD.executeFunc() is executed.

RegisterSod('MyCustomLib.js', '/sites/OurTestSite/Style%20Library/Javascript/MyCustomLib.js');
RegisterSod('AnotherCustomLib.js', '/sites/OurTestSite/Style%20Library/Javascript/AnotherCustomLib.js');
RegisterSodDep('MyCustomLib.js', 'SP.js');
RegisterSodDep('MyCustomLib.js', 'AnotherCustomLib.js');
SP.SOD.executeFunc('MyCustomLib.js', null, function () { LoisAndClark.CustomApplication.MyCustomLib.init(); });
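One thing to keep in mind: a script registered via RegisterSod should notify the SOD framework once it has loaded, otherwise executeFunc() callbacks keep waiting for it. At the end of MyCustomLib.js, you'd typically add a line like this:

SP.SOD.notifyScriptLoadedAndExecuteWaitingJobs('MyCustomLib.js');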

How to create an OfficeDev PnP Provisioning engine extensibility provider

The OfficeDev PnP provisioning engine is able to create an XML template based on a given SharePoint site and then use that XML template to create new sites. Ootb, the provisioning engine can do a considerable amount of stuff, as detailed in the PnP provisioning schema. The provisioning engine also allows you to define extension points that let you add custom steps to the provisioning process, and in this article we'll explain how to do that.

First of all, it's quite possible to get the OfficeDevPnPCore15 (for SharePoint 2013 on-prem) or OfficeDevPnPCore16 (for SharePoint Online) NuGet packages and use the provisioning engine like that. However, we've found that there's tremendous value in being able to step through and debug source code, so unless you've got a tool that allows you to debug 3rd party assemblies on the fly within Visual Studio, we much prefer to add the OfficeDevPnP.Core project itself to our own provisioning tool and add a project reference to it, so we have access to all source code. You can either obtain the source code by creating a project based on OfficeDevPnP.Core.dll (for example, via a decompiler such as Telerik JustDecompile) or get it directly by cloning it from the GitHub repository. This gives you much needed insight into the inner workings of the provisioning engine.
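If you do go the NuGet route, pulling the on-prem package into your project from the Visual Studio Package Manager Console is a one-liner (a sketch; use OfficeDevPnPCore16 instead for SharePoint Online):

Install-Package OfficeDevPnPCore15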

When building an extensibility provider, we used two sources:

– A succinct article about extensibility providers.

– The PnP provisioning schema documentation.

The process of building an extensibility provider goes like this:

1. Create a custom provider class that implements the IProvisioningExtensibilityProvider interface.

2. Add a custom provider section to the XML template.

3. Implement the logic of the custom provider.

The C# code of a custom provider class looks like this:

using OfficeDevPnP.Core.Framework.Provisioning.Extensibility;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Microsoft.SharePoint.Client;
using OfficeDevPnP.Core.Framework.Provisioning.Model;
using System.Xml.Linq;

namespace MyTest.Providers
{
    public class CustomProvider : IProvisioningExtensibilityProvider
    {
        public void ProcessRequest(ClientContext ctx, ProvisioningTemplate template, string configurationData)
        {
        }
    }
}

Then, you need to adjust the XML generated by the provisioning engine and, if it's not already there, add a custom <pnp:Providers> section. The <pnp:Providers> section needs to be placed within the <pnp:ProvisioningTemplate> section, and although the exact position doesn't really seem to matter, we place it pretty close to the end of the <pnp:ProvisioningTemplate> section. Each <pnp:Provider> element within it needs two attributes:

– Enabled, this is true or false and allows you to temporarily disable a custom provider.

– HandlerType, which expects the fully qualified name of the type that will be executed once the provisioning engine comes across this XML. It expects the following info: {namespace + class name of the extensibility provider}, {assembly name}, {version}, {public key token, if the assembly is strong named}.

Within the <pnp:Provider> section, you can place anything you like as long as it's valid XML. The following XML fragment is a minimal provisioning template that just creates a web property bag entry and executes a custom extensibility provider:

<?xml version="1.0"?>
<pnp:Provisioning xmlns:pnp="http://schemas.dev.office.com/PnP/2015/12/ProvisioningSchema">
  <pnp:Preferences Generator="OfficeDevPnP.Core, Version=2.2.1603.0, Culture=neutral, PublicKeyToken=3751622786b357c2" />
  <pnp:Templates ID="CONTAINER-TEMPLATE-[GUID]">
    <pnp:ProvisioningTemplate ID="TEMPLATE-[GUID]" Version="1">
      <pnp:Providers>
        <!-- The assembly version and the configuration namespace below are placeholders; use your own values. -->
        <pnp:Provider Enabled="true" HandlerType="MyTest.Providers.CustomProvider, MyTest, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null">
          <pnp:Configuration>
            <MyProviderConfiguration id="SampleConfig" xmlns="http://schemas.loisandclark.eu/MyProviderConfiguration">
              <ChildNode Attribute="value">TextContent</ChildNode>
            </MyProviderConfiguration>
          </pnp:Configuration>
        </pnp:Provider>
      </pnp:Providers>
      <pnp:PropertyBagEntries>
        <pnp:PropertyBagEntry Key="lois" Value="clark" Overwrite="true" />
      </pnp:PropertyBagEntries>
    </pnp:ProvisioningTemplate>
  </pnp:Templates>
</pnp:Provisioning>

You can't exert fine-grained control over the exact execution point in the provisioning pipeline, but all extensibility providers are executed sequentially and almost at the end of the provisioning pipeline. Currently, only WebSettings (containing settings for the current web site, such as the site logo and master page URL) and PersistTemplateInfo (info about the provisioning template that gets persisted in a web property bag entry) are executed after your extensibility providers.

So what's left to do is provide an implementation of the ProcessRequest() method of the extensibility provider. It gets passed the SharePoint context and gets access to the XML in the custom <pnp:Provider> section. Your code will have to process that config info and use the current web to do something useful. The following code is a valid implementation:

using OfficeDevPnP.Core.Framework.Provisioning.Extensibility;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Microsoft.SharePoint.Client;
using OfficeDevPnP.Core.Framework.Provisioning.Model;
using System.Xml.Linq;

namespace MyTest.Providers
{
    public class CustomProvider : IProvisioningExtensibilityProvider
    {
        public void ProcessRequest(ClientContext ctx, ProvisioningTemplate template, string configurationData)
        {
            ClientContext clientContext = ctx;
            Web web = ctx.Web;
            string configurationXml = configurationData;
            // Placeholder: must match the xmlns of your custom configuration section.
            XNamespace ns = "http://schemas.loisandclark.eu/MyProviderConfiguration";
            XDocument doc = XDocument.Parse(configurationXml);
            string id = doc.Root.Attribute("id").Value;
            var childNode = doc.Root.Descendants(ns + "ChildNode").FirstOrDefault();
            if (childNode != null)
            {
                string innerValue = childNode.Value;
                string attr = childNode.Attribute("Attribute").Value;
            }
        }
    }
}

Concluding: the extension points in the provisioning process aren't exactly great, but extending it is easy to do, and at least you get the correct SharePoint context for free and have the opportunity to store all config info in a single place.

Bug in January 2016 CU for SharePoint 2013: adjusting external links in site pages

There's a bug in the January 2016 CU where the CU erroneously updates external links. On a site page, we have a protocol-relative link to an external JavaScript library placed on a CDN, like this:

· //[cdn host]/js/jquery/jquery.js

After the CU is done “fixing” it, this link has become an internal one:

· /js/jquery/jquery.js

Because of that, the page is no longer able to find the JavaScript file and the page fails. Links that explicitly include the protocol are not molested in this way, so a link starting with http:// remains http:// after CU installation. Let's hope this bug is fixed in future updates, since we really want to leave out explicit protocols (like so: //[cdn host]/js/jquery/jquery.js). We also would like the CU not to try to be too smart and to stay away from the contents of our site pages. Btw, it also seems that the CU doesn't touch similar references in page layouts.

SharePoint Debugging: Not without a trace

We're always quite interested to see how other people try to solve SharePoint issues and thought it would be interesting to share a recent experience with MS support. In a case where list items got corrupted after a migration, MS support was interested in the following:

– An HTTP trace retrieved via Fiddler taken while the issue is reproduced via the browser.

– Relevant ULS log files.

– A memory dump of the SharePoint process retrieved via tttracer taken while the issue is reproduced.

To us, the latter choice is the most interesting one. Tttracer.exe refers to the Microsoft Time Travel Tracing Tool, a diagnostic tool that captures trace info and extends the WinDbg tool to load such trace files for further analysis. Tttracer allows you to select one or more specific processes on your computer and collects info about them. At a later time, MS support is able to use such trace files to go back and forth in time to diagnose SharePoint processes before, during, and after issues.

Unfortunately, tttracer is not available outside Microsoft, so it's of no immediate use to us. However, there were some steps in the trace capturing process that are good practices to follow anyway, such as:

1. If you’re interested in doing a memory dump, isolate a WFE that will be used for testing the issue.

2. If you're interested in doing a memory dump, edit the hosts file on that WFE to ensure all SharePoint URL calls are directed to the WFE, and not to a load balancer.

3. Set ULS logging to verbose and put that info in a separate log file (via Set-SPLogLevel -TraceSeverity VerboseEx -EventSeverity Verbose and New-SPLogFile; see the sketch after this list).

4. Reset IIS.

5. Reproduce the issue.

6. If you’re interested in doing a memory dump, find the process id of the application pool that hosts the SharePoint site where the issue occurs (by executing “%windir%\system32\inetsrv\appcmd list wps” on a command prompt).

7. Reproduce the issue.

8. Analyze all the info you retrieved.
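As referenced in step 3, the verbose ULS logging part could look like this; a minimal sketch using standard SharePoint cmdlets (run in an elevated SharePoint Management Shell):

Set-SPLogLevel -TraceSeverity VerboseEx -EventSeverity Verbose
New-SPLogFile
# ... reproduce the issue ...
New-SPLogFile      # start another fresh log file so the repro stays isolated
Clear-SPLogLevel   # restore the default logging levels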

We suspect your own troubleshooting may not be that different, and most likely will be more extensive than this, but for sure it won’t hurt to compare notes!

Profiling SharePoint databases

Of course, messing with SharePoint databases is not supported, but we've found there are times when we do want to take a closer look at SharePoint databases and see where certain information is stored or how long an operation takes at the database level. As we don't do this very often, we thought it would be convenient to document the procedure for profiling SharePoint databases, and we figured the write-up could be helpful for others too.

Follow this procedure to start profiling databases:

1. Start SQL Server Profiler directly or via SQL Server Management Studio and then choose Tools > SQL Server Profiler.

2. Click File > New Trace. This opens the Connect to Server dialog window.

3. Enter the server name of the SharePoint database server or instance that you want to profile.

4. Click Connect.

5. This opens the Trace Properties dialog window.

6. Enter a valid Trace name.

7. In the Use the template drop-down list, choose TSQL_Duration. This template is especially good for finding out how long SQL queries and stored procedures take to run.

8. Click the Events Selection tab.

9. Select the Show all columns checkbox.

10. Check the DatabaseName column for both Stored Procedures and TSQL.

11. If you don’t know the exact name of the database(s) you want to profile, click Run.

Perform the UI actions that you want to investigate further, and click the Pause Selected Trace button. This gives you the chance to identify the names of the databases you're interested in. Now that you've established that, you're ready to add a filter to profile only the databases you're interested in and no more. This is a necessary step, as the number of queries executed on a SharePoint database server is quite overwhelming. Typically, but not always, you'll be most interested in the SP_Content_* databases.

Now follow the next procedure to add some filters:

1. Click the Clear Trace Window button.

2. Click File > Properties.

3. Click the Events Selection tab.

4. Click Column Filters. This opens the Edit Filter dialog window.

5. Select DatabaseName.

6. Click Like.

7. Enter the desired database name, e.g. %Content%.

8. Click Run.

Now you have a better chance to find out what’s taking so long and where specific information is stored.

Coding for kids

Margriet wrote an interesting blog post about getting kids in contact with programming. You can read more about it over here. In the Netherlands, there's the option to join codeuur; the article discusses a lot more options in English.

Browser chart site

Everybody needs a browser charting site to look up whether a certain CSS, JavaScript or HTML 5 feature is supported, because it saves tons of time. We kinda like one in particular: it allows you to check if a feature is supported in a heartbeat, it allows you to compare multiple browsers and versions with each other, and it shows insights into usage info for your country!

A case of "easy": Virto SharePoint Bulk Operations Toolkit

Recently, we took a look at Virto's SharePoint 2013 workflow activities kit and we were happy with what we saw. As a result, we also decided to take a look at another Virto product: the Virto Bulk Operations Toolkit. This is a bundle of 8 components that all do something useful with bulk operations, shipped at a nice bundle price saving you over 40%. This is a feature-packed bundle that is definitely capable of handling some of the questions we or our customers have been struggling with. And, oh yeah, it comes with comprehensive and extensive documentation.

Let’s take a look at the 8 components.

1. Bulk Check In and Approve
As the name implies, this component is for checking in and approving multiple SharePoint files. It allows end users to check in groups of documents in bulk and has features such as adding comments to the bulk whilst doing a check-in, publishing or drafting versions if approval is enabled, or doing bulk discards of multiple checked-out files. The user interface for doing this is a lot easier than what SharePoint offers ootb. The next Figure shows an example (all pictures are taken with permission of Virto):


Bulk Check In and Approve has a comprehensive set of features such as:

  • Search support for check in and approve actions.
  • CAML query support for check in and approve actions.
  • Menus that are aware of current user permissions.
  • Default view type setting.
  • Ability to check-in documents in folders and subfolders.
  • Bulk discard of files.
  • Bulk check-in and approve capabilities.

2. Bulk Data Edit Web Part
This is a component for editing multiple SharePoint files. This is a request we've heard time and time again (although you have to be careful with this!): the possibility to do SharePoint bulk edits; in other words, to edit the same metadata field for a bulk of list items. This enables end users to specify metadata for a bulk of items in 1 go (supported in all types of lists and libraries)! The ootb datasheet view alleviates the problem somewhat, but it is nowhere near as easy to use as this component. See the following screenshot to get an impression.


The Bulk Data Edit Web Part has a comprehensive set of features such as:

  • Ability to create new terms.
  • Search support for edit scenarios.
  • CAML query support for edit scenarios.
  • Taxonomy options.
  • Bulk edit for list or site scopes.
  • Editing data of the same field for a group of list items.

3. Bulk File Copy and Move
This is a component for copying and moving multiple SharePoint files. The copying (or, more often, moving) of large amounts of list items and/or documents is usually something that end users request of administrators, who usually use a high-end migration tool to do the heavy lifting. This component is great in that it places the power of moving larger amounts of misplaced documents/list items in the hands of end users, in the form of a separate web part or an additional action in the Action menu. Also check out the Figure below.


Bulk File Copy and Move has features such as:

  • Support for search for copy & move operations.
  • Support for CAML queries for copy & move operations.
  • Filtering possibilities for copy & move operations.
  • Support for using the same SharePoint views as in source libraries.

4. Bulk File Delete Web Part
This is a component for deleting multiple SharePoint files. This is also a special one. As you would expect, it can delete multiple list items/documents from any type of list. But this component goes a significant step further in that it allows you to specify which groups of items to delete via an advanced filtering mechanism (that allows you to delete by view, CAML queries, searches, etc). The Figure below shows an example of search support in deletion scenarios:


The Bulk File Delete Web Part has features such as:

  • Search support when deleting files.
  • CAML support when deleting files.
  • Flexible max amount of files that are displayed.
  • Filtering possibilities during file deletion.

5. Bulk File Download Web Part
This is a component for downloading multiple SharePoint files. It allows end users to download files from a library and store them locally in a single .zip file. Again, it's easy to select which files must be downloaded. This component has been around since 2007 and was probably more important then than it is now, with all the offline capabilities of SharePoint, although we have seen a customer that periodically bundled documentation that was shipped to external partners, in which case this component would have been quite useful. It also works nicely in combination with component nr. 6, the Bulk File Unzip Utility. Currently, we're exploring these components to quickly fill some sites with relevant test data, which seems to work out quite nicely. The next Figure nicely demonstrates what this web part looks like:


The Bulk File Download Web Part has a comprehensive set of features such as:

  • Extensive logging during download.
  • Automatic JavaScript minification.
  • Support for large files.
  • Support for async operations.
  • Choosing specific files for downloading.
  • Download a bulk of files in a single archive.

6. Bulk File Unzip Utility

This is a component for unzipping multiple files into SharePoint. It unpacks zip archives to document libraries, retaining the original structure. For end users migrating to SharePoint from another system, this can be extremely useful: it allows end users to pack their documents, stuff them in SharePoint, and then use the Bulk File Copy and Move and Bulk Data Edit Web Parts to get the documents organized! This is a nice example of how the separate components within the bundle enhance each other. The next Figure shows how easy it is: select an archive and click the Unpack button!


The Bulk File Unzip Utility has features such as:

  • Unpacking archives to SharePoint document libraries.
  • Preserving folder structures after unpacking.
  • Ability to retain original file creation/modification datetime! This is a really cool feature which can be very important in certain legal situations.
  • Auto-deletion of archives after unpacking.
  • Unzip settings are scoped at either the library or site level.

7. Bulk File Upload Web Part
This is a component for uploading multiple files to SharePoint. In the past, we've written about uploading files to SharePoint in bulk and have even written our own little tool to do it. The bottom line always consists of the following points:

  • The standard SharePoint tools for uploading files in bulk to SharePoint are not advanced and reliable enough.
  • There are community tools that are better, but offer very limited support which is commonly not acceptable in enterprise scenarios.
  • There are high-end migration tools that do a great job at a great cost.

The Bulk File Upload Web Part has a comprehensive set of features such as:

  • Possibility to add field descriptions.
  • Show overwrite option.
  • The possibility to redirect all requests to the standard upload page to the Virto Bulk Upload (via an HTTP module).
  • Support for Content Organizer rules.
  • Possibility to resize jpeg images during upload.
  • Current user defaults for property values.
  • Maximum file size settings.
  • Allowed file type settings.
  • Upload of large files.
  • Partial uploads if the operation is cancelled.

It seems that Virto fills a very nice niche here by offering a professional and supported solution at a very reasonable price (at least, that’s what we think). The next Figure shows an impression.


8. HTML 5 Bulk File Upload
Last but not least, this is a component for uploading multiple files to SharePoint via HTML 5. It is a nice enhancement of the Bulk File Upload Web Part and is completely written in HTML 5. It is very similar to the Bulk File Upload Web Part, except that it's even better! This component is probably our favorite in the bundle. The next Figure shows what it looks like.


HTML 5 Bulk File Upload has features such as:

  • Uploading of bulk files.
  • Support for custom metadata.
  • Support for uploading files via drag-n-drop leveraging just HTML 5.
  • Ability to overwrite files during upload.
  • Display of file upload progress bar.
  • Restriction of file types that can be selected.
  • Limiting the max file upload size.
  • Support for unlimited amounts of content types and fields.
  • And, as with all of Virto's components, cross-browser support for MSIE, Firefox, Chrome and Opera.

So if any of the features of the Virto Bulk Operations Toolkit appeal to you, we won't try to stop you from checking it out.

jQuery Caret Plugin

Let's give a positive endorsement to the jQuery Caret Plugin. This library allowed us to find the current cursor position in a text area with code like this:

var targetElement = $('#myID');
var currentCursorPosition = targetElement.caret();

Contrary to the other solutions we’ve tried, this one also worked flawlessly in MSIE 8!