SharePoint Dragons

Nikander & Margriet on SharePoint

The scientific methodology for troubleshooting SharePoint

A methodology is a set of methods, rules, or ideas that are important in a science or art. The scientific methodology for troubleshooting SharePoint is a set of ideas for troubleshooting SharePoint issues, based on the traditional scientific method for problem solving. This methodology comprises 7 steps, each containing several ideas.

1.      Define the problem

Tips:

  • Talk to the person who discovered the issue.
  • If possible, talk to other people experiencing the same problem. They may approach the problem from different angles, giving you new insights, or they may be able to provide more accurate details (such as: how long has the issue existed? When did it start?).
  • How long has the problem existed?
  • How bad is it (what’s the priority)?
  • Identify the parties (and contact persons) that may suffer from either the problem or from the troubleshooting of the problem (business people, companies, departments). Also identify parties that may play a role in fixing the issue (DBA, network admin, 3rd-party vendor, etc.).
  • Forget what you think you know, don’t assume much, and doubt everything.
  • Most important of all: forget the pressure and try to free yourself from emotions. This will greatly improve your chances of solving the problem.

Core tasks:

  • Determine what the expected behavior is.
  • Determine what is really happening.
  • Take the trouble to describe the difference between the two scenarios.

2.      Do research

Tips:

  • Know your environment, know relevant code.
  • Read literature.
  • Post a forum question.
  • Discuss the problem with others.
  • Tell it to a rubber duck (which forces you to explain the problem thereby helping you to gain insights).
  • If you don’t make any progress within the hour, talk to others.
  • Contact other parties that can help you to get more insight into the problem.
  • Read the error message with attention, preferably aloud.
  • Collect data (a minimal gathering sketch follows this list):
    • ULS logs
    • Event viewer general info
    • Event viewer SharePoint application logs
    • Health log database
    • Temporarily set monitoring logging to verbose
    • Turn on ASP.NET debugging settings
    • Decode add-in access token
    • Turn on developer dashboard
    • Debug issue (client-side JavaScript, C#, PS)
    • Fiddler trace logs
    • IIS logs (inc. request tracing)
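For the ULS part, a minimal PowerShell sketch could look like this (the path and correlation id are placeholders you’d replace with your own values):

Add-PSSnapin Microsoft.SharePoint.PowerShell
# Merge ULS entries from all farm servers for a single correlation id into one file.
Merge-SPLogFile -Path 'D:\Logs\issue-trace.log' -Correlation '<correlation-id-from-the-error-page>' -Overwrite

Merge-SPLogFile saves you from hunting through the LOGS folder of every server in the farm separately.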

Core tasks:

  • Reproduce the problem, preferably in an environment where you can experiment safely.
  • If there is a working environment and an environment with the problem: keep calm, methodically check what is different, and the solution will come!

 

3.      Establish a hypothesis

Establish a hypothesis about the problem.

Core tasks:

  • Ensure the hypothesis is testable.

4.      Design the experiment

Tips:

  • Contact all parties that are impacted by the experiment (typically business).
  • Test the hypothesis.
  • Run automated tests.
  • Comment out code that is not needed to test the hypothesis.
  • Add break points.
  • Limit the number of variables you’re testing in a single experiment.
  • Try something funny (this probably won’t solve the issue, but let’s try it anyway).
  • Eliminate most likely causes, such as:
    • Recent code.
    • Custom code instead of MS code.
  • Do “half-splitting”: narrow the problem down by repeatedly splitting it in half (comment out code, determine whether the problem is client-side or server-side, determine whether the problem is environment-specific, and so on).
  • Don’t rush. This is when it’s easy to break even more. Take your time, because often you need to temporarily break something in order to fix it.

Core tasks:

  • Execute the experiment.
  • Take rudimentary notes.
  • Try variations.

5.      Gather data

Core tasks:

  • Establish the result of the experiment.
  • Take some distance and disconnect from emotions while you’re observing what is happening.
  • Gather log files.
  • Read error messages carefully. We repeat: Read error messages carefully.

6.      Analyze results

Core tasks:

  • Is the problem solved?
  • Did you learn anything? If you didn’t, the whole troubleshooting experience was pretty useless. Remember, even in the unfortunate event where you didn’t solve the problem, there is still value in having learned from the experience.
  • Do you understand why the problem was solved?
  • Keep notes.
  • Add automated tests that check for the issue.
  • Document failed experiments.

7.      Draw conclusions

Tips:

  • Enjoy successfully solving a problem.
  • Harder to do: enjoy progress.
  • Write about it (blog?).
  • Update documentation.

Core tasks:

  • Determine the next step:
    • The problem is fixed.
    • You need to revisit one of the previous steps.
    • You need to get help from others (MS support, external help, another department, colleagues).

References

Over the course of time we’ve used lots of sources to improve our bag of tricks. Unfortunately, we didn’t keep track of those sources. So, if you feel you need to be credited but didn’t, we’re awfully sorry. Contact us and we’ll add you to the references section.

We recommend checking out:

·        https://www.youtube.com/watch?v=h9YZXuUjyOs

 

 

Issues when adding SharePoint add-ins

We experienced two issues when adding SharePoint add-ins and thought we’d share them.

Issue: Add-ins can be added to some sites but not to others

Cause: Certain add-in permissions require the activation of specific SharePoint features. For example, if your add-in requires the News Feed (Social) capability, the web-scoped Site Feed feature has to be activated. If it isn’t, the add-in just won’t appear via Site Contents or via Site Contents > add an app, and SharePoint won’t give you a hint via the UI that this is happening.
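A quick, hedged PowerShell check for this scenario could look like this (the feature name ‘SiteFeed’ and the URL are our assumptions; verify the exact name on your farm first):

Add-PSSnapin Microsoft.SharePoint.PowerShell
# Find the exact feature name first; we assume it contains 'Feed'.
Get-SPFeature | Where-Object { $_.DisplayName -like '*Feed*' }
# Then activate the web-scoped feature on the affected site.
Enable-SPFeature -Identity 'SiteFeed' -Url 'http://yourserver/sites/yoursite'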

Issue: Adding an add-in repeatedly fails

Diagnostic info: Checking the ULS log files indicates that adding the add-in results in a Critical error that states the following:

“Insufficient SQL database permissions for user ‘Name: NT AUTHORITY\IUSR SID: [some SID] ImpersonationLevel: Impersonation’ in database ‘SP_UsageAndHealth’ on SQL Server instance ‘[Some SQL instance name]’. Additional error information from SQL Server is included below.  The EXECUTE permission was denied on the object ‘prc_CountAppInstanceData’, database ‘SP_UsageAndHealth’, schema ‘dbo’.”

Cause: The error message isn’t helpful, because it turns out not to provide a clue towards the real solution. In our case, the problem was caused by the fact that the quota limit of the site collection where we wanted to add the add-in had been reached. After fixing that, the add-in could be added successfully.
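To check, and if appropriate raise, the quota from PowerShell, something along these lines works (the URL and sizes are placeholders):

Add-PSSnapin Microsoft.SharePoint.PowerShell
$site = Get-SPSite 'http://yourserver/sites/yoursite'
# Compare current storage usage against the quota ceiling (both are stored in bytes).
"{0:N0} MB used of {1:N0} MB" -f ($site.Usage.Storage / 1MB), ($site.Quota.StorageMaximumLevel / 1MB)
# Raise the quota if that is the appropriate fix.
Set-SPSite -Identity $site -MaxSize 2GB -WarningSize 1800MB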

Issue loading the correct Microsoft.SharePoint.Client assembly from a PowerShell script

We experienced an issue with a PowerShell script that loaded several SharePoint assemblies, such as Microsoft.SharePoint.Client.dll, that were shipped along with the script in the same folder. The PS script loaded the assemblies, but on some environments it loaded the WRONG version. We needed Microsoft.SharePoint.Client.dll version 15.0.4797.1000 or higher, and what we got was version 15.0.0.0. This happened on environments that didn’t contain the latest CU update and so still had the older version in the GAC. Since we were in a situation where we were not allowed to install the CU update on the machine, but we still wanted to run code with the newer assembly version, we tried to find a way to specify the exact assembly to load from within a PS script.

A quick tip: we quickly became annoyed with the fact that we needed to close the PS cmd prompt after loading assemblies, because otherwise the loaded DLLs are not unloaded from the app domain. Instead, we did the following:

1 – Open a PS cmd prompt

2 – Type ‘powershell’, which starts a new, nested PowerShell session.

3 – Then, execute the code that loads the assembly and test it.

4 – Type ‘exit’ to close the PS session and unload the DLLs.

Repeat this procedure as often as needed. This way, you don’t have to close the PS cmd prompt itself anymore, which is a time saver.
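Put together, such a session looks roughly like this (the path is a placeholder):

powershell                                                  # step 2: start a nested session
Add-Type -Path 'D:\LCTest\Microsoft.SharePoint.Client.dll'  # step 3: load the assembly and test it
exit                                                        # step 4: end the session, unloading the DLLs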

We came up with the following ways to load assemblies within a PS script:

– Via Add-Type -Path, which allows you to specify the path to assembly DLL files that contain the needed types.

Example: Add-Type -Path 'D:\LCTest\Microsoft.SharePoint.Client.dll'

– Via Add-Type -LiteralPath, which also allows you to specify the path to assembly files. The difference with the Path parameter, according to the documentation (https://technet.microsoft.com/en-us/library/hh849914.aspx), is that the value of the LiteralPath parameter is used exactly as typed and no characters are interpreted as wildcards.

Example: Add-Type -LiteralPath 'D:\LCTest\LoisAndClark.Microsoft.SharePoint.Client.dll'

Please note: we altered the name of the SharePoint client assembly to LoisAndClark.Microsoft.SharePoint.Client.dll to make absolutely sure that we were loading the assembly we intended.

– Via reflection and the LoadFile method, which loads the contents of an assembly file.

Example: [Reflection.Assembly]::LoadFile('D:\LCTest\Microsoft.SharePoint.Client.dll')

– Via reflection and the LoadFrom method, which loads an assembly given its file name or path.

[Reflection.Assembly]::LoadFrom('D:\LCTest\LoisAndClark.Microsoft.SharePoint.Client.dll')

– Via reflection and the Load method, which loads an assembly based on its fully qualified assembly name:

[reflection.assembly]::Load('Microsoft.SharePoint.Client, Version=15.0.4797.1000, Culture=neutral, PublicKeyToken=71e9bce111e9429c')

By the way, we found an easy way to check whether the assembly was recent enough for our purposes: load the assembly and execute the following PS code:

[Microsoft.SharePoint.Client.AuditMaskType] $test = 0

If that line of code worked, the assembly was new enough for us.
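Wrapped in a small function of our own devising (the function name is ours, not an official API), the check looks like this:

function Test-ClientAssemblyVersion {
    # -as [type] returns $null instead of throwing when the type is absent,
    # so this works no matter which assembly version was loaded.
    return ($null -ne ('Microsoft.SharePoint.Client.AuditMaskType' -as [type]))
}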

Although we now had various ways to load assemblies within a PS script, none of them worked correctly, because they all loaded the old assembly version. The thing is that the various assembly loading methods imply that you can specify a specific assembly location when, in fact, you cannot. As soon as the code tries to load the assembly, the CLR checks whether there is an assembly with the same strong name in the GAC. If there is, the CLR loads that one instead. We came up with a couple of ideas to try to circumvent this:

– Configuration via app.config

– Use ReflectionOnlyLoadFrom

– Remove strong name

– Uninstall SharePoint DLLs from GAC

– Use DevPath

Configuration via an app.config file
The CLR assembly probing mechanism can be influenced via an app.config file. Since we’re dealing with a PS script that loads assemblies, the next question is how you should load an app.config file. As it turns out, the app.config file can be loaded into the app domain that executes the PS code. Unfortunately, the app.config file cannot influence the behavior of the assembly resolving process here: using the app.config would only have had a chance of succeeding if the assembly version (and not just the assembly file version) had been different. That was not the case, so we had to abandon this idea.
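For completeness, the trick for pointing the PowerShell app domain at an app.config file looks roughly like this (the path is a placeholder, and the call must run before anything touches the configuration system):

# Point the current app domain at a custom config file before any config is read.
[AppDomain]::CurrentDomain.SetData('APP_CONFIG_FILE', 'D:\LCTest\custom.config')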

Use the ReflectionOnlyLoadFrom method via reflection
As opposed to the other methods via reflection discussed previously, reflection does offer a method that allows you to load an assembly from a specific location. That method is called ReflectionOnlyLoadFrom and can be used like this:

[Reflection.Assembly]::ReflectionOnlyLoadFrom('D:\LCTest\Microsoft.SharePoint.Client.dll')

However, this method was useless to us because assemblies that are loaded this way are not executable. You can use this method to find out information about a specific assembly but we were not interested in that.

Remove strong name
By definition, assemblies that are NOT strong named don’t match assemblies in the GAC. One approach would be to remove the strong name of the required SharePoint assemblies using tools such as http://www.nirsoft.net/dot_net_tools/snremove.zip. This way, it should be possible to prevent the GAC version of the assembly from being loaded. Although this should work and is valid as a train of thought in a brainstorm, we didn’t pursue this approach because it is silly.

Remove SharePoint DLLs from the GAC
We had a test environment that shouldn’t have contained SharePoint assemblies but did nonetheless. It was certain that those DLLs were not required on the machine, it was unclear how they got there, and they couldn’t be updated to a newer version because of an obscure error. So, we tried what would happen if we removed the SharePoint DLLs from the GAC directly, assuming that if we succeeded, the CLR would have no choice but to load the intended assembly version. Alas, every time we did that, the removed DLLs were restored after a short while. Windows Installer is allegedly responsible for this: it keeps track of a reference count, and if the count is > 0, there are still applications depending on a DLL. You can only really remove an assembly from the GAC once its reference count has reached 0. This becomes clearer when you remove a DLL via a Visual Studio command prompt, like so:

gacutil /u "Microsoft.SharePoint.Client, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c"

This fails, and the error message states that the DLL cannot be removed because there are applications that depend upon it. Since we were unsure which applications were still dependent upon the DLL (which should be deducible by exploring the registry), we had to abandon this approach.
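If you want to see the traced references Windows Installer keeps for an assembly, gacutil’s /lr switch lists them (again from a Visual Studio command prompt):

gacutil /lr Microsoft.SharePoint.Client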

DevPath
The DevPath setting is a bit esoteric, but it proved useful in our case. You can use it to designate a specific machine as a development machine, which allows you to choose a specific path that assemblies are loaded from, bypassing the GAC. Doing this also means that assembly version numbers are no longer taken into account: the .NET assembly resolver just loads the first assembly it finds. You can set the DevPath setting by opening the machine.config file (C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Config\machine.config) and replacing the <runtime /> element with:

<runtime>

<developmentMode developerInstallation="true" />

</runtime>

Once you’ve done that, the .NET assembly resolver checks whether there is a DEVPATH system environment variable and uses that path to load assemblies (e.g. you can set the DEVPATH environment variable to D:\LCTest\).
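A hedged sketch for setting that variable machine-wide from PowerShell (run from an elevated prompt; the path is a placeholder):

# Machine-wide environment variables require elevation; restart processes to pick the change up.
[Environment]::SetEnvironmentVariable('DEVPATH', 'D:\LCTest\', 'Machine')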

This approach works, but it also means you’ve designated the machine as a DEV machine, which would only rarely be acceptable.

Conclusion
You can force the correct assembly to be loaded from within a PS script, bypassing the one located in the GAC, but you’ll have to jump through some hoops and live with concessions. In our case, we decided the only valid way forward was to create a clean machine without any SharePoint DLLs in the GAC, thereby ensuring that the wrong assembly version can never be loaded.

SharePoint 2013 on demand loading pattern

SharePoint has a JavaScript on-demand loading library, the SP.SOD library (more info at https://msdn.microsoft.com/en-us/library/office/ff410742(v=office.14).aspx). We find the following pattern useful for ensuring that a custom JavaScript library called MyCustomLib.js is loaded only once and on demand. In the pattern below, MyCustomLib.js is only loaded when SP.SOD.executeFunc() is executed.

RegisterSod('MyCustomLib.js', '/sites/OurTestSite/Style%20Library/Javascript/MyCustomLib.js');
RegisterSod('AnotherCustomLib.js', '/sites/OurTestSite/Style%20Library/Javascript/AnotherCustomLib.js');
RegisterSodDep('MyCustomLib.js', 'SP.js');
RegisterSodDep('MyCustomLib.js', 'AnotherCustomLib.js');
SP.SOD.executeFunc('MyCustomLib.js', null, function () { LoisAndClark.CustomApplication.MyCustomLib.init(); });

How to create an OfficeDev PnP Provisioning engine extensibility provider

The OfficeDev PnP provisioning engine (https://github.com/officedev/pnp-sites-core) is able to create an XML template based on a given SharePoint site and then use that XML template to create new sites. Out of the box, the provisioning engine can do a considerable amount of things, as detailed in the PnP provisioning schema (https://github.com/OfficeDev/PnP-Provisioning-Schema/blob/master/ProvisioningSchema-2015-12.md). The provisioning engine also offers extension points that allow you to add custom steps to the provisioning process, and in this article we’ll explain how to use them.

First of all, it’s quite possible to get the OfficeDevPnPCore15 (for SharePoint 2013 on-premises) or OfficeDevPnPCore16 (for SharePoint Online) NuGet packages and use the provisioning engine like that. However, we’ve found that there’s tremendous value in being able to step through and debug source code, so unless you’ve got a tool that allows you to debug 3rd-party assemblies on the fly within Visual Studio, we much prefer to add the OfficeDevPnP.Core project itself to our own provisioning tool and add a project reference to it, so we have access to all the source code. You can either obtain the source code by creating a project based on OfficeDevPnP.Core.dll (for example, via Telerik JustDecompile at http://www.telerik.com/products/decompiler.aspx) or get it directly by cloning the GitHub repository. This gives you much-needed insight into the inner workings of the provisioning engine.

When building an extensibility provider, we’ve used 2 sources:

– The succinct article at http://www.erwinmcm.com/using-an-extensibility-provider-with-the-pnp-provisioning-engine/

– The PNP provisioning schema at https://github.com/OfficeDev/PnP-Provisioning-Schema/blob/master/ProvisioningSchema-2015-12.md

The process of building an extensibility provider goes like this:

1. Create a custom provider class that implements the IProvisioningExtensibilityProvider interface.

2. Add a custom provider section to the XML template.

3. Implement the logic of the custom provider.

The C# code of a custom provider class looks like this:

using OfficeDevPnP.Core.Framework.Provisioning.Extensibility;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Microsoft.SharePoint.Client;
using OfficeDevPnP.Core.Framework.Provisioning.Model;
using System.Xml.Linq;

namespace MyTest.Providers
{
    public class CustomProvider : IProvisioningExtensibilityProvider
    {
        public void ProcessRequest(ClientContext ctx, ProvisioningTemplate template, string configurationData)
        {
        }
    }
}

Then, you need to adjust the XML generated by the provisioning engine and, if it’s not already there, add a custom <pnp:Providers> section. The <pnp:Providers> section needs to be placed within the <pnp:ProvisioningTemplate> section, and although the exact position doesn’t really seem to matter, we place it close to the end of the <pnp:ProvisioningTemplate> section. Each <pnp:Provider> element within it needs two attributes:

– Enabled, this is true or false and allows you to temporarily disable a custom provider.

– HandlerType, which expects the fully qualified name of the code that will be executed once the provisioning engine comes across this XML. It expects the following info: {namespace + class name of the extensibility provider}, {assembly name}, {version}, {public key token, if the assembly is strong named}.

Within the <pnp:Configuration> section of a provider, you can place anything you like as long as it’s valid XML. The following XML fragment is a minimal provisioning template that just creates a web property bag entry and executes a custom extensibility provider:

<?xml version="1.0"?>
<pnp:Provisioning xmlns:pnp="http://schemas.dev.office.com/PnP/2015/12/ProvisioningSchema">
  <pnp:Preferences Generator="OfficeDevPnP.Core, Version=2.2.1603.0, Culture=neutral, PublicKeyToken=3751622786b357c2" />
  <pnp:Templates ID="CONTAINER-TEMPLATE-[GUID]">
    <pnp:ProvisioningTemplate ID="TEMPLATE-[GUID]" Version="1">
      <pnp:Providers>
        <pnp:Provider Enabled="true" HandlerType="MyTest.Providers.CustomProvider, MyTest, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null">
          <pnp:Configuration>
            <MyProviderConfiguration id="SampleConfig" xmlns="http://schemas.loisandclark.eu/MyProviderConfiguration">
              <ChildNode Attribute="value">TextContent</ChildNode>
            </MyProviderConfiguration>
          </pnp:Configuration>
        </pnp:Provider>
      </pnp:Providers>
      <pnp:PropertyBagEntries>
        <pnp:PropertyBagEntry Key="lois" Value="clark" Overwrite="true" />
      </pnp:PropertyBagEntries>
    </pnp:ProvisioningTemplate>
  </pnp:Templates>
</pnp:Provisioning>

You can’t exert fine grained control over the exact execution point in the provisioning pipeline, but all extensibility providers are executed sequentially and almost at the end of the provisioning pipeline. Currently, only WebSettings (containing settings for the current web site such as a SiteLogo and Master page URL, see https://github.com/OfficeDev/PnP-Provisioning-Schema/blob/master/ProvisioningSchema-2015-12.md#websettings) and PersistTemplateInfo (info about the provisioning template that gets persisted in a web property bag entry) are executed after your extensibility providers.

So what’s left to do is provide an implementation of the ProcessRequest() method of the extensibility provider. It gets passed the SharePoint context and has access to the XML in the custom <pnp:Configuration> section. Your code will have to process that config info and use the current web to do something useful. The following code is a valid implementation:

using OfficeDevPnP.Core.Framework.Provisioning.Extensibility;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Microsoft.SharePoint.Client;
using OfficeDevPnP.Core.Framework.Provisioning.Model;
using System.Xml.Linq;

namespace MyTest.Providers
{
    public class CustomProvider : IProvisioningExtensibilityProvider
    {
        public void ProcessRequest(ClientContext ctx, ProvisioningTemplate template, string configurationData)
        {
            ClientContext clientContext = ctx;
            Web web = ctx.Web;
            // configurationData contains the inner XML of the <pnp:Configuration> section.
            string configurationXml = configurationData;
            // This namespace must match the xmlns of the configuration XML in the template.
            XNamespace ns = "http://schemas.loisandclark.eu/MyProviderConfiguration";
            XDocument doc = XDocument.Parse(configurationXml);
            string id = doc.Root.Attribute("id").Value;
            var childNode = doc.Root.Descendants(ns + "ChildNode").FirstOrDefault();
            if (childNode != null)
            {
                string innerValue = childNode.Value;
                string attr = childNode.Attribute("Attribute").Value;
            }
        }
    }
}

In conclusion: the extension points in the provisioning process aren’t exactly great, but extending it is easy to do, and at least you get the correct SharePoint context for free and have the opportunity to store all config info in a single place.

Bug in January 2016 CU for SharePoint 2013: adjusting external links in site pages

There’s a bug in the January 2016 CU where the CU erroneously updates external links. On a site page, we have a link to an external JavaScript library placed on a CDN, like this:

· //cdn.ourcompany.com/js/jquery/jquery.js

After the CU is done “fixing” it, this link has become an internal one:

· /js/jquery/jquery.js

Because of that, the page is no longer able to find the JavaScript file, and the page fails. Links that explicitly include the protocol are not molested in this way, so http://cdn.ourcompany.com/js/jquery/jquery.js remains http://cdn.ourcompany.com/js/jquery/jquery.js after CU installation. Let’s hope this bug is fixed in future updates, since we really want to leave out explicit protocols (like so: //cdn.ourcompany.com/js/jquery/jquery.js). We’d also like the CU not to try to be too smart and to stay away from the contents of our site pages. By the way, it also seems that the CU doesn’t touch similar references in page layouts.

SharePoint Debugging: Not without a trace

We’re always quite interested to see how other people try to solve SharePoint issues, and we thought it would be interesting to share a recent experience with MS support. In a case where list items got corrupted after a migration, MS support was interested in the following:

– An HTTP trace retrieved via Fiddler taken while the issue is reproduced via the browser.

– Relevant ULS log files.

– A memory dump of the SharePoint process retrieved via tttracer taken while the issue is reproduced.

To us, the latter is the most interesting. Tttracer.exe refers to the Microsoft Time Travel Tracing Tool (see http://www.thewindowsclub.com/microsoft-time-travel-tracing-diagnostic), a diagnostic tool that captures trace info and extends the WinDbg tool to load such trace files for further analysis. Tttracer allows you to select one or more specific processes on your computer and collects info about them. At a later time, MS support is able to use such trace files to go back and forth in time and diagnose SharePoint processes before, during, and after issues.

Unfortunately, tttracer is not available outside Microsoft, so it’s of no immediate use to us. However, some steps in the trace capturing process are good practices to follow anyway, such as:

1. If you’re interested in doing a memory dump, isolate a WFE that will be used for testing the issue.

2. If you’re interested in doing a memory dump, edit the host file on that WFE to ensure all SharePoint URL calls are directed to the WFE, and not to a load balancer.

3. Set ULS logging to verbose and put that info in a separate log file (via Set-SPLogLevel -TraceSeverity VerboseEx -EventSeverity Verbose and New-SPLogFile); see the sketch after these steps.

4. Reset IIS.

5. Reproduce the issue.

6. If you’re interested in doing a memory dump, find the process id of the application pool that hosts the SharePoint site where the issue occurs (by executing “%windir%\system32\inetsrv\appcmd list wps” on a command prompt).

7. Reproduce the issue.

8. Analyze all the info you retrieved.
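As a minimal sketch of step 3, assuming you run this on the isolated WFE:

Add-PSSnapin Microsoft.SharePoint.PowerShell
Set-SPLogLevel -TraceSeverity VerboseEx -EventSeverity Verbose
New-SPLogFile       # start a fresh ULS log file just before reproducing the issue
# ... reproduce the issue (steps 4-7) ...
New-SPLogFile       # rotate again so the repro sits in a single log file
Clear-SPLogLevel    # reset logging to the default levels afterwards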

We suspect your own troubleshooting routine isn’t that different, and most likely is more extensive than this, but it never hurts to compare notes!

Profiling SharePoint databases

Of course, messing with SharePoint databases is not supported, but we’ve found there are times when we do want to take a closer look at SharePoint databases and see where certain information is stored or how long an operation takes at the database level. As we don’t do this very often, we thought it would be convenient to document the procedure for profiling SharePoint databases, and that the write-up could be helpful for others.

Follow this procedure to start profiling databases:

1. Start SQL Server Profiler directly or via SQL Server Management Studio and then choose Tools > SQL Server Profiler.

2. Click File > New Trace. This opens the Connect to Server dialog window.

3. Enter the server name of the SharePoint database server or instance that you want to profile.

4. Click Connect.

5. This opens the Trace Properties dialog window.

6. Enter a valid Trace name.

7. In the Use the template drop-down list, choose TSQL_Duration. This template is especially good for finding out how long it takes to run SQL queries and stored procedures.

8. Click the Events Selection tab.

9. Select the Show all columns checkbox.

10. Check the DatabaseName column for both Stored Procedures and TSQL.

11. If you don’t know the exact name of the database(s) you want to profile, click Run.

Perform the UI actions that you want to investigate further, then click the Pause Selected Trace button. This gives you the chance to identify the names of the databases you’re interested in. Once you’ve established that, you’re ready to add a filter so that you profile only the databases you’re interested in and no more. This is a necessary step, as the number of queries executed on a SharePoint database server is quite overwhelming. Typically, but not always, you’ll be most interested in the SP_Content_* databases.

Now follow the next procedure to add some filters:

1. Click the Clear Trace Window button.

2. Click File > Properties.

3. Click the Events Selection tab.

4. Click Column Filters. This opens the Edit Filter dialog window.

5. Select DatabaseName.

6. Click Like.

7. Enter the desired database name, e.g. %Content%.

8. Click Run.

Now you have a better chance to find out what’s taking so long and where specific information is stored.

Coding for kids

Margriet wrote an interesting blog post about getting kids in contact with programming. You can read more about it over here. In the Netherlands, there’s the option to join codeuur; the article discusses a lot more options in English.

Browser chart site

Everybody needs a browser support charting site to look up whether a certain CSS, JavaScript, or HTML5 feature is supported, because it saves tons of time. We kinda like this one: http://caniuse.com/. It allows you to check whether a feature is supported in a heartbeat, it lets you compare multiple browsers and versions with each other, and it shows insights into usage info for your country!