Continuous Lifecycle 2013: Continuous Deployment so einfach wie möglich

I presented “Continuous Deployment so einfach wie möglich” at the Continuous Lifecycle 2013 in Karlsruhe on 2013-11-12:

Automated continuous updates of web applications are very convenient in theory, but in practice they run into one difficulty or another. Using two ASP.NET projects as an example, and drawing on six years of experience, the speaker shows how to build and maintain a minimalistic infrastructure of your own for this purpose. He covers the complete path from the developer's check-in through build and tests to updating the web application on your own IIS, including the database schema and configuration files.

Slides are available here: Continuous Deployment so einfach wie möglich (PDF).

HttpWebRequest Cookie Handling

When I tried to automate access to the web-based UI of the Dell DRAC 5 remote access controller, I stumbled upon a strange issue. System.Net.WebClient does not handle cookies on its own. You can force it to do so by overriding GetWebRequest() and GetWebResponse(). There you have access to the HttpWebRequest and HttpWebResponse objects and their CookieContainer and Cookies properties.

But given the nature of the DRAC 5, this is not enough. By default, HttpWebRequest follows 302 (Found) redirect responses and loads the next page automatically. However, it does not store the cookies set on the intermediate 302 response. So you also have to disable AllowAutoRedirect to get a chance to save the cookies yourself.
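The WebClient variant can be sketched like this (a minimal illustration, not the DracWake code; the class and member names are my own):

```csharp
using System;
using System.Net;

public class CookieAwareWebClient : WebClient
{
    private readonly CookieContainer _cookies = new CookieContainer();

    public CookieContainer Cookies { get { return _cookies; } }

    protected override WebRequest GetWebRequest(Uri address)
    {
        var request = base.GetWebRequest(address);
        var httpRequest = request as HttpWebRequest;
        if (httpRequest != null)
        {
            // share one cookie container across all requests made by this client
            httpRequest.CookieContainer = _cookies;
            // disable auto-redirects so cookies set on 302 responses are not lost
            httpRequest.AllowAutoRedirect = false;
        }
        return request;
    }
}
```

With AllowAutoRedirect disabled, you read the Location header and the cookies from the 302 response yourself and issue the follow-up request manually.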

You can have a look at the code (using HttpWebRequest directly) in the GitHub repository of DracWake.

Windows 8 is just fine

I have been using Windows 8 for software development (web and console applications) on a daily basis for two weeks now. At first I hesitated, because everyone says the new GUI is unusable on non-touch devices such as my notebook. However, it didn’t change my workflow at all. I pin all the programs I usually use to the task bar, just as I do in Windows 7. And for all the programs I rarely use, I press the Windows key and type the program name, just as I do in Windows 7. The screen looks different, with all the fancy rectangles instead of the start menu, but input and result are the same, so I don’t care. And when I’m done, I close the lid of my notebook. All in all, I barely notice whether I’m on my Windows 8 or my Windows 7 machine.

So why should you switch to Windows 8? I don’t know. I did because I wanted to evaluate it on my own. Apart from this, there is neither a real benefit nor a drawback. I will continue to use it, but I won’t upgrade my other machines.

SharePoint: Refactoring

Whenever you add a new feature or fix a bug, the resulting code usually does the job, but it rarely looks great on the first attempt. There is nothing wrong with the code in terms of logic. Yet if you don’t brush up your code, you will become slower and slower because of all the hard-to-maintain quick fixes and hacks. Improving code without changing its external behavior is called refactoring.

In a SharePoint context, you typically want to rename a site column when the users tell you they have found a better name. You can do so using the update mechanisms from a previous installment of this series. But renaming the column itself is only one part of the job. There are usually quite a few places where your code accesses this column: you can read and write it using the [] operator of SPListItem, and you can search for it using CAML. In both cases, the column name is a string. Refactoring tools, whether integrated in Visual Studio or plugged in by ReSharper, do a great job when you rename .NET entities such as classes or properties, but they do not understand the meaning of string constants. This makes rename operations risky, as you might miss some instances of the string constants. Your automated tests will catch them, but this is less comfortable than an automated rename operation that simply works.

.NET 3.5 introduced a solution for this. Language Integrated Query (LINQ) allows you to write queries based on .NET entities. For SQL databases, the Entity Framework maps the SQL structures to .NET classes, so you can access your fields through class properties. This enables refactoring support, not only for renaming but for many structural changes.
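To illustrate the problem: both common access paths hide the column name in a string (the column and value names here are made up):

```csharp
// read/write via the indexer of SPListItem:
item["ProjectName"] = "Rewrite billing";

// query via CAML:
var query = new SPQuery
{
    Query = "<Where><Eq><FieldRef Name='ProjectName'/>" +
            "<Value Type='Text'>Rewrite billing</Value></Eq></Where>"
};
```

A rename refactoring on a C# class or property touches neither of these strings.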

SharePoint 2010 includes LINQ-to-SharePoint. Its interface is independent of the SharePoint object model and relies on a custom class hierarchy, which is generated by spmetal.exe from an existing SharePoint instance. This is problematic in several ways. First, spmetal has a few bugs and occasionally creates broken class mappings. You can fix the generated source code manually, but the next time the SharePoint structure changes, you have to generate the classes again and apply all the manual fixes a second time. Second, LINQ-to-SharePoint itself exhibits some strange behavior, such as omitting the time part in CAML queries for DateTime: it simply does not emit the required IncludeTimeValue="TRUE" attribute. You cannot work around this issue, since you cannot use your own CAML code in LINQ-to-SharePoint queries; you have to fall back to the standard SharePoint object model. This is also the reason why you cannot migrate to LINQ-to-SharePoint incrementally: you have to switch whole code blocks at once.

You don’t have to rely on LINQ-to-SharePoint to profit from LINQ. The SharePoint object model is powerful enough to support all the operations, so you can add another layer on top of it, just like LINQ-to-SharePoint does. We suggest a thin layer, similar to what micro-ORMs such as Dapper, Massive, or LINQ-to-SQL add on top of SQL. Using re-linq, you can translate LINQ to CAML with very few lines of code. Mapping classes to SPListItem using reflection is equally simple, and as a result you get an easy-to-use LINQ layer that is perfectly interoperable with the object model. Refrain from adding more functionality like object tracking: it is usually not worth the effort, increases the complexity, and requires your colleagues to learn new concepts. Simply translating LINQ to CAML and SPListItem to custom classes is easy to understand and does the job.
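Usage of such a thin layer could look roughly like this (a sketch; GetItems&lt;T&gt; and the ProjectTask class are hypothetical names, not an existing API):

```csharp
public class ProjectTask
{
    public string Title { get; set; }
    public DateTime DueDate { get; set; }
}

// the thin LINQ provider translates this expression into a CAML query,
// including the IncludeTimeValue="TRUE" attribute for the DateTime comparison:
var overdue = from task in web.GetItems<ProjectTask>("Tasks")
              where task.DueDate < DateTime.Now
              select task.Title;
```

Renaming DueDate is now an ordinary C# rename refactoring, and the CAML is generated accordingly.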

We developed and implemented the data access layer at adesso. We might publish the implementation, be it open or closed source, but this is still undecided. If you are interested in this, please leave a comment. I am not the one who decides, but we are actively seeking opinions on this, so you will actually influence the outcome.

SharePoint: Dependencies

Direct access to resources has a negative impact on testability. Take the SharePoint log as an example: it is hard to test automatically whether your code writes appropriate log entries. First you have to parse the log files, and second you have to find the right moment to do so, as the log is written asynchronously. Access to external web services is a second source of confusion, since they are usually outside the scope of a test environment and therefore neither controlled nor reproducible. Instead of accessing these resources directly, you can define an interface for the resource and program against that. Then you can use stubs or mocks during tests.
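For the log example, such an interface and a test double might look like this (a sketch; ILog and the class names are invented, not SharePoint APIs):

```csharp
using System.Collections.Generic;

public interface ILog
{
    void Write(string category, string message);
}

// The production implementation would delegate to the SharePoint
// diagnostics service; it is omitted here.

// Test double: records entries so a test can assert on them synchronously,
// without parsing log files or waiting for asynchronous writes.
public class InMemoryLog : ILog
{
    public readonly List<string> Entries = new List<string>();

    public void Write(string category, string message)
    {
        Entries.Add(category + ": " + message);
    }
}
```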

But how do you know whether you run in the context of a test? You don’t have to. Avoid creating instances of collaborating resources inside your implementation. Let someone on the outside do this for you. 

The Microsoft Patterns & Practices group recommends the Service Locator pattern for this. A Service Locator is a class with static properties where someone stores a resource and someone else retrieves it later. Outside the SharePoint world, this is well known as an anti-pattern. First, the usage of resources is not transparent: you cannot see which resources a class needs without looking at its implementation. This yields surprising errors in tests when the implementation changes. If a reference to an additional resource is added, the test still compiles, but no one adds the respective implementation to the Service Locator, and the test fails at runtime. Second, all resources have to be re-initialized before each test run, because you don’t know whether the previous tests changed something.

Outside of SharePoint, the most common solution is dependency injection. The simplest way is to pass all resources as arguments to the constructor. You can also inject resources into properties of the object if you have no control over its construction. This way each class clearly states its dependencies, and the external code is responsible for satisfying them. You usually use a dependency injection container such as Ninject: you register all kinds of resources with your container, and the container resolves the dependency chains at runtime. For automated tests you can also use a combination of AutoFixture and Moq to resolve all dependencies with mock objects.
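Constructor injection itself needs no framework; a minimal sketch (all names are invented for illustration):

```csharp
public interface IMailSender
{
    void Send(string recipient, string subject);
}

public class ReminderService
{
    private readonly IMailSender _mail;

    // the dependency is stated explicitly; no hidden Service Locator lookups
    public ReminderService(IMailSender mail)
    {
        _mail = mail;
    }

    public void RemindOverdue(string user)
    {
        _mail.Send(user, "Your tasks are overdue");
    }
}

// test double for automated tests
public class RecordingMailSender : IMailSender
{
    public string LastRecipient;
    public string LastSubject;

    public void Send(string recipient, string subject)
    {
        LastRecipient = recipient;
        LastSubject = subject;
    }
}
```

With a container such as Ninject, you would register the binding once (`kernel.Bind<IMailSender>().To<SmtpMailSender>()`, with SmtpMailSender being your production implementation) and let the kernel build the object graph.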

SharePoint does not support dependency injection by default. Controls, pages, and all other framework objects are created somewhere inside SharePoint, with no way to inject additional parameters into the constructor. The same issue appears in ASP.NET, so you can have a look at the solutions there. Ninject.Web provides base classes for property-based dependency injection. You can do the same for SharePoint base classes, which call Inject(this) in their constructor.
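Such a base class could look roughly like this (a sketch following the Ninject.Web pattern; a static KernelContainer holding the application-wide kernel is assumed, and ILogger stands for any registered service):

```csharp
// SharePoint creates the web part itself, so properties marked with
// [Inject] are filled from the kernel in the constructor instead.
public abstract class InjectingWebPart : System.Web.UI.WebControls.WebParts.WebPart
{
    protected InjectingWebPart()
    {
        KernelContainer.Inject(this);
    }
}

public class TaskOverviewWebPart : InjectingWebPart
{
    [Inject]
    public ILogger Logger { get; set; }
}
```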

We developed and implemented the dependency injection helper classes at adesso. We might publish the implementation, be it open or closed source, but this is still undecided. If you are interested in this, please leave a comment. I am not the one who decides, but we are actively seeking opinions on this, so you will actually influence the outcome.

SharePoint: Isolating Test Code

The Visual Studio templates for SharePoint projects default to GAC deployment, which means that all the code you write is installed to the Global Assembly Cache (GAC). At first sight this seems pretty useful: no matter from which application you want to access your assemblies, they will be found automatically. SharePoint runs your code in different contexts, from IIS web applications to timer jobs, so this has actual value.

But the GAC is no silver bullet, and you will notice that when you start writing tests. .NET prefers to load assemblies from the GAC, no matter what. If you deploy your class library to the GAC in your WSP, then change the source code and run the tests, .NET will load the previous version from the GAC, even if you updated the assembly your test is linked against. This can happen in more or less obvious ways. If you added a new class method, you will get a runtime linker error. It might not be obvious that the outdated version in the GAC is the cause, but at least it is beyond question that something went wrong. If you only changed the implementation, this can lead to headaches: you can step through your code in the debugger, but it does not behave as written there. You recompile, and on the next run it still does things you thought you had fixed a few minutes ago.

You have four alternatives to deal with the situation. First, you can always retract your WSPs before running the tests. This is annoying, and it fails when you need it most: Given a tricky bug in your business logic, you usually think about nothing else. This is when you forget to retract your WSPs, which in turn leads to strange behavior, which leads to even more quirks in your mental model of the business logic.

The second solution is to use assembly versioning. If you link against a specific version of your assembly, the outdated version in the GAC will be ignored. This approach has the same drawback as the previous one: you will forget it. And even if you have enough discipline to update the assembly version whenever you change your code, you will run into conflicts when your colleagues work on the same assembly. As soon as you check your code into version control, you have a conflict. It is easy to resolve, but it is annoying.

The third way is to install your class library to the GAC whenever you update it. This works on your local developer machine, but as soon as you start building different releases on a continuous integration machine, you will get conflicts when the different versions overwrite each other in the GAC. Additionally you have to deal with the insufficient robustness of gacutil.exe from the Windows SDK. When you call it too often in a row, it will start throwing errors about insufficient rights. Adding pauses can resolve this, but pauses are what developers hate the most during test-driven development.

So basically you have to find a way to avoid the GAC altogether, the fourth alternative. The first step is to separate your code into a SharePoint interface and the actual program logic. The SharePoint interface should be as minimalistic as possible; it should only delegate calls to the testable program logic, which is compiled as a separate class library. Now the SharePoint interface has to be able to find the logic. You cannot use the GAC, since you would run into the previously described problems. A common solution for stand-alone applications is to merge the assemblies using ILMerge. This enables separate tests of the logic, while only a single assembly is deployed. This works fine for a single closed program, but it breaks down when multiple SharePoint solutions refer to the same assemblies. When a class library is merged into multiple assemblies, .NET treats the code as separate copies. Therefore all types are multiplied, and accordingly all static variables are multiplied. This confuses even experienced developers.
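The split can be sketched like this (names invented for illustration):

```csharp
using System;

// Testable program logic in a separate class library, free of SharePoint types.
public static class TaskLogic
{
    public static bool IsOverdue(DateTime dueDate, DateTime now)
    {
        return dueDate < now;
    }
}

// The SharePoint-facing wrapper (in the WSP assembly) only delegates:
//
// public class TaskWebPart : WebPart
// {
//     protected override void Render(HtmlTextWriter writer)
//     {
//         writer.Write(TaskLogic.IsOverdue(GetDueDate(), DateTime.Now));
//     }
// }
```

The logic assembly can be tested without any SharePoint deployment; only the thin wrapper needs to be installed.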

Now we know that we cannot put the program logic in the GAC, and we cannot merge it into the SharePoint interface assemblies. Looking at the project settings in Visual Studio, you see that there is a second deployment target: the bin folder of the web application. This has implications for code security. All assemblies in the GAC run with full trust by default, while all assemblies in the bin folder are subject to the trust level specified in the web.config, which defaults to WSS_Medium. This might be sufficient, but depending on your code, you may have to elevate the rights for your assembly. Talk to the admins of the SharePoint farm your code is going to be deployed to and agree on an acceptable trust level and CAS policy. On your local machine you can also set the trust level to Full to get the same rights you would have in the GAC.

SharePoint finds assemblies in the bin folder automatically for all code running in the context of the web application, for example WebParts. But it won’t take long until you notice that feature event receivers activated during the Visual Studio F5 deployment don’t look into the bin folder: vssphost4.exe simply does not know about it. The same is true for all other SharePoint-related non-web processes, such as timer jobs. The solution is to tell .NET about the bin folder. When .NET is unable to find an assembly, the AssemblyResolve event of System.AppDomain is raised. You can add a handler and try to load the assembly from the bin folder:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using Microsoft.SharePoint.Administration;

public static class AssemblyResolver
{
  public static void RegisterResolver(SPWebApplication webApplication)
  {
    RegisterResolver(webApplication.IisSettings[SPUrlZone.Default].Path.ToString());
  }

  public static void RegisterResolver(string localWebApplicationPath)
  {
    AppDomain.CurrentDomain.AssemblyResolve +=
      (sender, args) => AssemblyResolve(args, localWebApplicationPath);
  }

  private static Assembly AssemblyResolve(ResolveEventArgs args, string localWebApplicationPath)
  {
    var name = new AssemblyName(args.Name);
    var files = System.IO.Directory.GetFiles(localWebApplicationPath + "/bin");
    return (from file in files
            where System.IO.Path.GetFileNameWithoutExtension(file) == name.Name
            select Assembly.LoadFrom(file))
           .FirstOrDefault(assembly => assembly.GetName().FullName == name.FullName);
  }
}

The only thing you need is a reference to the respective web application. In feature event receivers, you can get it via the feature parent object. In item event receivers, you can get it via the parent list of the item and its parent web.
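In a web-scoped feature event receiver, this could look like the following (a sketch; the cast of Feature.Parent depends on the feature scope):

```csharp
public class BinFolderFeatureReceiver : SPFeatureReceiver
{
    public override void FeatureActivated(SPFeatureReceiverProperties properties)
    {
        // for a web-scoped feature, the parent is the SPWeb
        var web = (SPWeb)properties.Feature.Parent;
        AssemblyResolver.RegisterResolver(web.Site.WebApplication);

        // from here on, types from the bin folder can be resolved
    }
}
```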

You have to deploy the assembly resolver itself to the GAC. And you have to add the hook before .NET tries to invoke a method that references a type from the bin folder. This is important, since .NET ensures that all types are available when the method is entered, not when the actual line of code is reached. Using proxy classes for event receivers and similar SharePoint classes is a good way to hide all of this from developers.

We developed and implemented the proxy classes at adesso. We might publish the implementation, be it open or closed source, but this is still undecided. If you are interested in this, please leave a comment. I am not the one who decides, but we are actively seeking opinions on this, so you will actually influence the outcome.

SharePoint: Isolating Test Data

If you extract the change sets described in the previous installment of this series into a separate installer class, you can use it to create data structures without installing WSPs. This enables automated integration tests for the installer itself and for code based on the resulting data structures. Automated tests should be reproducible: when you run them twice without changing the code in between, each run should yield the same result. If you run tests against the SharePoint object model, you will notice that SharePoint persists your changes between two test runs. This means that two consecutive runs can differ if the second run evaluates data from the first run. Looking at SQL doesn’t help in this case. Most SQL servers support transactions, the basic tool for integration tests against databases: you start a transaction during the test setup and perform a rollback in the tear-down method, which leaves the database effectively untouched. SharePoint does not support transactions, so this route is closed.

A common solution is to rely on mock objects. If you run your tests using mocks, your data won’t reach SharePoint and won’t be persisted. This is feasible when the system under test is the business logic. But in many cases, the integration with SharePoint is more critical than the business logic itself. The object model exhibits some strange behavior which you probably won’t mirror in your mock objects. Take the SPWeb class as an example: when you create an instance, then add a new user-defined field type and look at the field types exposed by your SPWeb instance, you will see the old list, not including your new type. Somewhere deep inside SPWeb this list is cached, and you cannot influence it. Similar behavior can be observed for the Properties list. This can result in hard-to-find bugs. The second important source of errors hidden by mocking is the invisible dependency chain. Switch on forms-based authentication, and SPWeb.EnsureUser will actually require a web.config with the appropriate settings for System.Web.Security.Roles.Providers. Although this is reasonable given the nature of forms-based authentication, it is a source of confusion, since the same code runs fine in a web context and fails in console applications or automated tests. Given these drawbacks, mocking the SharePoint object model should be handled with care.

Another source of inspiration is unit tests. Main memory doesn’t support transactions either, yet unit tests run isolated from each other due to the lack of persistence: new main memory is allocated for each test run, and even if the physical bytes are reused, they are logically unused. You can mirror this by creating a new environment for each test run. Similar to memory-based unit tests, this environment is used once and then discarded. SharePoint provides different levels of isolation: you can create a new farm, a new web application, or a new site collection. Creating a new farm provides perfect isolation, but takes so many resources that it is not feasible in practice. New site collections provide isolation of lists and content types, but share installed solutions, user-defined field types, and the like. Web applications fall somewhere in between. We prefer one site collection per test, since site collections are relatively cheap to create and sufficient in many cases, while creating a new web application is orders of magnitude slower.

You gain another order of magnitude in execution speed when you pre-allocate the test site collections. A Windows service in the background can ensure that there are always a few dozen site collections ready to be used as testing environments. Each test run then takes one of them (if available), marks it as used, and deletes it when it’s done:

public SPSite GetSite()
{
  var site = UnusedSites.FirstOrDefault() ?? CreateSite();
  site.RootWeb.AllProperties["IsUsed"] = true;
  return site;
}

private static SPSite CreateSite()
{
  var site = WebApplicationHelper.CreateSite();
  site.RootWeb.AllProperties["IsReady"] = true;
  return site;
}

It is not trivial to determine whether a usable site collection exists. SharePoint likes throwing exceptions when you access a site collection during its creation or deletion:

public static IEnumerable<SPSite> UnusedSites
{
  get { return TryFilter(s => !IsUsed(s) && IsReady(s)); }
}

private static IEnumerable<SPSite> Sites
{
  get { return WebApplicationHelper.WebApplication.Sites; }
}

private static IEnumerable<SPSite> TryFilter(Func<SPSite, bool> filter)
{
  foreach (var site in Sites)
  {
    bool matches;
    try
    {
      // accessing a site collection during its creation or deletion may throw
      matches = filter(site);
    }
    catch (Exception)
    {
      continue;
    }
    if (matches)
      yield return site;
  }
}

private static bool? TryParseBool(object value)
{
  if (value == null)
    return null;
  bool result;
  if (bool.TryParse(value.ToString(), out result))
    return result;
  return null;
}

private static bool IsReady(SPSite site)
{
  return TryParseBool(site.RootWeb.AllProperties["IsReady"]) ?? false;
}

private static bool IsUsed(SPSite site)
{
  return TryParseBool(site.RootWeb.AllProperties["IsUsed"]) ?? false;
}

If a test run fails in a way that the tear-down method is not reached, for example when you stop a run in the debugger, the site collection won’t get deleted. You can add a garbage collector to the Windows service to remove these zombie site collections:

public static IEnumerable<SPSite> ZombieSites(TimeSpan timeOutReady, TimeSpan timeOutUsed)
{
  return TryFilter(site => IsZombie(site, timeOutReady, timeOutUsed));
}

private static bool IsZombie(SPSite site, TimeSpan timeOutReady, TimeSpan timeOutUsed)
{
  var age = DateTime.UtcNow - site.LastContentModifiedDate;
  return (!IsReady(site) && age > timeOutReady) ||
         (IsUsed(site) && age > timeOutUsed);
}

Using this cache reduces the overhead for each test run to about 300 ms. This is huge when compared to unit tests. On the other hand, it is fast enough to encourage developers to write and run a few tests for the code they are working on, probably even using test-driven development.

We developed and implemented the site collection cache at adesso. We might publish the implementation, be it open or closed source, but this is still undecided. If you are interested in this, please leave a comment. I am not the one who decides, but we are actively seeking opinions on this, so you will actually influence the outcome.

Using .less in ASP.NET MVC4

Update [2012-06-02]: Visual Studio 2012 RC includes a new bundle mechanism that supports chained transforms. Scott Hanselman posted an example that includes .less.

ASP.NET MVC 4 (beta) supports post-processed bundles, which are especially useful for CSS style sheets and JavaScript. MVC 4 ships with minifiers for CSS and JavaScript, but you can also plug in your own post-processor. This is where you can hook in .less, using the dotlessClientOnly NuGet package:

public class LessMinify : IBundleTransform
{
	private readonly CssMinify _cssMinify = new CssMinify();

	public void Process(BundleContext context, BundleResponse response)
	{
		if (context == null)
			throw new ArgumentNullException("context");
		if (response == null)
			throw new ArgumentNullException("response");
		response.Content = dotless.Core.Less.Parse(response.Content);
		_cssMinify.Process(context, response);
	}
}

Now you can create bundles with your own minifier:

var styles = new Bundle("~/Content/myStyle", new LessMinify());
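To make the bundle available, add your .less files and register the bundle, for example in Global.asax (a sketch against the beta-era API; the file names are placeholders):

```csharp
var styles = new Bundle("~/Content/myStyle", new LessMinify());
styles.AddFile("~/Content/site.less");
BundleTable.Bundles.Add(styles);
```

Note that the registration API of System.Web.Optimization changed in later releases, so check the version you are using.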

SharePoint: Updating Data Structures

One of the most problematic aspects of change requests in SharePoint projects is the update mechanism. You have WebParts written in C# and the accompanying .webpart files. These are based on data stored in lists, which in turn are based on a declarative XML specification. The lists themselves refer to content types, also specified in XML. Content types rely on site columns, found in a third kind of XML file. The data is formatted using CSS files, and all of this is activated according to an XML-based feature. Your solution is already deployed, and your users have filled it with actual production data. And then they notice that something has to change. You have to update the whole solution.

SharePoint ships with an update mechanism for solutions, accessible via the PowerShell cmdlet Update-SPSolution. How does it work? I can’t tell for sure. Handling all the different kinds of declarative data structures is difficult and sometimes counterintuitive. Take the feature definitions: when you add a new feature to your solution, it is simply ignored by Update-SPSolution; you have to add a new solution package instead. This leads to solutions with technical feature names such as “Web” and “Site”, features which contain everything used in that scope. Most users don’t like these features; they prefer names from their domain language over obscure technical terms. This is something many developers have learned to work around. But there are more complex problems. Say a content type has a field called A, and this field has to be renamed to B. If you update the respective XML files, how does Update-SPSolution handle this? Does it rename the field? Does it remove A and add B? Does it do anything at all? It is this kind of uncertainty that leads many developers to one-shot solutions, devoid of any future updates. You could look it up for any special case, but in general you won’t find a satisfying answer. Retracting and redeploying the solution is no alternative, because this would delete all the existing user data.

SharePoint is not the only platform that has to deal with mutable data structures. Let’s have a look at SQL. SharePoint isn’t a relational database management system, but SharePoint lists and SQL tables are close enough conceptually to draw inspiration. It is important to note that the declarative list definitions are somewhat comparable to CREATE TABLE, while the SharePoint object model can also be used to mimic ALTER TABLE. Tools like Liquibase and dbdeploy help SQL developers update data structures without losing user data. Their basic concept is the change set: a change set contains all changes from one version of the database to the next. Applied incrementally to an empty database, the change sets finally reproduce the current data structure. You can also use them to update existing databases: if your production database runs at version 15 and you install an update including the change sets up to version 22, only the sets 16 to 22 are applied. Change sets explicitly specify the required change, not the resulting structure. This allows renaming fields without losing data, because the change set makes it clear that A is renamed to B, in contrast to removing A and adding B. While the resulting structures would be the same, the first case retains the user data while the second one yields an empty column.

The concept of change sets can be also implemented in SharePoint. The object model provides everything you need. The most comfortable way for developers is to add extension methods to SPList etc. using a fluent syntax:

SPList list;
SPContentType contentType;

list.Edit(0).AddContentType(contentType).Apply();

Edit(0) specifies that the following change set is only to be applied if the current version of the list equals 0. AddContentType specifies an operation in this change set; you could also chain further operations. The final Apply() then checks whether the current version of the list is actually 0. If it is, all operations are applied and the version is incremented by one. When you run the same code a second time, Apply() detects that the change set at hand has already been applied and ignores it. This allows for incremental updates:

list.Edit(0).AddContentType(contentType).Apply();
list.Edit(1).RemoveContentType(contentType).Apply();
In this case, the previously added content type is removed in the next step. You might think that you could simply delete the first line instead of adding a second one, so that the content type isn’t even added in the first place. But the point of this mechanism is to deal with already deployed data structures, so you can only append operations, never change the ones already deployed on production systems. The change set implementation could look like this:

using System;
using System.Collections.Generic;

public abstract class ChangeSet<T> : IChangeSet
{
  private readonly List<Action<T>> _changes = new List<Action<T>>();
  private readonly T _entity;
  private readonly int _fromVersion;
  private readonly string _contextId;

  protected ChangeSet(T entity, int fromVersion, string contextId)
  {
    _entity = entity;
    _fromVersion = fromVersion;
    _contextId = contextId;
  }

  protected ChangeSet(ChangeSet<T> changeSet)
  {
    _entity = changeSet._entity;
    _fromVersion = changeSet._fromVersion;
    _contextId = changeSet._contextId;
    _changes = new List<Action<T>>(changeSet._changes);
  }

  protected ChangeSet(ChangeSet<T> changeSet, Action<T> change)
    : this(changeSet)
  {
    _changes.Add(change);
  }

  protected abstract Uri WebUrl { get; }
  protected T Entity { get { return _entity; } }

  public void Apply()
  {
    var versionAccess = new VersionAccess(WebUrl);
    var entityVersion = versionAccess.GetVersion(_entity, _contextId);
    if (entityVersion < _fromVersion)
      throw new MissingChangeSetsException(_entity, entityVersion, _fromVersion, _contextId);
    if (entityVersion > _fromVersion)
      return; // this change set has already been applied

    foreach (var change in _changes)
      change(_entity);
    versionAccess.SetVersion(_entity, _fromVersion + 1, _contextId);
    OnPostChanges();
  }

  protected abstract void OnPostChanges();
}

You should store the version numbers in a hidden list. Unlike web properties, two different items in a list can be updated concurrently without conflicts, while web properties are always persisted as a whole.

Since you usually don’t have a central point of data access in SharePoint projects, you also don’t have a central point to manage all changes in the data structures. Therefore multiple modules in your code might want to apply change sets to the same object, for example the root web. You can deal with this by storing the version together with a context id:

list.Edit(0, contextId).AddContentType(contentType).Apply();

Given these ids, the order in which your features are activated and the updates are applied becomes irrelevant. By the way, feature event receivers are a good place to apply change sets. As you no longer have declarative data structures, your data structures won’t be removed when you retract your features. This allows you to use the retract/deploy mechanism to install updates, including all the bells and whistles of a new deployment, such as adding new features. The FeatureActivated event is then used to perform the update.
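Applied in a feature event receiver, this could look like the following (a sketch using the fluent API described above; the list and content type lookups are simplified placeholders):

```csharp
public class DataStructureUpdater : SPFeatureReceiver
{
    public override void FeatureActivated(SPFeatureReceiverProperties properties)
    {
        var web = (SPWeb)properties.Feature.Parent;
        var list = web.Lists["Projects"];
        var contentType = web.ContentTypes["Project"];

        // append-only history of change sets; Apply() skips sets already installed
        list.Edit(0, "Projects").AddContentType(contentType).Apply();
    }
}
```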

We developed and implemented the change set concept at adesso. We might publish the implementation, be it open or closed source, but this is still undecided. If you are interested in this, please leave a comment. I am not the one who decides, but we are actively seeking opinions on this, so you will actually influence the outcome.

Agile SharePoint Development

SharePoint is a strange framework. It provides you with an incredible amount of functionality, and you can customize it in many ways, by configuration and by code. Power users love it. Yet it feels like computer-science stone age to developers: when browsing the interfaces and decompiling the classes, you constantly stumble upon oddities; just think about the user data storage table. Accordingly, many SharePoint development projects use processes from the same stone age, somewhere between the waterfall model and cowboy coding. This works fine for smaller projects, but it introduces a large cost for somewhat more complex undertakings. The power SharePoint gives to its users is a rich source of ideas, and with ideas come change requests. This is where agile development processes come into play: embrace change. Users often update their mental model of their business processes with each implementation you provide. Follow their updates, and they will be happier.

Change is also a risk. You might break something. Or worse, you might break something without noticing it. This is what the following article series is all about: Reducing the risk of change in SharePoint projects.

  1. Updating Data Structures
    Your lists will have to change. Your content types will have to change. This is about how to deal with mutable data structures without losing user data.
  2. Isolating Test Data
    If you run automated tests, be sure to do so in a reproducible way. This post shows how to isolate test data to avoid interference between test runs.
  3. Isolating Test Code
    SharePoint has a special relation to the Global Assembly Cache. Get to know the implications, and learn to avoid the GAC.
  4. Dependencies
    Testing against resources such as the SharePoint Log makes it hard to verify the result. You might even introduce conflicts. Learn how to apply dependency injection to SharePoint to improve testability.
  5. Refactoring
    Tool support for refactoring C# is great. For CAML, it isn’t. Learn how to use POCOs and LINQ in a clean way, avoiding LINQ-to-SharePoint.