Whenever you add a new feature or fix a bug, the resulting code usually does the job, but it rarely looks great on the first attempt. There is nothing wrong with the code, at least in terms of logic. Yet if you don’t brush up your code, you will become slower and slower because of all the hard-to-maintain quick fixes and hacks. Improving code without changing its external behavior is called refactoring.

In a SharePoint context, you typically want to rename a site column when the users tell you they have found a better name for it. You can do so using the update mechanisms from a previous installment of this series. But renaming the column itself is only one part of the job. There are usually quite a few places where you access this column in your code. First, you can read and write the column using the indexer of SPListItem. Second, you can search for this column using CAML. In both cases, the column name is a string. Refactoring tools such as those integrated in Visual Studio or added by ReSharper do a great job when you rename .NET entities such as classes or properties, but they do not understand the meaning of string constants. This poses a risk for rename operations, as you might miss some occurrences of the string. Your automated tests will catch them, but this is less comfortable than an automated rename operation that simply works.

.NET 3.5 introduced a solution for this: Language Integrated Query, or LINQ, allows you to write queries based on .NET entities. For SQL databases, the Entity Framework additionally maps the SQL structures to .NET classes, so you can access your fields through class properties. This enables refactoring support, not only for renaming but for many structural changes.
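To make the problem concrete, here is a minimal sketch of both string-based access paths. The list and the column name “OrderDate” are hypothetical, and the code assumes a running SharePoint farm, so it is not runnable standalone:

```csharp
using System;
using Microsoft.SharePoint;

// Both accesses below depend on the literal string "OrderDate"
// (a hypothetical column name); a rename refactoring will not find them.
void TouchOrders(SPList list)
{
    // 1. Reading and writing through the SPListItem indexer:
    SPListItem item = list.Items[0];
    DateTime orderDate = (DateTime)item["OrderDate"];
    item["OrderDate"] = orderDate.AddDays(1);
    item.Update();

    // 2. Querying through CAML; the column name is again a plain string:
    SPQuery query = new SPQuery();
    query.Query = "<Where><Geq><FieldRef Name='OrderDate' />" +
                  "<Value Type='DateTime'>2011-01-01T00:00:00Z</Value></Geq></Where>";
    SPListItemCollection results = list.GetItems(query);
}
```

If you rename the column to, say, “ShippingDate”, both the indexer access and the CAML query keep compiling and fail only at runtime.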
SharePoint 2010 includes LINQ-to-SharePoint. Its interface is independent of the SharePoint object model and relies on a custom class hierarchy, which is generated by spmetal.exe from an existing SharePoint instance. This is problematic in several ways. First, spmetal has a few bugs and occasionally creates broken class mappings. You can fix the generated source code manually, but the next time the SharePoint structure changes, you have to generate the classes again and apply all the manual fixes a second time. Second, LINQ-to-SharePoint itself exhibits some strange behavior, such as omitting the time part in CAML queries for DateTime values: it simply does not emit the required IncludeTimeValue="TRUE" attribute. You cannot work around this issue, because you cannot use your own CAML code in LINQ-to-SharePoint queries; you have to fall back to the standard SharePoint object model instead. This is also the reason why you cannot migrate to LINQ-to-SharePoint incrementally: you have to switch whole code blocks at once.
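As a sketch, a LINQ-to-SharePoint query against spmetal-generated entities might look like this. OrdersDataContext, its Orders list, and the site URL are hypothetical names standing in for whatever spmetal generates for your site:

```csharp
using System;
using System.Linq;
using Microsoft.SharePoint.Linq;

// OrdersDataContext and its Orders property are assumed to have been
// generated by spmetal.exe; all names here are hypothetical.
using (var context = new OrdersDataContext("http://server/site"))
{
    DateTime cutoff = DateTime.Now.AddHours(-1);
    var recent = from order in context.Orders
                 where order.Created >= cutoff   // translated to CAML
                 select order;

    // Pitfall described above: the generated CAML omits
    // IncludeTimeValue="TRUE", so the comparison is effectively
    // truncated to the date part and matches too many items.
    foreach (var order in recent)
        Console.WriteLine(order.Title);
}
```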
You don’t have to rely on LINQ-to-SharePoint to profit from LINQ. The SharePoint object model is powerful enough to support all the required operations, so you can add another layer on top of it, just as LINQ-to-SharePoint does. We suggest a thin layer, similar to what micro-ORMs such as Dapper, Massive, or LINQ-to-SQL add on top of SQL. Using re-linq, you can translate LINQ to CAML with very few lines of code. Mapping classes to SPListItem using reflection is equally simple, and as a result you get an easy-to-work-with LINQ layer that interoperates cleanly with the object model. Refrain from adding more functionality such as object tracking: it is usually not worth the effort, it increases complexity, and it requires your colleagues to learn new concepts. Simply translating LINQ to CAML and SPListItem to custom classes is easy to understand and does the job.
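A thin layer of this kind could expose queries like the following. The Order class and the AsQueryable&lt;T&gt;() extension method are hypothetical, standing in for a re-linq-based query provider that emits CAML and maps fields via reflection:

```csharp
using System;
using System.Linq;
using Microsoft.SharePoint;

// A plain class whose properties are mapped to SPListItem fields by
// reflection (here: property name matches the internal field name).
public class Order
{
    public string Title { get; set; }
    public DateTime OrderDate { get; set; }
}

// Hypothetical usage: AsQueryable<T>() wraps an SPList in a re-linq
// based provider that translates the expression tree into a CAML
// <Where> clause and materializes the resulting SPListItems as Orders.
var bigOrders = from order in ordersList.AsQueryable<Order>()
                where order.OrderDate >= new DateTime(2011, 1, 1)
                orderby order.OrderDate
                select order;
```

Because OrderDate is now a regular property, the rename refactoring in Visual Studio or ReSharper finds every usage, which is exactly the point of the exercise.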
We developed and implemented such a data access layer at adesso. We might publish the implementation, be it open or closed source, but this is still undecided. If you are interested, please leave a comment. I am not the one who decides, but we are actively seeking opinions, so you will actually influence the outcome.