Saturday, April 6, 2013

Processing XML with .NET while preserving your sanity


XML faces quite a bit of criticism, ranging from accusations of being too complex to claims that it's just one of Microsoft's attempts to take over the world (it's actually an open W3C standard, but never mind). The general consensus, however, is that it does what it says on the tin, which is to transfer data between applications while allowing humans to take a peek in between. The focus of this post is that last part, the one involving human beings (stupid humans, always making programming more difficult than it needs to be!)

Consider this piece of XML:
<?xml version="1.0" encoding="UTF-8"?>
<xmldata>
  <element></element>
</xmldata>
Nice and readable, isn't it? Well, most of the time at least. Now look at it without the new lines and indentation:
<?xml version="1.0" encoding="UTF-8"?><xmldata><element></element></xmldata>
Much less so, indeed!
Everyone who has had their fair share of XML programming has faced this situation: to a parser the two documents are identical, but when we need to look at our data while debugging, the ugly, unformatted XML gets in the way. This is made worse by the .NET Framework's tendency to produce the latter variety of XML by default.

There are, of course, simple ways to make our lives easier by formatting XML the way we want it - it's just less obvious than it should be, so I decided to put this post together and shed some light on a few tricks and subtleties.

Dumping XML to a file & XmlWriter

It's a natural scenario to grab an XML file, do some processing, and save it. Here is a trivial piece of code that does that:
XmlDocument xml = new XmlDocument();
xml.Load(INPUT_PATH);
// ... XML processing code
xml.Save(OUTPUT_PATH);
This can of course easily be adapted to save the XML to a database, over the network or wherever else we need it. Most of the time it appears to work fine, until you try it on our simple input. Here is what we get:
<?xml version="1.0" encoding="utf-8"?>
<xmldata>
  <element>
  </element>
</xmldata>
Looks almost the same - but not quite: the closing tag is on a new line! Curiously, when there is an actual value between the tags no new line gets added, so the problem can be pretty hard to spot in a complex XML document where most elements contain values. That's actually what caused the bug I was troubleshooting when I got the inspiration for this post. It might not look like a big deal, but whatever sits between the opening and closing tags is our value, and we just had a new line inserted into it - for most applications a new line is quite different from an empty string!
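To see the damage in code, here is a small self-contained sketch (hypothetical - it round-trips the document through a StringWriter instead of files) showing the "empty" element coming back with whitespace inside it:

```csharp
using System;
using System.IO;
using System.Xml;

class Program
{
    static void Main()
    {
        // Save an empty element with XmlDocument's default formatting
        var doc = new XmlDocument();
        doc.LoadXml("<xmldata><element></element></xmldata>");
        var buffer = new StringWriter();
        doc.Save(buffer);

        // Load the result back, keeping whitespace, and inspect the "empty" value
        var roundTripped = new XmlDocument { PreserveWhitespace = true };
        roundTripped.LoadXml(buffer.ToString());
        string value = roundTripped.SelectSingleNode("/xmldata/element").InnerText;

        // The value is no longer an empty string - whitespace crept in
        Console.WriteLine(value.Length > 0);
    }
}
```

Any consumer that preserves whitespace will now see that element as non-empty, which is exactly the kind of bug described above.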
So then, what do we do about it?

Using XmlWriter

We have a class in the .NET Framework that's meant to give us more control over XML-exporting operations - XmlWriter. Here is the simplest possible way to use it, without supplying any explicit settings:
XmlDocument xml = new XmlDocument();
xml.Load(INPUT_PATH);
// ... XML processing code
using (XmlWriter writer = XmlWriter.Create(OUTPUT_PATH))
{
    xml.Save(writer);
}
And the result:
<?xml version="1.0" encoding="utf-8"?><xmldata><element></element></xmldata>
... i.e. exactly what we were trying to escape from. It looks like XmlWriter is not of much use by itself when it comes to formatting XML for human readability.

XmlWriterSettings.Indent

Luckily, there is the Indent property on XmlWriterSettings, which is false by default but can very easily be set to true:
// ... XML processing code
XmlWriterSettings writerSettings = new XmlWriterSettings();
writerSettings.Indent = true;
using (XmlWriter writer = XmlWriter.Create(OUTPUT_PATH, writerSettings))
{
    xml.Save(writer);
}
Leading to, finally, a well-formatted XML output!

XmlWriter with its settings object is nice and all, but there is one caveat you could hit before getting to the Indent property: there is also the NewLineHandling property, which you might be tricked into thinking would achieve our goal. In fact, it only affects new lines within the actual values between tags and doesn't apply to the new lines in the markup.
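A quick sketch to confirm the caveat (a hypothetical file-less version writing to a StringWriter): setting NewLineHandling by itself, with Indent left at its default, leaves the markup on a single line.

```csharp
using System;
using System.IO;
using System.Xml;

class Program
{
    static void Main()
    {
        var doc = new XmlDocument();
        doc.LoadXml("<xmldata><element></element></xmldata>");

        // NewLineHandling alone - note that Indent stays at its default (false)
        var settings = new XmlWriterSettings { NewLineHandling = NewLineHandling.Replace };
        var buffer = new StringWriter();
        using (var writer = XmlWriter.Create(buffer, settings))
        {
            doc.Save(writer);
        }

        // Still no line breaks anywhere in the markup
        Console.WriteLine(buffer.ToString().Contains("\n"));
    }
}
```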

Juggling with XML formats in-memory

But wait a second - this neat solution relies on XmlWriter, which, in this example at least, only writes to a file. What if we don't actually need to write the file to the file system and would rather have a string, byte array or some other in-memory structure? We have a few ways to achieve this using the same XmlWriter-based logic, presented here from most to least atrocious.
One option is to save the file and then read it back as a text or binary file - this loads the well-formatted XML into memory as a string/bytes/whatever. In case this solution looks attractive, I have some advice for you - don't do it. Even with an SSD, read/write operations are expensive, and also an unnecessary risk - the disk might be full, we might not have access to the folder, etc. I can think of only one situation where this 'solution' would be advisable - when you actually need the physical files, e.g. for debugging or logging purposes.
Another way to harness XmlWriter for this task is to combine it with .NET's flexible, polymorphic stream architecture and create your XmlWriter around a MemoryStream. Now, if using something called MemoryStream just to pretty-print your XML doesn't sound like overkill to you, then I guess I can't argue further - if you haven't been featured on TheDailyWTF, you probably soon will be.
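For completeness (and at the risk of a TheDailyWTF nomination), here is roughly what the MemoryStream route looks like - a sketch, with the byte order mark explicitly suppressed so the resulting string comes out clean:

```csharp
using System;
using System.IO;
using System.Text;
using System.Xml;

class Program
{
    static void Main()
    {
        var doc = new XmlDocument();
        doc.LoadXml("<xmldata><element></element></xmldata>");

        var settings = new XmlWriterSettings
        {
            Indent = true,
            Encoding = new UTF8Encoding(false) // UTF-8 without a byte order mark
        };

        using (var stream = new MemoryStream())
        {
            using (var writer = XmlWriter.Create(stream, settings))
            {
                doc.Save(writer);
            }
            // Decode the buffered bytes back into a string
            string formatted = Encoding.UTF8.GetString(stream.ToArray());
            Console.WriteLine(formatted);
        }
    }
}
```

It works, but it's a lot of plumbing for what the next section achieves in two lines.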
But fear not - there is another way. Enter LINQ...

XElement/XDocument

This solution doesn't actually use LINQ itself - it just taps into the XElement-based infrastructure that LINQ to XML uses to objectify XML documents. Apart from making it possible to use LINQ over XML, these classes also take advantage of some of the more recent .NET Framework additions, like object initializers and anonymous types, to make dealing with XML in .NET less cumbersome.
Here is how to beautify an XML document in-memory and assign it to a string:
XDocument xDoc = XDocument.Load(INPUT_PATH);
// ... XML processing code
String xmlString = xDoc.ToString();
And here is what we get:
<xmldata>
  <element></element>
</xmldata>
Almost but not quite - it's missing the XML declaration (<?xml version...). Here is how to add it - we just need to change the last line to:
String xmlString = xDoc.Declaration.ToString() + Environment.NewLine + xDoc.ToString();
And voila - we have a well-formatted XML in-memory, in two lines!

Further notes on XML and strings in .NET

But why did we need to change that line to add the XML declaration - why doesn't it get included automatically? Upon further observation, we can see that there is an XDocument.Save(String path) method which saves the document to disk and does include the declaration - so it starts to look like an unintentional omission not to include it in ToString().

As it turns out, not only is there a reason for that, but there are in fact at least two good reasons to implement ToString() this way, and each of them reveals something interesting about how .NET handles XML and strings in general.

Handling chunks of XML in-memory

The traditional XmlElement-based approach is a classic DOM implementation - it builds a tree of the document in memory and deals with every piece of XML as a document: even for a single element, a dummy XML document object is created around it. That's not the case with XElement - an XElement instance can represent just one XML element without any surrounding context.
This point of view is taken further by considering the XML declaration not to be part of the document - it's just a header of the .xml file format that marks the content as valid XML and indicates the version and encoding (although you need to know the encoding in order to read the header that gives you the encoding, but never mind). That's why you only get the XML declaration inserted when you save the thing to a file - before that, it's just a document in memory that holds a piece of XML markup.
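A small sketch illustrates both points - an XElement standing entirely on its own, and an XDocument whose declaration is ignored by ToString():

```csharp
using System;
using System.Xml.Linq;

class Program
{
    static void Main()
    {
        // A lone element, no document wrapped around it
        var element = new XElement("element", "value");
        Console.WriteLine(element);   // <element>value</element>

        // Even when a declaration is attached, ToString() leaves it out
        var doc = new XDocument(
            new XDeclaration("1.0", "utf-8", null),
            new XElement("xmldata", new XElement("element", "value")));
        Console.WriteLine(doc);
        Console.WriteLine(doc.Declaration); // the declaration lives on separately
    }
}
```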

String encoding in .NET

First, a quick refresher on encodings, as I suspect that developers who work only for the Western market don't deal with them in depth on a daily basis. Encodings are ways to map characters to sequences of bits so that they can be stored in binary media. Everyone knows about ASCII, which assigns one byte per character and fits just the Latin alphabet and a bunch of funny symbols, and Unicode, which aims to cover every writing system in use. In the average .NET developer's practice, encodings are used explicitly to convert strings to bytes and vice versa. Let's extend our example to get that neat XML into a byte array, e.g. to be sent over a socket:
XDocument xDoc = XDocument.Load(INPUT_PATH);
String xmlString = xDoc.Declaration.ToString() + Environment.NewLine + xDoc.ToString();
byte[] xmlBytes = Encoding.UTF8.GetBytes(xmlString);
Now, we could of course call our good friends XmlWriter and MemoryStream, but as demonstrated we have a better way to deal with the task at hand - the only change from the previous example is the added last line, which uses the UTF-8 encoding to convert the sequence of symbols represented by our string to a sequence of bytes. This is crucial for my next point - to understand the situation here, we need to think of strings abstractly. A string object is of course just a pointer to a region of memory filled with bytes, but that's of no concern to our encoding object - it only looks at the sequence of characters, regardless of how they are represented in memory. It then generates the matching bytes for each character to give us our byte array - that's it.

OK, but what does this have to do with XML and the reason XDocument doesn't include the XML declaration? Well, it's the same principle - an XDocument is a pure soul without a body (i.e. a physical file), and it treats the encoding as a bodily concern - a mere physical representation of the information in the XML document. That's why the pure information contained in the XDocument object shouldn't contain the XML header, and with it the encoding - it isn't associated with any encoding at all.

.NET does store strings as bytes in memory, so there is one more encoding operation going on all the time - the mapping of our sequence of symbols to the bytes in the managed heap. For this purpose the CLR uses UTF-16 - hence the 2-byte size of the char datatype and the 2 bytes per symbol for string. What is a little confusing at first is that there is no encoding option explicitly named UTF-16, although we have UTF7, UTF8, UTF32 and Unicode - which is not even an encoding but the overall standard that covers them all. The System.Text.Encoding.Unicode encoding is in fact UTF-16, which is a glimpse into how the .NET architects assume UTF-16 to be the default encoding.
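This is easy to verify from code - a small sketch:

```csharp
using System;
using System.Text;

class Program
{
    static void Main()
    {
        // Encoding.Unicode is really UTF-16 (little-endian)
        Console.WriteLine(Encoding.Unicode.WebName);                // utf-16

        // Two bytes per char, matching the in-memory representation
        Console.WriteLine(Encoding.Unicode.GetBytes("abc").Length); // 6
        Console.WriteLine(sizeof(char));                            // 2
    }
}
```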

For more details and fun about how strings, and objects in general, are stored in memory in .NET, you can always count on the guru Jon Skeet: http://msmvps.com/blogs/jon_skeet/archive/2011/04/05/of-memory-and-strings.aspx

Bonus: The code

Here is a simple Visual Studio 2010 solution that demonstrates all the code in this post for download

Also, in case you don't like downloading files from strangers - here is the same solution shared on CodePlex:
https://xmldemo.codeplex.com/

Friday, September 21, 2012

InfoPath - The URN specified in the XML template file does not match the URN specified in the form

You cheeky developer you, been monkeying around with your XSN files, haven't you ;-)

There comes a time in every SharePoint developer's life when he* has to deal with some InfoPath. That's often a frustrating experience - especially when inheriting a mid-sized to large project. InfoPath probably does well what it's supposed to do - provide static forms for users to fill in. If you swerve from the well-travelled road, however, and try to build something more custom, some opportunities (as we call problems nowadays) are likely to emerge, aggravated by the limited resources on the more specific scenarios.

One of those specific cases is when you need to manage form templates on the fly, programmatically. That's not particularly hard - after all, InfoPath form templates are just CAB files containing XML, so you can dynamically unwrap them, make the changes needed and then put them back together with MakeCab or a similar tool. Still, the process is not that hard to botch, and this "The URN specified in the XML template file does not match the URN specified in the form" error is one of the possible outcomes of getting it wrong.

This one is kind of self-descriptive, although it might not be exactly intuitive for InfoPath newbies. See, the URN (Uniform Resource Name) is something like an identifier for the form template and is referenced in two different places throughout the files that comprise the XSN package - and naturally, it needs to be the same in both. So, if you are spewing out InfoPath form templates on the fly with different names, you need to take care to edit the URNs in sync - otherwise you end up with this error. Alternatively, you can leave them alone altogether and not edit them - I haven't noticed any adverse consequences so far.

First, in manifest.xsf:

<?xml version="1.0" encoding="UTF-8"?>
<!--
This file is automatically created and modified by Microsoft InfoPath.
Changes made to the file outside of InfoPath might be lost if the form template is modified in InfoPath.
-->
<xsf:xDocumentClass solutionFormatVersion="2.0.0.0" solutionVersion="1.0.0.46" productVersion="14.0.0" name="urn:schemas-microsoft-com:office:infopath:your_form_name:-myXSD-2011-02-09T17-25-42" xmlns:xsf="http://schemas.microsoft.com/office/infopath/2003/solutionDefinition" xmlns:xsf2="http://schemas.microsoft.com/office/infopath/2006/solutionDefinition/extensions" ...

Then, in template.xml:

<?xml version="1.0" encoding="UTF-8"?>

<?mso-infoPathSolution name="urn:schemas-microsoft-com:office:infopath:your_form_name:-myXSD-2011-02-09T17-25-42" solutionVersion="1.0.0.46" productVersion="14.0.0" PIVersion="1.0.0.0" ?>
<?mso-application progid="InfoPath.Document" versionProgid="InfoPath.Document.2"?>
<?mso-infoPath-file-attachment-present?>
<my:myFields ...
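If you're generating templates programmatically, keeping the two URNs in sync can be as simple as a string replacement over the unpacked files. Here is a rough sketch - the "unpacked" folder, the stand-in file contents and the new form name are all hypothetical; in real life the files come from expanding the XSN:

```csharp
using System;
using System.IO;

class Program
{
    static void Main()
    {
        // The URN as it appears in both manifest.xsf and template.xml
        string oldUrn = "urn:schemas-microsoft-com:office:infopath:your_form_name:-myXSD-2011-02-09T17-25-42";
        string newUrn = "urn:schemas-microsoft-com:office:infopath:new_form_name:-myXSD-2011-02-09T17-25-42";

        // Stand-in files for the demo - in real life these come from expanding the XSN
        Directory.CreateDirectory("unpacked");
        File.WriteAllText(Path.Combine("unpacked", "manifest.xsf"), "name=\"" + oldUrn + "\"");
        File.WriteAllText(Path.Combine("unpacked", "template.xml"), "name=\"" + oldUrn + "\"");

        // Rewrite the URN in every file that references it, keeping the two in sync
        foreach (string file in new[] { "manifest.xsf", "template.xml" })
        {
            string path = Path.Combine("unpacked", file);
            File.WriteAllText(path, File.ReadAllText(path).Replace(oldUrn, newUrn));
        }
    }
}
```

After this both files carry the same URN again, and the XSN can be repacked with MakeCab.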


* - excuse the 'sexist' language but I have yet to meet a female SharePoint developer. A CV came in recently that made all the devs in the office gather around the monitor to admire what was supposedly one of those elusive beasts that combine both SharePoint skills and a vagina. I didn't catch a glimpse myself, so for me the status of SharePoint ladies is in the same category as the Yeti - "I know someone whose cousin has seen one. He took a picture, but then the film got inadvertently ruined." I haven't been to any SharePoint cons, though, and those are the most likely places to spot these creatures - if they exist. I'm not implying that girls can't do SharePoint, they just aren't trying ;-)

Friday, March 23, 2012

Smart phones, dumb people

The onset of spring reminds me of many things - beers in parks, the project that was due to finish in February, etc. - but the most relevant one is that I've actually owned my HTC Desire Z for almost a year. My, how time flies when you have a smartphone! Remember how tedious the longer trips to the bathroom were back in the day, reading those shampoo labels over and over again, waiting for the relief of the long-awaited splash sound? And now you just can't get enough of those moments, replying to your Facebook messages in the privacy of the smelly cubicle; and before you know it it's time to wipe, but you still look at the screen while you do. Mobile computing has indeed changed our lives profoundly.

I remember the days when I first fell in love with my laptop, thinking that no other gadget would ever get nearly as close to my heart. And I still love it dearly - but let's face it - no matter how mainstream geek culture has become, it will always be considered weird to walk out of the toilet with a laptop. Still, I must admit, smartphones have a long way to go - although Android is quite good for what it's worth, it doesn't come anywhere near a full-featured OS, and my limited exposure to WP7 and iOS has clearly indicated that they have different but at least equally annoying sets of drawbacks. This is to be expected - one needs to make efficient use of the limited resources of a small device, so some cuts have to be made, and at the same time the millions of morons that use it around the world should be prevented from ruining it by downloading dodgy stuff.

My smartphone has anyway found a unique place in my digital life and is inching its way towards my heart. Nevertheless, ask the general population what they think about smartphones - most will be like "Oh, those zombies staring at them iPhones constantly, I wonder how they don't get run over by buses! Mine phone shows me the time and takes calls and that's all I need". It's natural for people not to feel needs that have never been fulfilled - you never miss something you've never had. I'm confident that these are going to be a dying breed soon, though. In fact, it's already happening, at least in London - the overall UK smartphone penetration is at the top of the range for Europe at 40%, but this includes rural areas with poor 3G coverage where it makes little sense to own one; in London the tedium of the trips in the tube and the congested traffic has forced most people to kill time by tapping their fingers on various screens. And in Singapore it has already happened, with a hefty 90% - gosh, don't they have old people there? I thought it was a civilized country, but apparently they euthanize them at 50.

Now tablets are poised to be (or already are?) the next big thing. When the first iPads came out they looked kind of neat, although I couldn't quite see their niche. iPads, I thought, are only good for use in cramped seats, and iPad owners never travel in cramped seats. I was willing to try one, but as someone who types at more than 60 wpm, I see the physical keyboard as a very important part of the human-device interface, so the urge was not that strong. Then the Android-based tablets came out, and although they are still in their infancy, I'll be damned if I buy a gadget with storage that I can't browse with my shell but should rather sync through iTunes, one of the shittiest applications ever to be shoved down users' throats. Immature as they are, the tablet wars have already provided some hilarity with the countless legal battles that Apple and Samsung are locked in, culminating in the one that invokes 2001: A Space Odyssey and shall from now on be known as the Kubrick defense (after the Chewbacca defense). In another twist in this story, the supposedly game-changing new screen of the iPad and its heart, the A4 system-on-a-chip (i.e. CPU + GPU), are both manufactured by Samsung. Go figure! The only thing that's for certain is that the Samsung tablets are not likely to be technologically inferior to the iPads. Now throw in an open OS and a lower price and I can see them getting popular quite soon.

Still, I'm not entirely sold just yet. Apart from the cramped seats, which I do use often, I have limited use for a super-big phone that doesn't fit into a pocket, which is pretty much what tablets are if you run iOS, or even Android, on them. Now if only this thing could have a real OS on it and still last that long on battery!

I kind of like Microsoft's strategy here - they include tablet support in the full-featured OSs and leave Windows Phone just for, well, phones. Subsequently, vendors of Windows-based tablets are trying to fit a real computer into the form factor, rather than going bottom-up (i.e. enlarging a touchscreen phone). An example is the Samsung Series 7 Slate PC, which has a real i5 processor and comes from their laptop, rather than mobile phone, division. That's a gadget I'd love to use so much that the 3:30 hours of battery life simply won't be enough. Now there is a lot of hype about Windows 8 and how tablet-friendly it's going to be, and I must admit that I'm kind of succumbing to it - if someone manages to put it on something equally powerful that lasts more than 7 hours on battery, I'll be the first to buy one (online - not queueing like an Apple addict).

Wednesday, September 28, 2011

.NET is more open than Java

There has always been this stereotype that Microsoft are sworn enemies of open source and that Java is uber cross-platform. Well, guess what - that's not true, and the proof came in April 2010. I know that this is about a century ago in software years, but for some reason the news went past me and I'd hate not to comment on it - be it regrettably late.

The story goes as follows - after acquiring Sun (and the Java technology along with it), Oracle decided to sue Google over their use of Java in the Android OS (source), basically redefining 'cross-platform' to mean 'running on any platform that Oracle can make money out of'. The fact that Java had a chance to become the de facto universal development environment for mobile devices doesn't seem to matter.

At the same time, Microsoft have this thing called the Community Promise, which basically means they are OK with you implementing a .NET environment on any platform - and, more specifically, it means they won't pull the same stunt over .NET on Android (source - and yes, .NET development on Android is possible: http://android.xamarin.com/ ).

Now let's get in the time machine and travel back to the present day. It's already October 2011, and in the meantime Android has become the dominant OS for smartphones, and Novell has been acquired by Attachmate - and along with it the Mono and Mono for Android teams (Ximian, the company that originally developed Mono, was acquired by Novell) - followed immediately by Attachmate dumping the Mono and MonoDroid teams: a truly disturbing development for the .NET and mobile development communities. However, a large part of the team that originally created Mono has now formed Xamarin - a company committed to supporting and advancing this family of products - and they have reached an agreement with Attachmate to be the official stewards of these projects.

So Mono, MonoTouch and Mono for Android, after changing hands three times, are once again in the hands of a small, dedicated team, which is of course very promising. We only need to see who gets to buy Xamarin next...

I really hope for one thing, though - that it's not going to be Microsoft or Google, or for that matter any other major player in the smartphone game, as this would lead to Apple immediately blocking MonoTouch on iOS. For those unfamiliar with MonoTouch - this is a product that lets you compile .NET applications for iOS, and with HTML5's future being uncertain before 2022, that's pretty much the only way to run anything that was not originally written in Objective-C on the iPhone and other iOS devices. Note that MonoTouch is not a true .NET implementation - there is no CLR, it doesn't JIT and execute intermediate language code - it compiles .NET straight to native code.

There were times when the future of MonoTouch was quite murky - there was the infamous point 3.3.1 in the Apple developer agreement, which stipulated that iPhone apps must be originally developed in Objective-C, i.e. the trick that MonoTouch does was officially banned. But then the ailing Jobs was apparently struck by a dose of benevolence and decided to let MonoTouch do its thing.

The implications of both developments: now you can develop in .NET on every major mobile platform! Of course it's not as easy as it sounds - as Miguel de Icaza (the mastermind behind Mono) points out himself, you can only really reuse the business logic; you'll need to implement the front end separately for every platform. Still, you'll do it with the same tools and follow roughly the same practices.

The topic of cross-platform mobile development with .NET is one that has captured my interest and I will be posting more about it in the future, so if there's anything that you are particularly interested in, or have something interesting to share please make full use of the comments.

Tuesday, September 20, 2011

Luke, who's your father - multiple interface implementation in C#

Multiple inheritance - this strange and gruesome beast

Multiple inheritance was practised in those dark, long-gone days when C++ roamed the software world. Then the meteor of modern OOP languages struck, nearly wiping out the use of C++ in enterprise software, and with it went multiple inheritance. We know of its existence from written records (early-90s programming textbooks, still in use in some parts of the world) and through archaeological reviews of some decades-old systems.

There is one remnant of multiple inheritance still present today, though - multiple interface implementation. In C# we don't have multiple inheritance and the complications associated with it (e.g. the diamond problem, which I call Luke, I am your father... twice), but multiple interface implementation brings its own set of peculiarities.

First, there is the situation with duplicate member names, but it's resolved relatively simply with explicit interface implementation. You can create a hierarchy like this one:

interface IBase1 { void DoBaseStuff(); }
interface IBase2 { void DoBaseStuff(); }
class Derived : IBase1, IBase2
{
    public void DoBaseStuff() { }
}

You can implement both methods with one declaration, as in the example above, or alternatively - have explicit implementation and two different versions:

class Derived : IBase1, IBase2
{
    void IBase1.DoBaseStuff() { }
    void IBase2.DoBaseStuff() { }
}

Hell, you can even combine both and provide an own implementation on top of the two explicit interface implementations:

public void DoBaseStuff() { }
void IBase1.DoBaseStuff() { }
void IBase2.DoBaseStuff() { }

so you can then call the three different implementations like this:

Derived derived = new Derived();
derived.DoBaseStuff();
((IBase1)derived).DoBaseStuff();
((IBase2)derived).DoBaseStuff();

Turning to the dark side

All this is just for starters and is explained in every decent .NET book (I hope it's in the one I co-wrote - it would be ironic if it weren't). Now let's get to the more interesting stuff.

One can easily reproduce the Luke, I'm your father... twice situation using just interfaces - the following compiles:

interface IVader
{
    void TurnToTheDarkSide();
}
interface ISomething1 : IVader { }
interface ISomething2 : IVader { }
class Luke : ISomething1, ISomething2
{
    public void TurnToTheDarkSide() { }
}

However, this is much easier to deal with than real multiple inheritance, as we haven't been provided the means to code separate implementations for ISomething1.TurnToTheDarkSide() and ISomething2.TurnToTheDarkSide() - we can only have a single body for this method, and it will be executed no matter whether we access the object through a Luke, ISomething1 or ISomething2 reference. The only bit of fun we can have is to provide an own implementation in the class:

public void TurnToTheDarkSide()
{
    Console.WriteLine("Luke, I'm your own implementation");
}
void IVader.TurnToTheDarkSide()
{
    Console.WriteLine("Luke, I'm your explicit interface implementation");
}

In this case we'll have one method for calls originating from a Luke reference and another for calls through any of the interfaces; e.g. this code:

Luke luke = new Luke();
luke.TurnToTheDarkSide();
((ISomething1)luke).TurnToTheDarkSide();
((IVader)luke).TurnToTheDarkSide();

will produce the following:

Luke, I'm your own implementation
Luke, I'm your explicit interface implementation
Luke, I'm your explicit interface implementation

And that's it - in .NET we can't have separate implementations for the two Vaders, even through explicit interface implementation. That's a deliberate feature of the framework, intended to spare us some of the complexities.

Dealing with dark explicit implementations

Now let's see if there are some practical implications of this. Suppose we have a bunch of Jedi who will eventually be politely asked by a Sith lord to join the dark side. We might model the situation with something like:

interface IJedi
{
    void TurnToTheDarkSide();
}
class Vader : IJedi
{
    void IJedi.TurnToTheDarkSide()
    {
        Console.WriteLine("- Sure, why not. It can prevent death, right?");
    }
    public virtual void TurnToTheDarkSide()
    {
        Console.WriteLine("- Sweet, it does lightnings, doesn't it?");
    }
}

class Luke : Vader //, IJedi
{
    public override void TurnToTheDarkSide()
    {
        Console.WriteLine("- Never!");
    }
}

then test it with:

Luke luke = new Luke();
Console.WriteLine("- Luke, turn to the dark side!");
luke.TurnToTheDarkSide();
Console.WriteLine("...");

Console.WriteLine("- Luke, turn to the dark side!");
((Vader)luke).TurnToTheDarkSide();
Console.WriteLine("...");

Console.WriteLine("- Luke, turn to the dark side!");
((IJedi)luke).TurnToTheDarkSide();

and see the result...

- Luke, turn to the dark side!
- Never!
...
- Luke, turn to the dark side!
- Never!
...
- Luke, turn to the dark side!
- Sure, why not. It can prevent death, right?

Oops, we lost Luke!
It looks like something dark from his father still lives in him, and revealing it is as easy as calling Luke through an IJedi reference - i.e. exactly what we'd do if we needed to process a collection of Jedi: we'd poll them polymorphically through the IJedi interface. That's because Vader is so evil that he provided both an own and an explicit interface implementation - in this case, whichever derived class we address through an IJedi reference, it will always have a tendency towards the dark side.

So is there a way to save Luke? Fortunately, it turns out there is, and the answer is hidden in the commented-out implementation of the IJedi interface by the Luke class. To fight the dark side, Luke needs to implement IJedi in his own right - only then will he be able to silence the explicit interface implementation of his father.

I suspect there are people who'll either find the examples too abstract or simply don't care enough about the future of the galaxy to accept this as an important real-world example. These people should consider the following situation: we're deriving from a class that implements IDisposable, but in our derived class we need completely different disposal logic - e.g. we might need to keep some resources around for longer than the base class's disposal logic dictates. Then we hand an instance of our class to some closed-source framework that does (obj as IDisposable).Dispose(). Our natural step is to override Dispose() and expect our implementation to be executed - but no, the bastard who coded the (also closed-source) base class decided to provide both an own and an explicit interface implementation of Dispose(). In this case our only option seems to be to re-implement the interface ourselves.
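Here is a condensed sketch of that IDisposable scenario (the class names are made up, and the 'closed-source' base is simulated): re-listing IDisposable on the derived class is what reclaims the interface call.

```csharp
using System;

// Simulates the closed-source base: both an own and an explicit Dispose
class NastyBase : IDisposable
{
    public void Dispose() { Console.WriteLine("NastyBase.Dispose"); }
    void IDisposable.Dispose() { Console.WriteLine("NastyBase explicit Dispose"); }
}

// Re-listing IDisposable re-implements the interface for this class
class Safe : NastyBase, IDisposable
{
    public new void Dispose() { Console.WriteLine("Safe.Dispose"); }
}

class Program
{
    static void Main()
    {
        IDisposable viaInterface = new NastyBase();
        viaInterface.Dispose();   // NastyBase explicit Dispose

        viaInterface = new Safe();
        viaInterface.Dispose();   // Safe.Dispose - the derived logic wins
    }
}
```

Without the extra `, IDisposable` on Safe, the framework's interface call would still land in NastyBase's explicit implementation.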

A similar situation might arise if we derive from some class and then need to serialize objects with a serializer we don't control. If the base class author implemented both ISerializable.GetObjectData and an own GetObjectData, then we have no control over how our derived class gets serialized when the call to GetObjectData is made through an ISerializable reference.

In conclusion, I'd like to stress that I'm not a huge Star Wars fan - I just thought the story gives us a good context for portraying the problem at hand. Besides, they deserve at least a bit of praise for what they just did to the BT Tower.

I hope this helped you on your way to enlightenment. May the force be with you!

Friday, March 23, 2007

Showing multiple charts from a single .xlsx in MOSS 2007 Excel Services - a bug in Excel

The SharePoint 2007 bunch of technologies is indeed remarkable, especially at first sight. When you delve a bit deeper you find things that make you scratch the crown and go googling for a couple of hours, but nevertheless WSS 3.0 and MOSS 2007 are really powerful.

Some of the new features I've been exploring are the basic BI capabilities that rely mainly on Excel Services. One of the components of Excel Services is the Excel Web Access web part, which is capable of displaying named items from an Excel 2007 workbook; these can be:
  • Pivot tables
  • Charts
  • Worksheets
  • Named cells
  • Named cell ranges
I created a workbook with 3 sheets, each with a pivot table and a chart. I renamed the tables from the Name box (top left), e.g. CompaniesTable, and showed them in the web parts. But when I tried to change the names of the charts, they always ended up named 'Chart 1'.

Having all charts with the same name makes arranging the web parts on the site harder. So, here is the workaround for this Excel bug:
http://support.microsoft.com/default.aspx/kb/928984

You just rename the chart from Chart Tools -> Layout -> Properties.