Monday, July 31, 2006

Overview of Customizing Community Server

I have been learning a lot about customizing Community Server lately, and it occurs to me that I could have moved much faster if I had had an overview of the way CS is built. To that end, I will try to codify what I have observed.

Structurally, CS can be a little confusing because it appears to be a fully implemented ASP.NET 2.0 website application. In fact, it is almost entirely 1.1, but was built by people who were very knowledgeable about the changes coming in 2.0. Whether it's the use of master pages and skins, or the appearance of a global IsNullOrEmpty string checking method, the app seems to be 2.0. It isn't; even the 2.0 version still uses the home-grown skins/master pages and many other 2.0-seeming features.

In terms of customizing the way your website looks or behaves, you have to start with the aspx pages found in the various folders of the website application structure. These pages will point you to the various skins or controls in use. Yet, they will never (aside from the control panel) contain the code or visual features of the website themselves. All of the real implementation occurs in the master pages, skins, and views, or in the control code in one of the included assemblies.

Each aspx page will identify some master page (not identifying one explicitly means it will use master.ascx) in its CS:MPContainer control declaration, and potentially one or more control declarations in CS:MPContent controls. The master pages are generally slim, and control the overall layout of a page. The actual master pages are simply custom controls (ascx files) located in the /Themes/[current theme]/Masters folder of the website application. The most common base implementation of a master page consists of 3 sections called 'lcr' (left side content), 'bcr' (body content), and 'rcr' (right side content). You can either define controls for every descendent page in these sections in the master page, or override master page content by declaring CS:MPContent controls with these ids on your aspx page. Skins should not implement these content controls. For example, if you wanted to understand how everything shows up on default.aspx, you would open default.aspx and find the 'ThemeMasterFile' attribute on the page-level CS:MPContainer control. If you navigate to the HomeMaster.ascx file you'll see that the only things being added here are some style includes in a "HeaderRegion", and the 3 content sections. In order to really understand where the content is coming from, you have to look at the controls declared within the various content sections.
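To make the relationship concrete, here is a rough sketch of how a page might wire up to a master. The CS:MPContainer and CS:MPContent controls, the 'bcr' id, and HomeMaster.ascx come from CS as described above; the Register directive details and the inner CS:UserWelcome control are hypothetical placeholders:

```aspx
<%@ Register TagPrefix="CS" Namespace="CommunityServer.Controls"
    Assembly="CommunityServer.Controls" %>
<CS:MPContainer runat="server" ThemeMasterFile="HomeMaster.ascx">
    <CS:MPContent id="bcr" runat="server">
        <!-- whatever is declared here overrides the master's body content section -->
        <CS:UserWelcome runat="server" />
    </CS:MPContent>
</CS:MPContainer>
```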

A control declaration on a page or skin will carry a custom prefix defined on the page and the name of the control - this is standard asp.net customization. It is important to note such control names because, first, along with the prefix declaration on the page, the name will point you to the control code in the proper assembly, where you can see how it is implemented. Second, the name is almost always identical to the name of the skin that is used to display the control, with an added prefix of 'Skin-'. By default, all templated controls in CS will load a skin named "Skin-" + [the name of the control] + ".ascx".

While the master pages generally declare the overall sections in which the various skins will be displayed, the aspx pages, skins, and views define the HTML and any additional sub-controls, along with client or server side script to control display. I believe it is preferable to leave all HTML out of the aspx pages and rely on the skins for this implementation. As a simple example, if we look this time at 'login.aspx' we'll see that no master file is declared (so it uses master.ascx) and all content is controlled by the "CS:Login" control. That means this control is declared in the CommunityServer.Controls.Login class, and its layout will be found in /Themes/default/Skins/Skin-Login.ascx. Sure enough, the layout of the login page is on this skin. The functionality (application logic) of this page is found in the class file.

This brings us to an important lesson about how all this comes together in the application: the class files control behavior by "wiring" properly named controls to certain events or operations on the back end. For instance, the DefaultButtonTextBox control for the password on Skin-Login.ascx must be named 'password' in order for the control logic to work properly. This magic takes place in the "AttachChildControls" method of each control which manipulates its members on the back end.

Using this basic knowledge we can start to change how our website looks and behaves. Each templated or skinned control (those with skins) has an inherited "SkinName" property which it will consult as the proper skin to apply, if one has been supplied. Recall that if this property is null, the skin named "Skin-[control name]" will be applied. Note that I have run into controls which ignore this property, but that is not the norm. So, if we want to change how login.aspx looks, we should create a new skin and provide its name as a "SkinName" attribute on the control declaration in login.aspx. I think you should copy and rename the skins rather than alter them, because it will save you headaches later if you try to upgrade CS, and it clearly shows where you have made changes. When you fill in the "SkinName" attribute, use the full file name of the skin you created. This name may need to include the sub-folder when you are dealing with blogs and galleries (I don't really have the nuances of these exceptions mastered, but generally the controls from these assemblies automatically determine the folder which contains their skins, so try that first. Aspx pages in the Blog subfolder are really an exception to most of what I have said so far anyhow, and I'll cover that later).
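As a sketch: if you copied /Themes/default/Skins/Skin-Login.ascx to a new file named Skin-MyLogin.ascx (my hypothetical name), the declaration on login.aspx would look something like this (the id attribute is illustrative):

```aspx
<CS:Login runat="server" id="Login1" SkinName="Skin-MyLogin.ascx" />
```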

If you want to change the way login.aspx behaves, you'll need to modify the Login class. Again, rather than modifying the class provided with CS, you should create a new assembly for your controls and extend the Login class via inheritance. You can change the name to match your modified skin if you have one, or leave the name the same. The only name collision issue I had working in VS 2003 was with the namespace including the CommunityServer.Controls prefix (so don't use MyCompany.CommunityServer.Controls; try CS.Controls) - the controls themselves are all fully prefixed in the aspx and ascx pages, so there is no confusion there. I have found that the control classes aren't all designed particularly well for extension, so I often have to copy base class methods in order to modify behavior, but I am trying to guarantee that an upgrade will still work, and that I know where my code begins and CS-provided code stops. Once you have your class built and the assembly included in your web project, you can change or add the TagPrefix declaration on your page and repoint the control declaration to your new custom control.
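A minimal sketch of the inheritance approach. The CS.Controls namespace and MyLogin class name are my own choices; whether you can actually override AttachChildControls depends on how the particular control was written, which is exactly the extension problem just mentioned:

```csharp
namespace CS.Controls
{
    public class MyLogin : CommunityServer.Controls.Login
    {
        //Assuming the base method is virtual in your CS version; if not,
        //you may need to copy base class code, as noted above.
        protected override void AttachChildControls()
        {
            base.AttachChildControls();
            //custom wiring or behavior changes go here
        }
    }
}
```

On login.aspx you would then add a Register directive for your assembly with a new TagPrefix and declare the control with that prefix in place of CS:Login.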

Blog Skin Exception

Blog aspx pages generally declare which view they are using, rather than a skin. These "Views" are found in /Themes/Blogs/[current blog theme]/Views and have the name "View-[view name].ascx". Views contain layout templates with various blog controls in them. Each control then has a skin named the same way I mentioned above for other controls. However, these controls have their skins in the /Themes/Blogs/[blog theme]/Skins folder, rather than with the other skins.

Thursday, June 29, 2006

Community Server Customization: Title Bar Links

I have really enjoyed learning how to use and customize Community Server. I think it is a great platform for developing community based websites. Yet, it can be difficult to work with because customizations aren't documented well. To that end, I will try to share things I have learned while customizing CS. However, my disclaimer is: I just started this type of CS customization, and while I have searched for the best way to do things, I might have missed a better way.

The basic problem I am writing about is that I have a page which is not one of the base "areas" on install (home/blogs/forums/picture/files, etc.) but which I want to link to from the title-bar. I not only want a link; that link should be highlighted when a user is on the custom page, or in the custom area. The link should also be relative from any location within the site, whether you are hosted at the root of a domain or under a virtual directory (the kind of reference you get with the tilde in an asp:HyperLink NavigateUrl in 2.0).

Link magic occurs in the SiteUrls.config configuration file. To accomplish our goal, we'll only need to edit this config file (soon I'll write an article on customizing how the config is read). There's a whole lot you can do in this file, but everything is accomplished through three types of entries:

location - A location has a 'path' attribute, which is a root-relative path to a particular location in the site. For instance, the weblogs location path is "/blogs/" because this is where all blog-related files are located. There is also a name attribute which identifies a particular location for later reference. Finally, you can set a boolean "exclude" attribute to indicate whether or not this location is excluded from URL re-writing (URL re-writing is a process by which paths are canonicalized or formatted according to patterns in SiteUrls, and possibly other things; you can even set up your own rewriter via the provider pattern CS supports for most of this type of low-level functionality).

url - Indicates a path within a given location. This works out such that a page name can be given as the path, but will always be directed to that page name underneath the folder (if one exists) for a named location. To that end, a url element contains attributes for a path and a location. Each url also has a name. A url path may also contain string format token(s) and a related 'pattern' attribute. If a pattern exists, the url rewriter will format the path accordingly.

link - The config file itself shows what these are, but the thing to understand for our current problem is that we want to use a resourceUrl, not a navigateUrl (which should be reserved for external URLs only). A resourceUrl points to a named url element, and the link itself has a unique name by which it will be referenced in skins and master files (more on that later). The other attributes are documented well enough in the config file.

To put it all together: First, figure out if one of the existing locations will be relevant for your link. If you have a page under the root of the site, "common" will do. If you have created a new sub-folder, you should create a location for that folder. Second, create a url element that corresponds to the proper page you want to link to within your chosen location. Finally, create the link element that references your resourceUrl (url element), and either provide a 'text' attribute, or point to a named resource string (resource strings are in the resources.xml file in the proper language sub-folder under languages) to indicate how the link should be labelled.
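Putting the three entry types together, here is a hedged sketch for a hypothetical /myarea/ folder. The element and attribute names follow the pattern described above, but all the values are mine; check the existing entries in SiteUrls.config for the exact attribute sets your CS version expects:

```xml
<location name="myarea" path="/myarea/" exclude="true" />

<url name="myarea_MyPage" location="myarea" path="mypage.aspx" />

<link name="MyNewLink" resourceUrl="myarea_MyPage" text="My Page" />
```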

Once this has been done, your link will be added to the tab-strip in the title bar. In order for the correct tab to be highlighted when you are on your page, you need to include a "SelectedNavigation" control on the master or skin for that page with the 'Selected' attribute value set equal to the name of the relevant link element. If one page will serve more than one title-bar link, you can adjust the selection programmatically in the Page_Init event (the control reads its selection from the "SelectedNavigation" item in Context.Items):

void Page_Init(object sender, EventArgs e)
{
    if (Request.QueryString["GroupID"] != null && Request.QueryString["GroupID"] == "8")
    {
        Context.Items["SelectedNavigation"] = "MyNewLink";
    }
}

Thursday, June 08, 2006

Mini-update: Daemon on x64

Daemon Tools is now available for the 64-bit platform.

http://daemon.alphabravo.org/daemon403-x64.exe

Tuesday, June 06, 2006

Online virus-scan support for x64

I am happy to discover that the beta version of safety.live.com's Safety Center works wonderfully on my x64 machine. I have had a difficult time finding reliable anti-virus solutions for x64. All other online versions have failed at one level or another.

If you haven't tried the tool yet, check it out here.

Wednesday, May 31, 2006

Update: Custom Membership Provider

Yep, it was just that easy. It took me a total of 2 hours to write and test a custom provider that would solve the hack problem I wrote about last time. It would take longer if you wanted to implement a full provider, but I only needed CreateUser and DeleteUser for this issue.

The documentation and examples for creating a membership provider are good, so there's no need to cover the basics. I did run into a couple of gotchas.

First, I really didn't need total customization, so I simply extended the Framework's SqlMembershipProvider. If you are using SqlServer, this works fine. The strange thing here is that the connection string (_sqlConnectionString in the provider) is private. That being the case, I had to override Initialize and grab the value for myself before passing the config along to the base class. Not difficult, just a silly repeat of existing code in the base.
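The workaround looks roughly like this (a sketch: 'connectionStringName' is the standard provider config attribute, and the class and field names are my own):

```csharp
public class MyMembershipProvider : SqlMembershipProvider
{
    private string connectionString;

    public override void Initialize(string name,
        System.Collections.Specialized.NameValueCollection config)
    {
        //Read the connection string for ourselves before handing the config
        //to the base class, since its own copy is private.
        string csName = config["connectionStringName"];
        connectionString = ConfigurationManager
            .ConnectionStrings[csName].ConnectionString;

        base.Initialize(name, config);
    }
}
```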

My only other issue was providing a good message to users if my custom provider's Create or Delete operations threw an exception. The basic model on Create is to trap significant issues, log the specific problem, return null as the MembershipUser, and set the MembershipCreateStatus to some error type. In this case, the only applicable status is 'ProviderError', which causes a generic message to be displayed to the user. Logging isn't really helpful here, so I wrote the real message to a Session variable on the current HttpContext. In OnCreateError, I set the error message to this session value.

Otherwise, all I really did was move my code from the OnUserCreated method into the Create method of my new provider, wire this provider up instead of the other one, and voila!

Hats off to MS on this strategy implementation.

Tuesday, May 30, 2006

Membership Forms Auth Integration with Community Server

I recently set out to solve the following problem:

I have a community site running CommunityServer as one application running under a sub-domain url and acting as part of the site (in a different app) running on the base domain (and www sub-domain). The community site is running Community Server 2.0 and is setup to use forms authentication for Membership, along with Roles, and Profiles. CS uses the new provider model for asp.net 2.0, utilizing custom CS providers. The primary site is also asp.net 2.0 and is using the built-in .net framework sql providers. I want to be able to login or create a user in either site, and have that login and profile carry over to the other site.

The basic information you need to make this happen in .net forms authentication is straightforward, and is covered here. Make those changes first, then there are just a few adjustments to make to the base site and two tweaks to Community Server to complete the fix.

First, Community Server. The CS providers are configured in the web.config of the CS site. In order to have CS draw from the same Membership data as your base site, they must share the applicationName attribute. You can change this in the web.config section for each provider. However, this isn't likely to change anything in CS by itself. You also need to go into the CS database and change the ApplicationName column value in the cs_SiteSettings table; I couldn't find this as a configurable setting anywhere else in CS. The only other change to make to CS is to set the cookie domain to the same domain as the other site. Obviously, this only works if our problem space deals in different sub-domains. To do this in the CS control panel, go to Administration > Membership > Cookie and Anonymous Settings and set the Cookie Domain value.
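In the CS web.config the relevant attribute looks something like this (the provider name, type, and application name here are hypothetical; applicationName is the value that has to match between the two sites):

```xml
<membership defaultProvider="CSSqlMembershipProvider">
  <providers>
    <add name="CSSqlMembershipProvider"
         type="..."
         applicationName="/MySharedApp"
         connectionStringName="SiteSqlServer" />
  </providers>
</membership>
```

Remember that the same value also has to go into the ApplicationName column of cs_SiteSettings, as described above.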

Once you have made these changes to CS, you'll need to restart the application pool.

At this point the two sites should allow logins from one site to carry over to the other (if you have cookies from earlier attempts, you'll need to delete them). The only problem is that if you allow new logins to be created at both sites, you'll need to align new member creation on the base site with member creation on the CS site. One way to deal with this issue is to direct member creation over to your CS site. I wanted my primary site to be self-contained, so I decided to create CS-capable logins in the primary site.

First, the CS profile configuration has numerous attributes defined. You should add these to your profile configuration section of your main site's web.config. Second, CS has its own user and profile tables in the database that store additional community data for the standard Membership user created by the provider. The "right" way to do this may well be to create a custom provider. Like any good developer I have a lazy streak that set me looking for a quicker way to accomplish this. I did try to just use the CS providers in my main site. Unfortunately, these providers depend on way too many configuration settings and file locations based on the CS site for this to be worthwhile. What I settled on for now is to add the additional rows to the database in a handler for the CreatedUser event of my CreateUserWizard control. This does require a hack that is probably better dealt with by creating a custom provider, but I'll do that when I have more time.

For now there are two CS tables, cs_Users and cs_UserProfile, which need new rows based on the created Membership user. Most of the columns have default values, but you'll need to supply the Membership-based ID to both tables, and the new CS-based UserID to the cs_UserProfile table (as a FK), along with the proper SettingsID. Along with this, you need to add the new user to the proper roles associated with CS. By default, new users are in the 'Everyone' and 'Registered Users' roles; a simple call to Roles.AddUserToRoles will take care of that.

The hack centers around what to do if the CS database updates fail. The problem is that the Membership operation has already completed successfully (which it must, for you to have the FK data for cs_Users) and you are in an event that can no longer cancel the operation. Thus, you need to deal with rolling back the new user yourself, and with editing the CreateUserWizard messages and display format the website user sees. In theory, a custom membership provider would have access to the create operation in time to provide an error to the control. In this case, I just run the updates in a transaction and delete the new membership user if there is a failure during the cs table inserts. Finally, you have to set custom error text and suppress the success message from the control. Good enough to get going. I'll write more when I create the custom provider.
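In outline, my CreatedUser handler looks something like this. This is a sketch: the wizard id and connection string are hypothetical, and the actual INSERT statements are omitted; the role names and the rollback-by-delete are the pieces described above:

```csharp
void CreateUserWizard1_CreatedUser(object sender, EventArgs e)
{
    MembershipUser user = Membership.GetUser(CreateUserWizard1.UserName);
    try
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            conn.Open();
            SqlTransaction tx = conn.BeginTransaction();
            try
            {
                //INSERTs into cs_Users and cs_UserProfile go here, supplying
                //the Membership user ID, the new CS UserID, and SettingsID.
                tx.Commit();
            }
            catch
            {
                tx.Rollback();
                throw;
            }
        }
        Roles.AddUserToRoles(user.UserName,
            new string[] { "Everyone", "Registered Users" });
    }
    catch (Exception)
    {
        //The Membership user was already created, so roll it back ourselves
        //before surfacing an error through the wizard.
        Membership.DeleteUser(user.UserName, true);
        throw;
    }
}
```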

PS: A good article on how the dynamic profile properties are provided to intellisense is here.

Tuesday, May 23, 2006

Cool Tool: Hydrus DataSet Toolkit

I am very excited about this great new tool from Hydrus Software that allows you to work with typed DataSet objects without DataAdapters or writing your own sql statements. Basically, it infers database schema from your typed dataset and uses classes called WhereConstraints that limit the results in various ways. It's even fairly easy to write your own constraints if the included constraints don't do the trick.

The DataSet Toolkit saves all the time you would normally spend writing specific queries, or maintaining the code you write for the adapters, etc. I can think of several projects where it was a daily job to update dataadapter statement code every time something changed about the database. With this tool, you can ignore that stuff. If you are used to working with the .net framework CommandBuilder objects, then you still have to write select statements. The DataSet toolkit removes this job too. Pretty cool.

Check it out.

Thursday, May 18, 2006

IFrame Gotcha not caught by VS05 parser

The VS'05 script parser is pretty good at catching tag violations when you work in the html view of an aspx page. However, it did not catch the need for an </iframe> closing tag for the iframe element. This is a requirement for the iframe tag in both IE and Mozilla. I didn't know that previously, and was getting the strangest results when I would inspect the DOM and find that everything below my iframe tag was consumed in the enclosing div.

Monday, May 08, 2006

Software development is like construction

I am going to depart from my usual facts-only entries because I thought this was funny. I was told again recently that software development is like construction, and that the software development profession should be able to "grow up" and create predictable bug-free software. This was the gist of my reply:

I have heard the comparison between software development and construction a number of times. It's an alluring analogy for those that don't really understand software development. Maybe you've heard it used by a manager looking to explain why getting software done on time, with all the features done correctly shouldn't be so hard.

Software development is like building a bridge. You have a need for which you make a plan, and then set about lining up the materials and manpower necessary to put the plan into action. Construction projects experience set-backs, but through proper project management (the assignment of new resources, overtime hours, etc) the project gets done - and you have a bridge every time.

Yes, software development is just like that. It's just like building a bridge where 4 months into the project the city engineer declares that no bridge shall be fewer than 200 feet, and you were building a 185 ft. bridge. And, after increasing the size of the bridge to 200 feet - resetting some of the primary support posts, adding a few people to the project, and purchasing new materials - you realize that good old Portland cement won't work for this bridge. You look for the proper materials in the marketplace, and find a few companies working on some experimental materials. They aren't selling anything yet, but are happy to let you have their recipes. So, you get your guys working on this new compound while you finish the foundation and supports. After 2 months, they decide they have it and you go back to work pouring the street. About this time, the mayor decides that no cars will be allowed in the city, and your bridge is no longer necessary.

Software development is just like construction. Another person once told me that we can call software developers 'engineers' when they can be sued for doing their job incorrectly. I say, you can sue me for the software when you can tell me exactly what you want.

Of course, there are lots of ways to improve the process of software development and in the last few years I think a lot of progress has been made. However, software is never a bridge. That's hardware. People like software because it becomes whatever they want. I think we just have to figure out how to help people better explain what they need.

Wednesday, April 12, 2006

Doing Web Service Exceptions Right

These things are covered in various articles referenced below, but I would like to synthesize the most important points. First, a summary of the issue. When your code in a web service method throws an exception, the framework wraps it in a SoapException object, which maps to the "Fault" node permitted by the SOAP recommendation. If you throw (or allow an unhandled exception of) any type other than SoapException, the SoapException generated by the framework will only contain the text details of the original exception in its message. This is ugly and hard to work with. The SOAP recommendation is that you provide fault details within the fault. In order to do this in an ASP.NET web service, you must throw a SoapException yourself, having first set the detail node via the Detail property on the SoapException object.

1. Every web service method should catch System.Exception and wrap the exception in a SoapException, adding necessary details to the Detail property of the exception. Please note that the XML node provided to this property must have the root name "detail". It is recommended that you create the root element using the SoapException.DetailElementName.Name and SoapException.DetailElementName.Namespace constants.

2. You must also provide the detail node as a node from a document (such as myXmlDoc.DocumentElement); you cannot just pass the XmlDocument itself.

3. The InnerException property of your custom SoapException will always be ignored. This is used by the framework for unhandled exceptions of types other than SoapException.

4. To work with SoapException details, it makes sense to have a helper method to wrap other exceptions such that every catch block can simply throw via a call to the helper:

[WebMethod]
public string SomeMethod()
{
    try
    {
        //do something
        return "done";
    }
    catch(System.Exception excep)
    {
        throw GetSoapException("Failed to do something", excep);
    }
}

//Requires System.Web.Services.Protocols, System.Xml, and System.Diagnostics.
private SoapException GetSoapException(string message, System.Exception originalException)
{
    StackTrace trace = new StackTrace(1);

    //Build the detail node with the required name and namespace. A real
    //implementation would serialize a richer error object here.
    XmlDocument doc = new XmlDocument();
    XmlNode detail = doc.CreateNode(XmlNodeType.Element,
        SoapException.DetailElementName.Name,
        SoapException.DetailElementName.Namespace);
    detail.InnerText = originalException.Message;

    return new SoapException(message,
        SoapException.ServerFaultCode, //Could be ClientFaultCode depending on circumstances.
        trace.GetFrame(0).GetMethod().Name,
        detail,
        originalException);
}

5. ServerFaultCode and ClientFaultCode are not necessarily important to set properly, but they indicate what the cause of the problem was. You should indicate ServerFaultCode if something went wrong in the normal operation of the service. This might be the case if you are wrapping an exception from your catch block. If you are intentionally throwing a fault because the client has sent bad data:

if (String.IsNullOrEmpty(someInputString))
    throw GetSoapException("Your input string was null or empty",
        new ArgumentNullException("someInputString"));

you would indicate this via ClientFaultCode.

6. If you want to have a serializable object which contains error details, you will need to expose this to the client code via customizations to your WSDL document. Alternatively, the client code could have its own version of the object via a separate shared assembly. Whatever makes sense.

7. The client should then have a catch block for SoapException around all web service calls, and some helper method for deserializing the Detail property and taking action based on the contents.
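On the client, that pattern is roughly the following (proxy is your generated web service proxy, and HandleServiceFault is a hypothetical helper that deserializes the Detail node and reacts to it):

```csharp
try
{
    string result = proxy.SomeMethod();
}
catch (SoapException soapEx)
{
    //soapEx.Detail is the XmlNode the service populated; hand it to a
    //helper that turns it into a typed error and decides what to do.
    HandleServiceFault(soapEx.Detail);
}
```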

Links :

Using SOAP Faults
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnservice/html/service09172002.asp

Handling and Throwing Exceptions in XML Web Services
http://msdn2.microsoft.com/en-us/library/ds492xtk.aspx

SoapException.Detail Property
http://msdn2.microsoft.com/en-us/library/system.web.services.protocols.soapexception.detail(VS.80).aspx

Discussion on InnerException
http://www.microsoft.com/communities/newsgroups/en-us/default.aspx?dg=microsoft.public.dotnet.framework.webservices&tid=48c5c279-1982-462d-8a2d-10db072671bb

Monday, April 10, 2006

Locating embedded resources

I found it mildly challenging to locate some resources that I had embedded when using the overloaded ctor for ResourceManager that takes the name of the resource file to load. After some noodling around (and reading anything but the docs on what the baseName should be), it turns out that assembly resources are always named in a 'flat' format that includes the assembly's default namespace property and any sub-folder in which the resource file is contained. For example, the resource "ExceptionStrings.resx" in an assembly with the default namespace "TestResources", in the subfolder "Resources", will be named:

'TestResources.Resources.ExceptionStrings'

A good way to figure out what your resource files are called is to perform the following while debugging:

string[] resourceNames =
    Assembly.GetExecutingAssembly().GetManifestResourceNames();
foreach (string name in resourceNames)
{
    Debug.WriteLine("ResourceName: " + name);
}

This will output each resource in your assembly by its full name - which should be the name provided to the ResourceManager ctor (minus the ".resources" file extension).

Monday, March 06, 2006

Identifier Quoting with OracleClient implementation in .net

There are a couple of very annoying programming choices in the OracleClient provided in the .net framework.

1) QuotePrefix and QuoteSuffix on the OracleCommandBuilder return String.Empty if they have not been set by a user of the builder. Yet, if you call QuoteIdentifier(string) you will get back a quoted string, using the default Oracle identifier quote of double quotes. This works the same way as OleDb and Odbc, because those don't have a default, but it breaks with the more intelligent design of the SqlClient, which returns the correct brackets when queried.

2) Oracle has an odd habit of upper-casing any identifier which has not been quoted. That being the case, if you have mixed-case identifiers you need to quote them. Unfortunately, the OracleCommandBuilder doesn't offer any way to quote identifiers in its generated statements. According to the documentation, the column and table names are retrieved in a case-sensitive manner, ah how true (retrieved from DbDataReader.GetSchemaTable) - yet they are placed in the generated SQL statements as they are received, which Oracle then upper-cases, making them incorrect. A simple "QuoteAllIdentifiers" property on the builder would have solved this problem. As it is, you cannot use mixed-case identifiers with the OracleCommandBuilder. I tried using mappings with an escaped quote on each identifier, to no avail.

3) Finally, if you attempt to use the UnquoteIdentifier(string) method on the OracleCommandBuilder, it throws an exception! WTF! Again, this is in contrast to the SqlClient implementation which correctly just returns your string.

Thursday, February 09, 2006

Minor bug in MSBuild on x64 Machine

The reserved property MSBuildExtensionsPath apparently always returns "C:\Program Files\MSBuild" on the command line, no matter where it is actually installed. Oops. On my x64 machine, this path is totally invalid, as all 32-bit programs - including VS05 and related apps - are installed to "C:\Program Files (x86)\". Only 64-bit programs go into the standard "Program Files" path.

This does not appear to be true when compiling my modified project file in VS05. Not sure what to make of this discrepancy.

Friday, January 27, 2006

Regex Performance - RegexOptions.Compiled

I have been working on a small project that utilizes Regular Expressions to do most of the heavy lifting. I have been aware of the Compiled option, but hadn't really experienced any benefit from it before. In this project, however, it became clear that certain expressions were really bogging down the engine, causing the overall run to crawl. This was one expression that was slow:

(?<1>^\s*)Foo(?<2>[ ]\w*[ ])(Bar)(?<3>[ ][fF]un[ ])(?<4>\w*)(\s')

Strangely, it seems very similar to the others I was using that were not slow.

In any case, I was using the static IsMatch from the Regex class, so I switched to creating an instance of Regex with the Compiled option, and everything sped back up. Very handy. I haven't done any real analysis on what caused these particular expressions to be slow; I am sure my moderate understanding of regular expressions has produced some inefficient syntax. I would like to look into it further, but it will have to wait...
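The change amounts to this (a sketch; pattern, input, and lines are placeholders):

```csharp
//Static call: fine for one-off checks.
bool quick = Regex.IsMatch(input, pattern);

//Instance with RegexOptions.Compiled: pays a one-time compilation cost,
//then matches faster on repeated use over many inputs.
Regex compiled = new Regex(pattern, RegexOptions.Compiled);
foreach (string line in lines)
{
    if (compiled.IsMatch(line))
    {
        //handle the match
    }
}
```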

Wednesday, January 18, 2006

XSLT to transform NAnt script to MSBuild

I undertook a side project to write an XSL transform to convert my existing NAnt scripts into MSBuild scripts. The exercise in XSLT was a good refresher for me, and it was a good way to learn MSBuild. Other than that, I have to admit the result may be less than useful.

In any case, I have posted the xsl stylesheets.

The main problems I see in using these to any great effect are: first, you could just run NAnt from MSBuild; second, you need an additional stylesheet entry for every task; and third, you would have to write stylesheets for every function found, and potentially write an MSBuild task to substitute for each function. Not worth the effort.
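For illustration, the shape of such a stylesheet entry might be (a minimal sketch; real attribute mappings are rarely this clean):

```xml
<!-- One template per NAnt task: here NAnt's <delete file="..."/> is
     rewritten as an MSBuild <Delete Files="..."/> task invocation. -->
<xsl:template match="delete">
  <Delete Files="{@file}"/>
</xsl:template>
```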

Ah well, perhaps the stylesheets make a good tutorial on how to get things done. On the other hand, since I was just getting back to using XSLT for the first time in several years, perhaps you'll find errors or misuse.

Tuesday, January 17, 2006

Differences between NAnt and MSBuild

It's been a while since my last post, but I am back to make the comparison I previously promised between NAnt and MSBuild. Why? Mostly to learn about MSBuild. Partly to help with conversion between NAnt and MSBuild. I should point out right away that there is no explicit need to convert NAnt scripts to MSBuild; you can execute NAnt from within MSBuild as a Task. Finally, I have not achieved guru status with either NAnt or MSBuild, so please rectify any mistakes below with a helpful comment.

That being said, on with the comparison:

1. NAnt has functions. MSBuild really has no such thing. MSBuild is infinitely extensible via Tasks, but there aren't that many tasks compared with NAnt's functions. I think this is a sign of maturity in NAnt. Since MSBuild's programmers had NAnt to look at, we do have to wonder why they excluded some things, but we can guess that dev timelines ran out.

2. NAnt has a few fileset types with specialized attributes. All file references in MSBuild are contained in ItemGroup blocks. However, with ItemMetadata providing infinite extensibility to each Item in an ItemGroup, the specialized attributes are not required.
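A sketch of what that looks like (file and metadata names are made up):

```xml
<ItemGroup>
  <Compile Include="src\Main.cs">
    <!-- Arbitrary metadata element, playing the role a specialized
         NAnt fileset attribute would: -->
    <Group>Core</Group>
  </Compile>
</ItemGroup>
```

Each item can then be filtered or transformed on that metadata later in the build.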

3. In the main, the NAnt schema tends to be attribute-centric, while MSBuild favors elements with text content. The NAnt schema also favors lowercase names, while MSBuild favors an initial capital.

4. NAnt allows fileset groups to be included inside a target. MSBuild Targets may only reference ItemGroups specified as children of the Project element.

5. MSBuild seems to be missing the notion of a basedir. This basedir attribute is very helpful in NAnt. MSBuild only has the project root as a base directory, and can use PATH variables. Again, I think the maturity of NAnt shows in this oversight. Obviously, you can define a Property with an appropriate base directory and prepend it to every path in an ItemGroup. You could probably also make use of ItemMetadata if you were writing a custom Task.

6. Property references in NAnt are denoted by ${}, while MSBuild uses $(). What is this, C# versus VB? You also cannot use '.' characters in your property names in MSBuild, though it is legal in NAnt.

7. MSBuild references Items in an ItemGroup with the syntax @(ItemName). NAnt references filesets by id utilizing a refid attribute without decoration.

8. There are 72 built-in tasks in NAnt and 35 in MSBuild; however, most of the common tasks related to .NET use are there. Both include an exec task (Exec in MSBuild, exec in NAnt) for calling out to the system. Both allow you to write your own tasks to extend the functionality of the build. So, if it can be done in code, you can run it from either one.

9. Both allow conditions to be placed on nearly every element to determine whether the build should include the enclosing item. However, NAnt uses both an 'if' and an 'unless' attribute, where MSBuild simply has 'Condition', which supports '!' (not), 'And', and 'Or'. Here the MSBuild approach seems more streamlined.
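Side by side, the two styles look roughly like this (the property names are hypothetical):

```xml
<!-- NAnt: separate if/unless attributes -->
<echo message="debug build" if="${debug}" unless="${skip.echo}" />

<!-- MSBuild: one Condition expression using !, And, Or -->
<Message Text="debug build"
         Condition="'$(Configuration)' == 'Debug' And '$(SkipEcho)' != 'true'" />
```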

10. MSBuild Projects can have multiple default targets, and also have an InitialTargets attribute naming targets run before the others for preparatory steps. Utilizing the 'depends'/'DependsOnTargets' attributes, you can craft your own workflow in either program. As with DefaultTargets, you can specify multiple targets in the DependsOnTargets attribute, which is an interesting enhancement over NAnt.

11. A subtle difference in the Csc task is that in NAnt, the warnings to ignore are elements which can each have a condition. In MSBuild, warnings are a single attribute containing a semicolon-delimited list. In NAnt, you could conditionally ignore some warnings on some builds based on criteria; no such thing is possible in MSBuild.
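A hedged sketch of the difference (the warning numbers and property names are made up):

```xml
<!-- NAnt: one element per suppressed warning, each conditionable -->
<nowarn>
  <warning number="0618" if="${suppress.obsolete}" />
</nowarn>

<!-- MSBuild Csc: a single delimited attribute, all or nothing -->
<Csc Sources="@(Compile)" NoWarn="0618;1591" />
```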

Monday, December 19, 2005

Using MSBuild - Don't learn from default build project file

If you are used to utilizing Ant or NAnt to build your .NET solutions, MSBuild is a fairly simple switch, though I still think NAnt is more full-featured. However, the more I learn about MSBuild, the less concerned I become with the available featureset.

One thing I did find confusing was the contents of the default build file generated by the VS IDE when you add a build type to a Team Project. This file is not a good way to learn about build files. First, it is based on a Team Foundation Server build. Second, it does not contain targets, but depends entirely on the included targets file (Microsoft.TeamFoundation.Build.targets). That targets file can be valuable to learn from. I also recommend taking a peek at the MSBuild schemas (Microsoft.Build.Core.xsd and Microsoft.Build.CommonTypes.xsd).

If you build your own build file from scratch, I believe you will find the experience more analogous to building with NAnt. The thing about the default build file is that it is full of properties and item groups with comments indicating how values should be specified to generate a proper build. However, this really only relates to the targets used by the included MS targets file. Great if you want a simple build, or don't care how it's done. To learn the build syntax, throw away the default file and set about creating your own. I realize this is "wasted" effort since the default compile target is just fine, but I prefer controlling everything and removing all 'MS Magic' from the project.

The MSDN docs are good enough to learn from, so I won't provide the basic info here. I will summarize a few basic thoughts:
  • Target has pretty much the same meaning as in NAnt. There are other NAnt analogies, especially with the built-in tasks. However, I would prefer to cover this in a more detailed explanation of NAnt->MSBuild conversion issues.
  • ItemGroup collections can have any element content. The names of the elements within are used to dereference the value lists in a manner similar to properties.
  • ItemGroup children can have nested values that can be dereferenced with "dot" notation as in %(Parent.Child).
  • PropertyGroup collections allow any element content, as with ItemGroup nodes. This allows a nice compact way of defining a property value, rather than indicating a value= attribute all the time.
  • The Csc task attributes are a little funny because they rename the options of the command-line tool they encapsulate. Strange choice, but if you know what csc requires at the command line you can figure it out. You can also read the docs, but who wants to do that?!
  • While learning, be sure to recognize the built-in tasks, the reserved properties, and the well-known item metadata. You wouldn't want to reinvent the wheel.
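Pulling those points together, a from-scratch build file can be as small as this (the names and paths are hypothetical):

```xml
<Project DefaultTargets="Build"
         xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <!-- Compact element form instead of a value= attribute -->
    <OutputPath>bin\</OutputPath>
  </PropertyGroup>
  <ItemGroup>
    <Compile Include="src\*.cs" />
  </ItemGroup>
  <Target Name="Build">
    <Csc Sources="@(Compile)"
         OutputAssembly="$(OutputPath)MyApp.exe" />
  </Target>
</Project>
```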

Wednesday, December 14, 2005

Dead DC Removal

There is a catch-22 you get into if you try to properly remove a domain controller when it has failed completely - either it is dead, or because of massive corruption it might as well be.

The funny thing is that with the tools Microsoft recommends for this job, you have to connect to the DC in question in order to demote it or clean up its entries in AD. Silly. I finally found an answer though...

If this dead controller is one of your operations masters, well - good luck. If not, then you will need the tool ldp.exe. I hadn't seen this one before, but it basically allows you to edit the domain ADSI containers. The details are here:

http://computing.fusion13.com/ActiveDirectory/Remove-A-Domain-Controller-From-Active-Directory-With-LDP.shtml

Follow this post, and the dead server is gone. Other articles indicate that when you rebuild the machine, you should use a new name due to cached values on machines throughout the domain.

Thursday, December 01, 2005

Learning Team Foundation Source Control Coming From VSS

It has been a decent little adventure getting more than one developer at a time working with Team Foundation Source Control (or Version Control - a little aside: it seems it's all about version control when dealing with the web services API, but it's called source control in the IDE UI).

Coming from VSS (or Source Unsafe, or Source Destruction - choose your favorite epithet) some things which are common in other control systems take some getting used to when first moving to "real" source control. My only other version control experience was with Perforce, and there are some parallels here. I have since also gotten into CVS while dealing with NUnit development, and there are other parallels here. If you come to TFSC from one of these other systems, the learning curve will be much more shallow.

Allow me to enumerate the lessons learned so far:
1. The folder you designate as a workspace for a certain project will be totally controlled by source control, including deletes. This is great (and normal for other version control systems), but a real adjustment for lazy VSS users who sometimes like to keep the development area dirty in order to work with different versions of code at the same time (mySource.cs.old for instance).
2. What you actually have on disk is of no consequence to the source control system. Which is to say that the notion of having the latest, or the content of a particular file is controlled by the actions which you have taken against source control through its interfaces. The information stored in the source control database will be used by the source control system, not a lookup of files on your disk. If you happen to do something on disk, outside of source control - you will soon be very confused when you go to use source control again. To get back to a usable state, you should choose "Get Specific Version", and then check the "Force get of file versions already in workspace" checkbox. This will "update" your disk, even if everything was up to date, and get you back in sync with what the source control system believes to be true. In short, don't do anything to the workspace outside of version control. If you want to play around with files, copy the workspace to another location.
3. Putting auto-generated folders or files in source control is unworkable. What used to simply be a problem or difficulty in VSS is totally unworkable in Team System source control. This means all /bin, /doc, and /obj folders must be left out of source control. This is actually a tenet of good configuration management anyhow, but the problem is that, by default, when you add projects to source control you will get these folders. You must consciously remove or avoid them.

A related issue is how you deal with assembly references when you are not including any of the referenced assemblies in these auto-generated folders. Your projects keep relative links to referenced assemblies in the

<HintPath>..\..\ReferenceAssemblies\Release\SomeAssembly.dll</HintPath>

element. Each developer (and the build server) will need to have this same relative path, or a GAC entry for the referenced assembly. Another possibility is to use an environment variable so each developer can have their own location, but this seems like more setup than it's worth.
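If you did go the environment-variable route, the idea would be something like this (the variable name is made up; MSBuild surfaces environment variables as properties):

```xml
<HintPath>$(ReferenceRoot)\Release\SomeAssembly.dll</HintPath>
```

Each developer then sets REFERENCEROOT once on their machine instead of matching an exact relative folder layout.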

4. The entire team must standardize on file or url based projects when working with web projects. Because of the relative references in web.config and because of unit tests, all hell breaks loose if you have one developer using a url based web project, and another using the file system based. Choose one way to do it, and force all developers to accept this standard.

As far as which to choose, I would say file-based is better, simply because it keeps your web projects in the same basic location as your other projects, and because it is the way Visual Studio expects to work - and we all know that if you accept product defaults, you'll save headaches.

Thursday, November 17, 2005

x64 Support Update

Daemon Tools 4.0 delivered without the promised x64 support. Dammit.