Friday, December 15, 2006

Community Server Paged Permissions Grids

The default implementation of Community Server assumes you will have only a handful of roles. This caused me some problems when I added 700 new roles to configure specific admin/user permissions on 350 forums. None of the permissions grids in the control panel are paged, and neither IE nor Firefox likes loading the massive grids into memory.

Thankfully, the answer is quite simple. In order to add paging to all permissions grids in the control panel, you simply make a couple of minor changes to the BasePermissionsGridControl.cs file.

At the bottom of the buildGrid() method you'll see a comment concerning paging just after the call to ApplyUserSettings(). You can delete each line from that point forward since we now want paging.

Next, you need to add an override of ConfigureGrid and add the following single line to the method:


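Presumably that line simply enables paging on the grid; a sketch (the ConfigureGrid signature and the Grid member are assumptions based on the post's description of BasePermissionsGridControl):

```csharp
// Assumed sketch - enable paging when the grid is configured
protected override void ConfigureGrid()
{
    base.ConfigureGrid();
    Grid.AllowPaging = true;
}
```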
Finally, set the Grid.PageSize property to whatever size you like in the BasePage_Load method (or leave it alone and let the default take over).

Voilà! All permissions grids in the Control Panel are now paged.


Tuesday, December 12, 2006

Exchange Distribution Lists and WebDAV

Download Sample Code

I encountered an interesting side project recently where I wanted to retrieve information about distribution lists on a local Exchange Server from a web-based application.

I didn't want to use CDO/MAPI directly for various reasons, so I looked into the WebDAV API available for querying Exchange via web requests. I found some helpful information here, and was quickly able to produce a class to consume distribution list data for use in my app.

The basic query for retrieving all distribution lists from a particular folder (where folder is the RootURI below) is:

<?xml version="1.0"?>

<D:searchrequest xmlns:D="DAV:">

<D:sql>SELECT "urn:schemas:contacts:cn", "urn:schemas:contacts:members", "DAV:displayname" FROM RootURI WHERE "DAV:ishidden" = false AND "DAV:isfolder" = false AND "DAV:contentclass" = 'urn:content-classes:group'</D:sql>

</D:searchrequest>


DAV content-classes are found via mappings from PR_MESSAGE_CLASS and PR_CONTAINER_CLASS properties. The schema references to specific properties in Exchange are derived from the Exchange Server schema. In the above example, we want the property cn from the urn:schemas:contacts namespace. The uri we query should contain contact items whose content-class is of type group.

Figuring out where all these properties live, and what you can expect back, can be challenging. You can start with the properties reference by namespace on MSDN. These will tell you the content-class. Or, as I did here, you can use a tool like IndependentDav to get a look at what is returned from the server. I don't think the tool is necessary once you have a handle on where things come from, but it can be difficult to track some things down because mappings between Exchange Store schema content-classes and DAV content-classes are not 1:1, and the properties reference gives the content-class of the Exchange Store schema. Finally, you can query the store at a uri without limiting the return values, and look through the query results manually.

Getting on with the distribution lists then, I created a helper class to make the requests and manage the two-part process of first constructing a DistributionList class, then filling the members of the list with a separate query. I pass the streamed response from the query directly to the constructor of the DistributionList class. The code is long and cumbersome, so it's better to just look at it in the sample.
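For reference, a stripped-down sketch of issuing the SEARCH request with HttpWebRequest (the folder URI and credential handling are placeholders; the real helper class in the sample does more):

```csharp
using System;
using System.IO;
using System.Net;
using System.Text;

class DavSearchSketch
{
    // Sends the SEARCH query above against the given folder and
    // returns the raw multistatus XML response.
    static string QueryDistributionLists(string rootUri)
    {
        string body =
            "<?xml version=\"1.0\"?>" +
            "<D:searchrequest xmlns:D=\"DAV:\">" +
            "<D:sql>SELECT \"urn:schemas:contacts:cn\", " +
            "\"urn:schemas:contacts:members\", \"DAV:displayname\" " +
            "FROM \"" + rootUri + "\" " +
            "WHERE \"DAV:ishidden\" = false AND \"DAV:isfolder\" = false " +
            "AND \"DAV:contentclass\" = 'urn:content-classes:group'" +
            "</D:sql></D:searchrequest>";

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(rootUri);
        request.Method = "SEARCH";                            // WebDAV verb
        request.ContentType = "text/xml";
        request.Credentials = CredentialCache.DefaultCredentials;

        byte[] bytes = Encoding.UTF8.GetBytes(body);
        request.ContentLength = bytes.Length;
        using (Stream s = request.GetRequestStream())
            s.Write(bytes, 0, bytes.Length);

        using (WebResponse response = request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
            return reader.ReadToEnd();
    }
}
```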

The list members are actually not found via WebDAV directly, but via a query to the OWA server, as described here. I created a ListMemberList class to hold ListMember classes. The constructor on the list parses each member in the response, and the result is set as a property on the DistributionList class.

When it's all said and done, I can use the management class to get the DLs from the configured server with very little overhead, and - best of all - no interop.


Monday, November 20, 2006

Enable Domain Account As App Pool Identity

While the answers are around already, the simplified listing of what to do when you create a user in your domain to run an IIS 6.0 application pool on Windows Server 2003 is:

  1. Add the user to the IIS_WPG group on the web application server.
  2. Run aspnet_regiis with the -ga flag and the domain-qualified user name (i.e. MyDomain\TheUser). This gives the app pool user the ability to read and write the appropriate files, etc. I thought this would be accomplished by adding the user to the group above, but it didn't work until I had completed both of these steps.
  3. Run setspn on the domain controller (you'll need to have the toolkit installed) with the protocol name 'HTTP' and application server name (i.e. HTTP/MyAppServer) and a second argument for the domain-qualified username (i.e. MyDomain\TheUser). All together that is 'setspn HTTP/MyAppServer MyDomain\TheUser'. It is also recommended that you run the same command again, but with the fully qualified name of the app server.
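Collected as commands, steps 2 and 3 look roughly like this (server and account names are placeholders; some versions of setspn require the -A switch to add an SPN rather than list them):

```
rem Step 2 - grant the account the ASP.NET file permissions
aspnet_regiis -ga MyDomain\TheUser

rem Step 3 - register the SPN, short name and fully qualified name
setspn -A HTTP/MyAppServer MyDomain\TheUser
setspn -A HTTP/MyAppServer.MyDomain.local MyDomain\TheUser
```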

Some security-minded folks will tell you that you shouldn't actually do this at all, but if you must, this set of instructions will let you use a user with minimum permissions. Just make sure this user is not used for anything else - limiting the potential for the user to accumulate a growing permission set over time that no one can remember.


Thursday, September 07, 2006

Using Sandcastle

You Mean I Actually Have to Use Sandcastle? Yes, with the death of NDoc, if you want to generate a chm based on triple-slash comments then you will probably need the help of Sandcastle.

It was nice of MS to release this thing - really. It would have been nicer if they would have sent a team to help NDoc, but let's not dwell on what could have been.

Using the August CTP I quickly found myself in hell. It was one thing to follow the directions for the sample. It was a whole other thing to actually build docs for my own projects. I created a bit of the hell for myself by trying to build docs in the directory structure of the project for which the docs are meant to be a part. I am sure I am not alone in this desire - it makes source control and packaging so much simpler.

I'll skip to the end and let you know that the results are quite nice - I am glad to have the docs. Everyone acknowledges the problems, and in fact there are some great helper articles referenced in the comments - mostly here. Heck, one guy even created a nice batch file and configuration doc builder. Unfortunately, it doesn't actually create a usable batch file - at least not for me. It was faster to fix the batch file I was working on than to debug his tool. That tool would be a great thing to spend some free cycles on and fix - but then look what happened to NDoc, so we can be sure that will never happen.

Another guy created a serious batch file here - again I couldn't make it work for me.

There are a few morals to this story:

  1. Batch files and esoteric config files without good docs are always hell to work with. Be prepared to dig in.
  2. Just hard code all the folder/file references in the sandcastle.config to full paths. You can try to be all fancy, but this is just a CTP and there are more robust versions to come. Besides, it's just install locations and that isn't too bad (in fact, the config doc creator tool mentioned above actually does a nice job on this).
  3. Create all the content folders (html, scripts, art, styles) directly below the batch file so BuildAssembler doesn't get confused.
  4. The much-discussed batch line from the examples where the results of one transform are piped into another XslTransform statement is total BS - it never worked until I split the line into a second statement that takes the results of the first. Not sure why someone felt they needed the pipe to begin with.
  5. Don't try to name any help doc transform results anything other than test.[hhc,hhp,hhk]. I don't know anything about the hhc API, but it sure doesn't like to work with anything other than test. Just rename the output, test.chm, to something else at the end of your batch file.

Monday, August 28, 2006

Community Server Customization - Expanded Member Search Part 2

In part 1 of this subject I explained the basics of how to change the skins and controls to allow a custom member profile field to be searched from the user/Members.aspx page. In fact, according to CS' Daily News, perhaps I said too much about various files and controls without giving an overview of what I was trying to accomplish. To summarize the intent:

  1. Additional fields have been added to the user profile in the database. This required a change to the view cs_vw_Users_FullUser in order to expose the property. My example is silly, but sufficient - Eye Color is now a profile property.
  2. We want to search by Eye Color as a distinct user property on the Members list page. We modify the search control on this page to include a drop down with various eye colors on which to search.
  3. When the search control receives the request, it must pass the selected eye color in a query string, which will actually drive the database query values. In order to pass this value along to the data provider, we added the property "EyeColor" to the UserQuery object.

With this level of change there are a number of classes which must be modified. I suggest you try very hard to work with custom versions of the classes in order to avoid upgrade problems and a later inability to find the custom code you wrote. In order to make this customization I have created the following classes in my custom assembly:

  • UserSearch - extends CS.Controls.UserSearch and implements OnInit, SearchButton_Click, GetUsersAndBindControl, and AttachChildControls mainly as copies of the original. I only call the base in OnInit after initializing a local copy of _isAdmin.
  • ExtendedUserQuery - extends CS.UserQuery and adds only the single property value I wish to store.
  • CustomCommonSqlDataProvider - extends CS.Data.SqlCommonDataProvider and overrides only the GetUsers method, along with a constructor that simply passes its parameter values to the base. Don't forget this will require a provider reference change in communityserver.config under the providers node where the name = "CommonDataProvider".

One caveat here where I broke my "customize it" rule - you would need to create a custom ForumMembersView as well in order to call the GetUsersAndBindControl on your UserSearch control because this method is not virtual. I decided to live dangerously and changed the base class to make this method virtual - but you could take the high road.

Now, I described what I did in UserSearch last time. The changes required in the common data provider will fill out the rest of the story. If the query passed into this method is of type ExtendedUserQuery, then we will need to generate our own member query clause (if not, we can just pass the work on to the base). The member query clause is a generated sql clause executed by the stored procedure "cs_users_Get". This is a good thing because it keeps us from modifying the stored procedure. In the provided implementation this clause is generated by the static method BuildMemberQuery on the SqlGenerator class. We'll need to copy this entire class into our custom code and change the query parameter to accept an ExtendedUserQuery (you can't extend the class or get at its oddly protected static helpers).

We want to add our modification to the where clause, so look for the comment "// ORDER BY CLAUSE" in this method as it indicates the end of the where clause creation section where we will insert our new predicate. The where clause will have at least one predicate already, so our clause will start with " AND". The profile properties are exposed in the view mentioned above, cs_vw_Users_FullUser which has been given the alias "P". Thus, our predicate format is:

" AND P.EyeColor = '{0}'"

Finally then our new sql generator code is:

if(query.EyeColor != null)
    sb.AppendFormat(" AND P.EyeColor = '{0}'",query.EyeColor);

Once you have done this, the rest of the code in the GetUsers method remains unchanged. You should also copy the static method cs_PopulateUserFromIDataReader into your provider in order to put the custom values into the created User object before passing it back - though this isn't necessary just to get the user search results.

I said previously that it would be even better to make a more generic property search mechanism. In order to do that we change the ExtendedUserQuery to have a StringDictionary rather than an individual property. For the query string you can either ignore it and pass values via another mechanism such as user Session, or add a delimited set of strings to a Property Name value such as ppn=EyeColor.Height.Weight and a Property Value value such as ppv=Blue.74in.175. Then your sql generation code would just need to iterate the values and add predicates as necessary.
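A sketch of what that generic predicate generation might look like, assuming the ExtendedUserQuery exposes the dictionary as a Properties member (all names here are illustrative, not CS APIs):

```csharp
// Hypothetical sketch - append one WHERE predicate per custom property.
// "P" is the alias of cs_vw_Users_FullUser in the generated sql, and
// sb is the StringBuilder already holding the member query clause.
ExtendedUserQuery extQuery = query as ExtendedUserQuery;
if (extQuery != null)
{
    foreach (string propertyName in extQuery.Properties.Keys)
    {
        sb.AppendFormat(" AND P.{0} = '{1}'",
            propertyName,
            extQuery.Properties[propertyName].Replace("'", "''")); // naive escaping
    }
}
```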

Even better would be an enhancement to CS that used a more flexible means of manipulating the query. I am biased, but I really like the WhereConstraint idea in Hydrus' DataSetToolkit technology.


Tuesday, August 22, 2006

Community Server Customization - Expanded Member Search Part 1

You can search several user parameters in Community Server in the default implementation of the UserSearch control, but if you have added some custom attributes or fields for member profiles, then you'll need to customize the search. In this case, I am not just talking about customizing the UI to support the params, but the search functionality itself.

The default search functionality is carried out in the UserSearch control, which is displayed on the ForumMembersView control from the Discussions namespace (Note: the UserSearch control shares some functional similarity to the CSSearch (SearchBarrel) implementation, but is distinct).  When you click on the Search button the page posts to itself, turning your query into a set of query string parameters. The existing parameters are as follows:

  • Search=1 - perform search
  • t - search text
  • st - search type to perform (admins only) [search by username or by email; values of all/username/email]
  • su={0} - search by username (boolean 1 or 0)
  • se={0} - search by email (boolean 1 or 0)
  • s={0} - include accounts with active or inactive status
  • r - role to limit search
  • jc - join date comparer (gt/lt, etc.)
  • jd - join date
  • pc - post date comparer
  • pd - post date
  • sb - sort column
  • so - sort order (ASC/DESC)

Specifically, the ForumMembersView control populates its user list during data binding by calling GetUsersAndBindControl on the UserSearch control referenced by the view.  Within the UserSearch control, searches are turned into query strings, which are in turn converted into a UserQuery object that is eventually passed to the static Users.GetUsers function to populate the list of users found by the query.

In order to tweak this functionality we'll need to customize the SearchButton_Click event handler and GetUsersAndBindControl functions on the UserSearch control. These methods aren't available to be overridden, so we'll need to copy them into our new class that extends the provided class, and we'll change the skin/view to include our control instead of the CS provided control. Unfortunately, we can't allow the original class to receive the button click event because it redirects the response, so we'll need to override and copy the contents of AttachChildControls (where the event is wired up to the search button) as well. This is a common problem in implementing sub-classes of CS controls that could be solved by setting the event handlers as protected virtual members.

In any case, once we have our custom UserSearch control (not yet customized) we need to add field selectors to the UI of the Skin-UserSearch.ascx skin, and update the View-ForumMembers.ascx skin to reference our UserSearch control instead of CS'.

Having added the search selector fields, we need to update the SearchButton_Click event handler to add our new fields to the query string. To do this, choose a moderately descriptive shorthand for your item that isn't already in the list above; e.g. 'ec' for an "Eye Color" attribute. Likewise, you'll need to modify the GetUsersAndBindControl method to recognize your new attribute in the query string and add it to the UserQuery object. You will need to subclass UserQuery and add your custom properties directly to the class.
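A minimal subclass along those lines might look like this (names are illustrative; UserQuery comes from the CommunityServer assembly):

```csharp
// Sketch - extend the CS UserQuery with one custom property
public class ExtendedUserQuery : UserQuery
{
    private string eyeColor;

    public string EyeColor
    {
        get { return eyeColor; }
        set { eyeColor = value; }
    }
}
```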

Your query-string writing code in SearchButton_Click might look like this:

if(this.eyeColorSelector != null && this.eyeColorSelector.SelectedIndex != 0)
    url.AppendFormat("&ec={0}", HttpUtility.UrlEncode(this.eyeColorSelector.SelectedValue));

And, your retrieval code in GetUsersAndBindControl might look like this:

string eyeColor = context.QueryString["ec"];
if (eyeColor != null)
    query.EyeColor = eyeColor;

Next time let's look at the details of updating the data provider to use the new query value, and a more flexible way to handle multiple custom attributes without adding every one to the UserQuery.


MS Betas I Like

Looks like I can finally get back to Windows Desktop Search with the new v.3 beta that includes x64 support.

I am also really enjoying the new blog writer application, and am using it right now...


Monday, July 31, 2006

Overview of Customizing Community Server

I have been learning a lot about customizing CommunityServer lately and it occurs to me that it would have been easier to do more faster if I had had an overview of the way CS is built. To that end, I will try to codify what I have observed.

Structurally, CS can be a little confusing because it appears to be a fully implemented 2.0 website application. The fact is that it is almost entirely 1.1, but was built by people who were very knowledgeable about the changes coming in 2.0. Whether it's the use of master pages and skins, or the appearance of a global IsNullOrEmpty string-checking method, the app seems to be 2.0. It isn't; even the 2.0 version still uses the home-grown skins/master pages and many other 2.0-seeming features.

In terms of customizing the way your website looks or behaves, you have to start with the aspx pages found in the various folders of the website application structure. These pages will point you to the various skins or controls in use. Yet they will never (except in the control panel) point you to the code or visual features of the website. All of the real implementation occurs in the master pages, skins, and views, or in the control code in one of the included assemblies.

Each aspx page will identify some master page (not identifying one explicitly means it will use master.ascx) in its CS:MPContainer control declaration, and potentially one or more control declarations in CS:MPContent controls. The master pages are generally slim, and control the overall layout of a page. The actual master pages are simply custom controls (ascx files) located in the /Themes/[current theme]/Masters folder of the website application. The most common base implementation of a master page is of 3 sections called 'lcr' (left side content), 'bcr' (body content), and 'rcr' (right side content). You can either define controls for every descendent page in these sections in the master page, or override master page content by declaring CS:MPContent controls with these ids on your aspx page. Skins should not implement these content controls.

For example, if you wanted to understand how each thing is showing up on default.aspx, you would open default.aspx and find the 'ThemeMasterFile' attribute on the page-level CS:MPContainer control. If you navigate to the HomeMaster.ascx file you'll see that the only things being added here are some style includes in a "HeaderRegion", and the 3 content sections. In order to really understand where the content is coming from, you have to look at the controls declared within the various content sections.

A control declaration on a page or skin will carry a custom prefix defined on the page and the name of the control - this is standard customization. It is important to note such control names because first - along with the prefix declaration on the page - the name points you to the control code in the proper assembly where you can see how it is implemented. Second, the name is almost always identical to the name of the skin that is used to display the control, with an added prefix of 'Skin-'. By default, all templated controls in CS will load a skin named "Skin-" + [the name of the control] + ".ascx".

While the master pages generally declare overall sections in which the various skins will be displayed, the aspx pages, skins, and views define the HTML and any additional sub-controls along with client or server side script to control display. I believe it is preferable to leave all html out of the aspx pages, and rely on the skins for this implementation. As a simple example, if we look this time at 'login.aspx' we'll see that the master file is not declared (so it is master.ascx) and all content is controlled by the "CS:Login" control. That means this control is declared in the CommunityServer.Controls.Login class, and its layout will be found in /Themes/default/Skins/Skin-Login.ascx. Sure enough, the layout of the login page is on this skin. The functionality (application logic) of this page is found in the class file.

This brings us to an important lesson about how all this comes together in the application: the class files control behavior by "wiring" properly named controls to certain events or operations on the back end. For instance, the DefaultButtonTextBox control for the password on Skin-Login.ascx must be named 'password' in order for the control logic to work properly. This magic takes place in the "AttachChildControls" method of each control which manipulates its members on the back end.

Using this basic knowledge we can then start to change how our website looks and behaves. Each templated or skinned control (those with skins) has a property "SkinName" which it inherits and will consult as the proper skin to apply if it has been supplied. Recall that if this property is null, then the skin named "Skin-[control name]" will be applied. Note that I have run into controls which ignore this property, but it is not the norm. As such, if we want to change how login.aspx looks, we should create a new skin, and provide the name as a "SkinName" attribute on the control declaration on login.aspx. I think you should copy and rename the skins rather than alter them because it will save you headaches later if you try to upgrade CS, and it clearly shows where you have made changes. When you fill in the "SkinName" attribute you use the full file name of the skin you created. This name may need to include the sub folder when you are dealing with blogs and galleries (I don't really have the nuances of these exceptions mastered, but generally the controls from these assemblies automatically determine the folder which contains their skins, so try that first. Aspx pages in the Blog subfolder are really an exception to most things I have said so far anyhow, and I'll cover that later).
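For example, the declaration on login.aspx might end up looking something like this (the tag prefix and id are assumed to match what the page already registers, and the skin file name is the copy you created):

```
<CS:Login runat="server" id="login" SkinName="Skin-MyCustomLogin.ascx" />
```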

If you want to change the way login.aspx behaves, you'll need to modify the Login class. Again, rather than modifying the class provided with CS, you should create a new assembly for your controls and extend the Login class via inheritance. You can change the name to match your modified skin if you have one, or leave the name the same. The only name collision issue I had working in VS 2003 was with the namespace including the CommunityServer.Controls prefix (so don't use MyCompany.CommunityServer.Controls, try CS.Controls) - the controls themselves are all fully prefixed in the aspx and ascx pages so there is no confusion there. I have found that the control classes aren't all designed especially well for extension, so I often have to copy base class methods in order to modify behavior, but I am trying to guarantee that an upgrade will still work, and that I know where my code begins and CS-provided code stops. Once you have your class built and the assembly included in your web project, you can change or add the TagPrefix declaration on your page and repoint the control declaration to your new custom control.

Blog Skin Exception

Blog aspx pages generally declare which view they are using, rather than a skin. These "Views" are found in /Themes/Blogs/[current blog theme]/Views and have the name "View-[view name].ascx". Views contain layout templates with various blog controls in them. Each control then has a skin named the same way I mentioned above for other controls. However, these controls have their skins in the /Themes/Blogs/[blog theme]/Skins folder, rather than with the other skins.

Thursday, June 29, 2006

Community Server Customization: Title Bar Links

I have really enjoyed learning how to use and customize Community Server. I think it is a great platform for developing community based websites. Yet, it can be difficult to work with because customizations aren't documented well. To that end, I will try to share things I have learned while customizing CS. However, my disclaimer is: I just started this type of CS customization, and while I have searched for the best way to do things, I might have missed a better way.

The basic problem I am writing about is that I have a page which is not one of the base "areas" on install (home/blogs/forums/pictures/files, etc.) but which I want to link to from the title bar. I not only want a link, but that link should be highlighted when a user is located on the custom page, or in the custom area. The link should also be relative from any location within the site, no matter whether you are hosted at the root of a domain, or under a virtual directory (the kind of reference you get with the tilde in an asp:hyperlink navigateUrl in 2.0).

Link magic occurs in the SiteUrls.config configuration file. To accomplish our goal, we'll only need to edit the config file - soon I'll write an article on reading the config from custom code. There's a whole lot you can do in this file, but everything is accomplished through three types of entries:

location - A location has an attribute for 'path' which is a root-relative path to a particular location in the site. For instance, the weblogs location path is "/blogs/" because this is where all blog-related files are located. There is also a name attribute which will identify a particular location for later reference. Finally, you can set a boolean attribute "exclude" to indicate whether or not this location is excluded from url re-writing (url re-writing is a process by which paths are canonicalized or formatted according to patterns in siteurls - possibly other stuff. You can even set up your own rewriter via the provider pattern supported for most of this type of low-level stuff in CS).

url - Indicates a path within a given location. This works out such that a page name can be given as the path, but will always be directed to that page name underneath a folder (if one exists) for a named location. To that end, a url element contains attributes for a path, and a location. Each url also has a name. A url path may also contain string format token(s), and a related 'pattern' attribute. If a pattern exists, the url rewriter will format the path accordingly.

link - the config file itself shows what these are, but the thing to understand for our current problem is that we want to use a resourceUrl, not a navigateUrl which should be reserved for external URLs only. A resourceUrl points to a named url element, and itself has a unique name by which it will be referenced in skins and master files (more on that later). The other attributes are documented well enough in the config doc.

To put it all together: First, figure out if one of the existing locations will be relevant for your link. If you have a page under the root of the site, "common" will do. If you have created a new sub-folder, you should create a location for that folder. Second, create a url element that corresponds to the proper page you want to link to within your chosen location. Finally, create the link element that references your resourceUrl (url element), and either provide a 'text' attribute, or point to a named resource string (resource strings are in the resources.xml file in the proper language sub-folder under languages) to indicate how the link should be labelled.
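Putting the three entries together for a hypothetical page, the additions to SiteUrls.config might look like this (names and exact attribute spellings are illustrative; check them against the entries already in your config file):

```xml
<!-- Assumed sketch based on the description above -->
<location name="mypages" path="/mypages/" exclude="true" />

<url name="myNewPage" location="mypages" path="MyNewPage.aspx" />

<link name="MyNewLink" resourceUrl="myNewPage" text="My New Page" />
```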

Once this has been done, your link will be added to the tab-strip in the title bar. In order for the correct tab to be highlighted when you are on your page, you need to include a "SelectedNavigation" control on the master or skin for that page with the 'Selected' attribute value set equal to the name of the relevant link element. If one page will serve more than one title-bar link, you can programmatically adjust the "Selected" property of the global SelectedNavigation control in the Page_Init event:

void Page_Init(object sender, EventArgs e)
{
    if (Request.QueryString["GroupID"] != null && Request.QueryString["GroupID"] == "8")
        Context.Items["SelectedNavigation"] = "MyNewLink";
}

Thursday, June 08, 2006

Mini-update: Daemon on x64

Daemon Tools is now available for the 64-bit platform.

Tuesday, June 06, 2006

Online virus-scan support for x64

I am happy to discover that the beta version of the Safety Center works wonderfully on my x64 machine. I have had a difficult time finding reliable anti-virus solutions for x64. All other online versions have failed at one level or another.

If you haven't checked out the tool, check it out here.

Wednesday, May 31, 2006

Update: Custom Membership Provider

Yep, it was just that easy. It took me a total of 2 hours to write and test a custom provider that would solve the hack problem I wrote about last time. It would take longer if you wanted to implement a full provider, but I only needed CreateUser and DeleteUser for this issue.

The documentation and examples for creating a membership provider are good, so there's no need to cover the basics. I did run into a couple of gotchas.

First, I really didn't need total customization so I simply extended the Framework's SqlMembershipProvider. If you are using SqlServer, this works fine. The strange thing here is that the connection string, _sqlConnectionString in the provider, is private. That being the case, I had to override Initialize and get the value for myself before passing the config along to the base class. Not difficult, just a silly repeat of existing code in the base.
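The override boils down to repeating the base class's connection-string lookup before calling through; a sketch (SqlMembershipProvider's actual validation is more thorough, and the class name here is illustrative):

```csharp
using System.Collections.Specialized;
using System.Configuration;
using System.Configuration.Provider;
using System.Web.Security;

public class CustomMembershipProvider : SqlMembershipProvider
{
    // Kept locally because the base class's _sqlConnectionString is private.
    private string sqlConnectionString;

    public override void Initialize(string name, NameValueCollection config)
    {
        // Same lookup the base performs; we just repeat it for ourselves.
        ConnectionStringSettings settings =
            ConfigurationManager.ConnectionStrings[config["connectionStringName"]];
        if (settings == null || settings.ConnectionString.Trim().Length == 0)
            throw new ProviderException("Connection string cannot be blank.");
        sqlConnectionString = settings.ConnectionString;

        base.Initialize(name, config);
    }
}
```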

My only other issue was with providing a good message to users when my custom provider's Create or Delete operations threw an exception. The basic model on Create is to trap significant issues and log the specific problem while returning null as the MembershipUser and setting the MembershipCreateStatus to some error type. In this case, the only applicable status is 'ProviderError', which causes a generic message to be displayed to the user. Logging isn't really helpful in this case, so I wrote the real message to a Session variable on the current HttpContext. In OnCreateError, I set the error message to this session value.

Otherwise, all I really did was move my code from the OnUserCreated method into the Create method of my new provider, wire this provider up instead of the other one, and voilà!

Hats off to MS on this strategy implementation.

Tuesday, May 30, 2006

Membership Forms Auth Integration with Community Server

I recently set out to solve the following problem:

I have a community site running Community Server as one application under a sub-domain URL, acting as part of the site (in a different app) running on the base domain (and www sub-domain). The community site is running Community Server 2.0 and is set up to use forms authentication for Membership, along with Roles and Profiles. CS uses the new provider model for 2.0, utilizing custom CS providers. The primary site is also 2.0 and uses the built-in .NET Framework SQL providers. I want to be able to log in or create a user on either site, and have that login and profile carry over to the other site.

The basic information you need to make this happen in .net forms authentication is straightforward, and is covered here. Make those changes first, then there are just a few adjustments to make to the base site and two tweaks to Community Server to complete the fix.

First, Community Server. The CS providers are configured in the web.config of the CS site. In order to have CS draw from the same Membership data as your base site, the providers must share the applicationName attribute. You can change this in the web.config section for each provider. However, this isn't likely to change anything in CS by itself: you also need to go into the CS database and change the ApplicationName column value in the cs_SiteSettings table. I couldn't find this as a configurable setting anywhere else in CS. The only other change to make to CS is to set the cookie domain to the same domain as the other site. Obviously this only works if our problem space is dealing in different sub-domains. To do this in the CS control panel, go to Administration > Membership > Cookie and Anonymous Settings and set the Cookie Domain value.
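For reference, the relevant pieces of the base site's web.config look roughly like this (names and domain are placeholders; the key is that applicationName and the cookie domain match the CS side):

```xml
<authentication mode="Forms">
  <forms name=".sharedAuth" domain="example.com" />
</authentication>

<membership defaultProvider="SqlProvider">
  <providers>
    <add name="SqlProvider"
         type="System.Web.Security.SqlMembershipProvider"
         connectionStringName="SqlServices"
         applicationName="SharedApp" />
  </providers>
</membership>
```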

Once you have made these changes to CS, you'll need to restart the application pool.

At this point the two sites should allow logins from one site to carry over to the other (if you have cookies from earlier attempts, you'll need to delete them). The only problem is that if you allow new logins to be created at both sites, you'll need to align new member creation on the base site with member creation on the CS site. One way to deal with this issue is to direct member creation over to your CS site. I wanted my primary site to be self-contained, so I decided to create CS-capable logins in the primary site.

First, the CS profile configuration has numerous attributes defined. You should add these to your profile configuration section of your main site's web.config. Second, CS has its own user and profile tables in the database that store additional community data for the standard Membership user created by the provider. The "right" way to do this may well be to create a custom provider. Like any good developer I have a lazy streak that set me looking for a quicker way to accomplish this. I did try to just use the CS providers in my main site. Unfortunately, these providers depend on way too many configuration settings and file locations based on the CS site for this to be worthwhile. What I settled on for now is to add the additional rows to the database in a handler for the CreatedUser event of my CreateUserWizard control. This does require a hack that is probably better dealt with by creating a custom provider, but I'll do that when I have more time.

For now there are two CS tables, cs_Users and cs_UserProfile, which need new rows based on the created Membership user. Most of the columns have default values, but you'll need to supply the Membership-based ID to both tables, and the new CS-based UserID to the UserProfile table (as a foreign key), along with the proper SettingsID. You also need to add this new user to the proper roles associated with CS. By default new users are in the 'Everyone' and 'Registered Users' roles; a simple call to Roles.AddUserToRoles will take care of that.
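Roughly, the CreatedUser handler looks like this (the insert statements are elided; column names and the SettingsID come from your CS database):

```csharp
protected void CreateUserWizard1_CreatedUser(object sender, EventArgs e)
{
    MembershipUser user = Membership.GetUser(CreateUserWizard1.UserName);

    using (SqlConnection conn = new SqlConnection(connectionString))
    {
        conn.Open();
        using (SqlTransaction tx = conn.BeginTransaction())
        {
            // Insert the cs_Users row keyed to the Membership user's ID,
            // then the cs_UserProfile row keyed to the new CS UserID (FK)
            // and the proper SettingsID. Unlisted columns take defaults.
            // ... SqlCommand inserts elided ...
            tx.Commit();
        }
    }

    // CS expects new members in these roles by default.
    Roles.AddUserToRoles(user.UserName, new string[] { "Everyone", "Registered Users" });
}
```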

The hack centers around what to do if the CS database updates fail. The problem is that the Membership operation has already completed successfully (which it must for you to have the FK data for cs_Users), and you are in an event that can no longer cancel the operation. Thus, you need to deal with rolling back the new user yourself and editing the CreateUserWizard messages and display format which the website user sees. In theory, a custom membership provider would have access to the create operations in time to deal with providing an error to the control. In this case, I just run the updates in a transaction and delete the new membership user if there is a failure during the cs table inserts. Finally, you have to set custom error text and eliminate the success message from the control. Good enough to get going. I'll write more when I create the custom provider.

PS: A good article on how the dynamic profile properties are provided to intellisense is here.

Tuesday, May 23, 2006

Cool Tool: Hydrus DataSet Toolkit

I am very excited about this great new tool from Hydrus Software that allows you to work with typed DataSet objects without DataAdapters or writing your own sql statements. Basically, it infers database schema from your typed dataset and uses classes called WhereConstraints that limit the results in various ways. It's even fairly easy to write your own constraints if the included constraints don't do the trick.

The DataSet Toolkit saves all the time you would normally spend writing specific queries, or maintaining the code you write for the adapters, etc. I can think of several projects where it was a daily job to update DataAdapter statement code every time something changed about the database. With this tool, you can ignore that stuff. Even if you are used to working with the .NET Framework CommandBuilder objects, you still have to write select statements; the DataSet Toolkit removes this job too. Pretty cool.

Check it out.

Thursday, May 18, 2006

IFrame Gotcha not caught by VS05 parser

The VS'05 script parser is pretty good at catching tag violations when you work in the HTML view of an aspx page. However, it did not catch the need for a closing </iframe> tag for the iframe element. This is a requirement for the iframe tag in both IE and Mozilla. I didn't know that previously, and was getting the strangest results when I would inspect the DOM and find that everything below my iframe tag was consumed by the enclosing div.

Monday, May 08, 2006

Software development is like construction

I am going to depart from my usual facts-only entries because I thought this was funny. I was told again recently that software development is like construction, and that the software development profession should be able to "grow up" and create predictable bug-free software. This was the gist of my reply:

I have heard the comparison between software development and construction a number of times. It's an alluring analogy for those that don't really understand software development. Maybe you've heard it used by a manager looking to explain why getting software done on time, with all the features done correctly shouldn't be so hard.

Software development is like building a bridge. You have a need, for which you make a plan, and then set about lining up the materials and manpower necessary to put the plan into action. Construction projects experience setbacks, but through proper project management (the assignment of new resources, overtime hours, etc.) the project gets done - and you have a bridge every time.

Yes, software development is just like that. It's just like building a bridge where 4 months into the project the city engineer declares that no bridge shall be fewer than 200 feet, and you were building a 185 ft. bridge. And, after increasing the size of the bridge to 200 feet - resetting some of the primary support posts, adding a few people to the project, and purchasing new materials - you realize that good old Portland cement won't work for this bridge. You look for the proper materials in the marketplace, and find a few companies working on some experimental materials. They aren't selling anything yet, but are happy to let you have their recipes. So, you get your guys working on this new compound while you finish the foundation and supports. After 2 months, they decide they have it and you go back to work pouring the street. About this time, the mayor decides that no cars will be allowed in the city, and your bridge is no longer necessary.

Software development is just like construction. Another person once told me that we can call software developers 'engineers' when they can be sued for doing their job incorrectly. I say, you can sue me for the software when you can tell me exactly what you want.

Of course, there are lots of ways to improve the process of software development and in the last few years I think a lot of progress has been made. However, software is never a bridge. That's hardware. People like software because it becomes whatever they want. I think we just have to figure out how to help people better explain what they need.

Wednesday, April 12, 2006

Doing Web Service Exceptions Right

These things are covered in various articles as referenced below, but I would like to synthesize the most important points. First, a summary of the issue. When your code in a web service method throws an exception, the framework wraps it in a SoapException object, which maps to the "Fault" node permitted by the SOAP recommendation. If you throw (or allow to go unhandled) an exception of any type other than SoapException, the SoapException generated by the framework will only contain the text of the original exception in its message. This is ugly and hard to work with. The SOAP recommendation is that you provide fault details within the fault. In order to do this in a web service, you must throw a SoapException where you have first set the detail node via the Detail property on the SoapException object.

1. Every web service method should catch System.Exception and wrap the exception in a SoapException, adding the necessary details to the Detail property of the exception. Please note that the xml node provided to this property must have the root name "detail". It is recommended that you create the root element utilizing the SoapException.DetailElementName.Name and SoapException.DetailElementName.Namespace values.

2. You must also provide the detail node as a node from a document (such as myXmlDoc.DocumentElement); you cannot just pass the XmlDocument itself.
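Building a conforming detail node might look like this (the child element name is my own):

```csharp
XmlDocument doc = new XmlDocument();

// The root element name/namespace must match the SOAP spec; use the
// framework-provided values rather than hard-coding "detail".
XmlElement detail = doc.CreateElement(
    SoapException.DetailElementName.Name,
    SoapException.DetailElementName.Namespace);

XmlElement info = doc.CreateElement("errorInfo"); // your own content
info.InnerText = "Something specific went wrong";
detail.AppendChild(info);
doc.AppendChild(detail);

// Pass 'detail' (a node from a document), not the XmlDocument itself.
```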

3. The InnerException property of your custom SoapException will always be ignored. This is used by the framework for unhandled exceptions of types other than SoapException.

4. To work with SoapException details, it makes sense to have a helper method to wrap other exceptions such that every catch block can simply throw via a call to the helper:

public string SomeMethod()
{
    try
    {
        //do something
        return "done";
    }
    catch(System.Exception excep)
    {
        throw GetSoapException("Failed to do something", excep);
    }
}

private SoapException GetSoapException(string message, System.Exception originalException)
{
    StackTrace trace = new StackTrace(1); //capture the call stack if you want it in the details
    SoapException eSoap = new SoapException(message,
        SoapException.ServerFaultCode, //Could be ClientFaultCode depending on circumstances.
        Context.Request.Url.AbsoluteUri, //the actor
        detail.GetSerializedData()); //detail is some serializable object with xml nodes providing
                                     //exception info. Cut from this sample for clarity.
    return eSoap;
}

5. ServerFaultCode and ClientFaultCode are not strictly enforced, but they indicate the cause of the problem. You should indicate ServerFaultCode if something went wrong in the normal operation of the service. This might be the case if you are wrapping an exception from your catch block. If you are intentionally throwing a fault because the client has sent bad data:

throw GetSoapException("Your input string was null or empty",
new ArgumentNullException("someInputString"));

you would indicate this via the ClientFaultCode code.

6. If you want to have a serializable object which contains error details, you will need to expose this to the client code via customizations to your wsdl document. Alternatively, the client code could have its own version of the object via a separate shared assembly. Whatever makes sense.

7. The client should then have a catch block for SoapException around all web service calls, and some helper method for deserializing the Detail property and taking action based on the contents.
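On the client side, that boils down to something like this (ParseDetail is a hypothetical helper that deserializes the detail node into a FaultInfo object of your own design):

```csharp
try
{
    string result = proxy.SomeMethod();
}
catch (SoapException ex)
{
    // ex.Detail carries the structured fault information from the server.
    FaultInfo info = ParseDetail(ex.Detail);
    // ...decide whether to retry, display info to the user, etc.
}
```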

Links :

Using SOAP Faults

Handling and Throwing Exceptions in XML Web Services

SoapException.Detail Property

Discussion on InnerException

Monday, April 10, 2006

Locating embedded resources

I found it mildly challenging to locate some resources that I had embedded when using the overloaded constructor for ResourceManager that takes the name of the resource file to load. After some noodling around (and reading anything but the docs for what the baseName should be), it turns out that assembly resources are always named in a 'flat' format that includes the assembly's default namespace property and any sub-folder in which the resource file is contained. For example, the resource "ExceptionStrings.resx" in an assembly with the default namespace "TestResources" in the subfolder "Resources" will be:

TestResources.Resources.ExceptionStrings
A good way to figure out what your resource files are called is to perform the following while debugging:

string[] resourceNames = Assembly.GetExecutingAssembly().GetManifestResourceNames();
foreach (string name in resourceNames)
    Debug.WriteLine("ResourceName: " + name);

This will output each resource in your assembly by its full name - which should be the name provided to the ResourceManager ctor (minus the ".resources" file extension).

Monday, March 06, 2006

Identifier Quoting with OracleClient implementation in .net

There are a couple of very annoying programming choices in the OracleClient provided in the .net framework.

1) QuotePrefix and QuoteSuffix on the OracleCommandBuilder return String.Empty if they have not been set by a user of the builder. Yet, if you call QuoteIdentifier(string) you will get back a quoted string, using the default Oracle identifier quote of double quotes. This works the same way as OleDb and Odbc, because those don't have a default, but it breaks with the more intelligent design of the SqlClient, which returns the correct brackets when queried.

2) Oracle has an odd habit of upcasing any identifier which has not been quoted. That being the case, if you have mixed-case identifiers you need to quote them. Unfortunately, the OracleCommandBuilder doesn't allow any way to quote identifiers. According to the documentation, the column and table names are retrieved in a case-sensitive manner, ah how true (retrieved from DbDataReader.GetSchemaTable) - yet they are set in the generated SQL statements as they are received, which Oracle then upcases, making them incorrect. A simple property on the builder such as "QuoteAllIdentifiers" would have solved this problem. As it is, you cannot use mixed-case identifiers with the OracleCommandBuilder. I tried using mappings with an escaped quote on each identifier to no avail.

3) Finally, if you attempt to use the UnquoteIdentifier(string) method on the OracleCommandBuilder, it throws an exception! WTF! Again, this is in contrast to the SqlClient implementation which correctly just returns your string.
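To illustrate the first and third gripes, as I observed them (the mixed-case name is arbitrary):

```csharp
OracleCommandBuilder builder = new OracleCommandBuilder();

// QuotePrefix/QuoteSuffix come back empty until you set them...
string prefix = builder.QuotePrefix;                  // String.Empty

// ...yet QuoteIdentifier happily applies Oracle's default double quotes.
string quoted = builder.QuoteIdentifier("MixedCase"); // "MixedCase" in double quotes

// And UnquoteIdentifier throws instead of just returning your string.
// string plain = builder.UnquoteIdentifier(quoted);  // throws!
```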

Thursday, February 09, 2006

Minor bug in MSBuild on x64 Machine

The reserved property MSBuildExtensionsPath apparently always returns "C:\Program Files\MSBuild" from the command line, no matter where it is actually installed. Oops. On my x64 machine, this path is totally invalid, as all standard programs - including VS05 and related apps - are installed to "C:\Program Files (x86)\". Only 64-bit programs go into the standard "Program Files" path.
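The property typically shows up in Import elements, which is exactly where the bad path bites; the .targets path below is a placeholder:

```xml
<Import Project="$(MSBuildExtensionsPath)\SomeVendor\SomeVendor.targets" />
```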

This does not appear to be true when compiling my modified project file in VS05. Not sure what to make of this discrepancy.

Friday, January 27, 2006

Regex Performance - RegexOptions.Compiled

I have been working on a small project that utilizes regular expressions to do most of the heavy lifting. I have been aware of the Compiled option, but hadn't really experienced any benefit from it before. However, in this project it became clear that certain expressions were really bogging down the engine, causing the overall run to crawl. This was one expression that was slow:

(?<1>^\s*)Foo(?<2>[ ]\w*[ ])(Bar)(?<3>[ ][fF]un[ ])(?<4>\w*)(\s')

Strangely, it seems very like the others I was using that were not slow.

In any case, I was using the static IsMatch from the Regex class, so I switched to creating an instance of Regex with the Compiled option and everything sped back up. Very handy. I haven't done any real analysis on what caused these particular expressions to be slow; I am sure my moderate understanding of regular expressions has produced some inefficient syntax. I would like to look into it further, but it will have to wait...
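The change itself is tiny (input and pattern here are placeholders):

```csharp
// Before: interpreted matching via the static helper.
bool slow = Regex.IsMatch(input, pattern);

// After: build one compiled instance up front and reuse it for the whole run.
Regex re = new Regex(pattern, RegexOptions.Compiled);
bool fast = re.IsMatch(input);
```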

Wednesday, January 18, 2006

XSLT to transform NAnt script to MSBuild

I undertook a side project to write an XSL transform to convert my existing NAnt scripts into MSBuild scripts. The exercise in XSLT was a good refresher for me, and it was a good way to learn MSBuild. Other than that, I have to admit the result may be less than useful.

In any case, I have posted the xsl stylesheets.

The main problems I see in putting these to any great use are: first, you could just run NAnt from MSBuild; second, you need additional stylesheet entries for every task; and third, you would have to write stylesheet entries for every function found, and potentially write an MSBuild task to substitute for each function. Not worth the effort.

Ah well, perhaps the stylesheets make a good tutorial on how to get things done. On the other hand, since I was just getting back to using XSLT for the first time in several years, perhaps you'll find errors or misuse.

Tuesday, January 17, 2006

Differences between NAnt and MSBuild

It's been a while since my last post, but I am back to make the comparison I previously promised between NAnt and MSBuild. Why? Mostly to learn about MSBuild. Partly to help with conversion between NAnt and MSBuild. I should point out right away that there is no explicit need to convert NAnt scripts to MSBuild: you can execute NAnt from within MSBuild as a Task. Finally, I have not achieved guru status with either NAnt or MSBuild, so please rectify any mistakes below with a helpful comment.

That being said, on with the comparison:

1. NAnt has functions. MSBuild really has no such thing. MSBuild is infinitely extensible via Tasks, but there aren't that many tasks as compared with NAnt functions. I think this is a sign of maturity in NAnt. Since MSBuild's programmers had NAnt to look at, we have to wonder why they excluded some things, but we can guess that dev timelines ran out.

2. NAnt has a few fileset types with specialized attributes. All file references in MSBuild are contained in ItemGroup blocks. However, with ItemMetadata providing infinite extensibility to each Item in an ItemGroup, the specialized attributes are not required.

3. In the main, the NAnt schema tends to be attribute-centric, while MSBuild favors elements with text content. The NAnt schema also favors lowercase names, while MSBuild favors an initial capital.

4. NAnt allows fileset groups to be included inside a target. MSBuild Targets may only reference ItemGroups specified as children of the Project element.

5. MSBuild seems to be missing the notion of a basedir. This basedir attribute is very helpful in NAnt. MSBuild only has the project root as the base directory, and can use PATH variables. Again, I think the maturity of NAnt shows in this oversight. Obviously, you can define a Property with an appropriate base directory and prepend it to every path in an ItemGroup. You could probably also make use of ItemMetadata if you were writing a custom Task.

6. Property references in NAnt are denoted by ${}, while MSBuild uses $(). What is this, C# versus VB? You also cannot use '.' characters in your property names in MSBuild, though it is legal in NAnt.

7. MSBuild references Items in an ItemGroup with the syntax @(ItemName). NAnt references filesets by id utilizing a refid attribute without decoration.

8. There are 72 built-in tasks in NAnt and 35 in MSBuild; however, most of the common tasks related to .NET use are in there. Both include an Exec (exec) task for calling out to the system, and both allow you to write your own tasks to extend the functionality of the build. So, if it can be done in code, you can run it from either one.

9. Both allow conditions to be placed on nearly every element to determine if the build should include the enclosing item. However, NAnt uses both an 'if' and an 'unless' approach, where MSBuild simply has 'Condition', which supports '!' (not), 'And', and 'Or'. Here the MSBuild approach seems more streamlined.
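A side-by-side flavor of points 6, 7, and 9 (element, property, and item names are illustrative):

```xml
<!-- NAnt: ${} for properties, refid for filesets, if/unless conditions -->
<copy todir="${build.dir}" if="${property::exists('deploy')}">
  <fileset refid="app.sources" />
</copy>

<!-- MSBuild: $() for properties, @() for items, a single Condition -->
<Target Name="CopyFiles" Condition=" '$(Configuration)' == 'Release' ">
  <Copy SourceFiles="@(AppSources)" DestinationFolder="$(BuildDir)" />
</Target>
```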

10. MSBuild Projects can have multiple default targets, and also have an InitialTargets attribute naming targets to run before all others for preparatory steps. Utilizing the 'depends'/'DependsOnTargets' attributes you can craft your own workflow in either program. Similar to the defaults, you can have multiple targets specified in the DependsOnTargets attribute, which is an interesting enhancement over NAnt.

11. A subtle difference in the csc task is that in NAnt, the warnings to ignore are elements which can each have a condition. In MSBuild, the warnings are a single attribute containing a semicolon-delimited list. In NAnt you could conditionally ignore some warnings on some builds based on criteria; no such thing is possible in MSBuild.