Wednesday, October 03, 2007

Community Server and Extended Profile Properties

Anyone who has done much customization for Community Server will eventually run into the need to add more profile properties for users. These properties are really easy to add. The problem is that you cannot easily search on these fields because they are stored as two text columns in the database. There's an old add-on that offered one way to get at these values. Another would be to do your searching in code, since the values are easy to get at once the User objects have been loaded.

While it isn't efficient enough for regular site-based use, and it certainly isn't pretty, you can query the database for these values using text manipulation functions in SQL Server. Adding first-class columns to the profile table requires a major effort in CS, so this approach can work if you want to add ad-hoc user reports.

<disclaimer>I really can't recommend the following for adding functionality to your CS search page - but if you manage to use it and have some performance measures, please share them.</disclaimer>

I added three functions to my CS database*:

GetUserPropertyValue - takes the UserId, property name, property type (b, s, i), and SettingsID, and returns the named property value.

GetPropertyStartIndex - takes the property name you're searching for, and the list of property names. Used by GetUserPropertyValue.

GetPropertyLength - takes the same params as above, and returns the text length of the property value.

The start index and length are what is stored in the property names string; you then go to the property values string to extract just the value. I broke this operation up into separate functions for readability.
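To make the string math concrete, here's a rough Python sketch of the same parsing those SQL functions perform. The helper name and sample data are mine, and the layout is the standard aspnet-style convention: PropertyNames holds colon-delimited `name:type:startIndex:length:` entries, and PropertyValues holds the string values concatenated together.

```python
def get_property_value(property_names, property_values, name):
    """Mimic the lookup for string ('S') properties.

    PropertyNames is a colon-delimited run of
    name:type:startIndex:length: entries; PropertyValues is the
    concatenation of the string values themselves.
    """
    parts = property_names.split(":")
    # Walk the quadruples: name, type, start index, length.
    for i in range(0, len(parts) - 3, 4):
        if parts[i] == name and parts[i + 1] == "S":
            start, length = int(parts[i + 2]), int(parts[i + 3])
            return property_values[start:start + length]
    return None

# Hypothetical row contents for illustration.
names = "city:S:0:7:MySpecialProperty:S:7:5:"
values = "Seattlehello"
print(get_property_value(names, values, "MySpecialProperty"))  # hello
```

The SQL versions do the same walk with CHARINDEX/SUBSTRING arithmetic, which is why they aren't cheap enough for regular page loads.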

Finally, I created a modified version of cs_vw_Users_FullUser to include values for the fields I was interested in by adding this function call into the select query:

dbo.GetUserPropertyValue(cs_UserID, N'MySpecialProperty', N'S', SettingsID)

This way, I could write a report against all users and search, sort, and filter by custom values.

Download the sql code.

This hack will work for asp.net based profiles too since the methodology used to store these key/value strings in cs_UserProfile (PropertyNames | PropertyValues) is the same as is used for the base aspnet_Profile table for asp.net membership profiles.

*my sql-guru friend John wrote the original manipulation query.

Submit this story to DotNetKicks

Monday, September 24, 2007

Earned Value Early Results

We have our first reporting period under our belts, and I am very excited about the EV results.

In this fictionalized representation of our first project, we see the period % complete reporting by the team leader:

This evaluation of our status leads to the following period values for EV:

You'll notice that while 145 hours were spent instead of the planned 120, we also saw 8.2% EV instead of the planned 6.63%. The question, though, is whether this was enough extra EV to justify the hours spent. Our calculations have the answer:

We're still all green because the EV was indeed enough to overcome the overspending. Notice that the CPI is roughly 1.02 - this means the EV just barely covered the cost overrun for the period (we got $1.02 of earned value for each $1 actually spent). Thus, while we spent over $17K in the period on labor instead of just over $14K, the project is projected to end early and still come in under budget. This also shows that we'll need to watch this team's hours closely over the coming weeks to make sure this level of efficiency is maintained.
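As a sanity check, the period indices can be reproduced from the figures above with a little arithmetic (a sketch: the total project size is inferred from the planned numbers, not taken from the actual plan):

```python
# Period figures from the report above
planned_hours = 120    # hours planned for the period
actual_hours = 145     # hours actually reported
planned_ev = 0.0663    # planned % complete for the period
actual_ev = 0.082      # earned % complete reported

# If 120 planned hours were supposed to earn 6.63%, the whole
# project (BAC, in hours) works out to roughly 1810 hours.
total_hours = planned_hours / planned_ev

# Convert the earned percentage into hours and compare with spend.
ev_hours = actual_ev * total_hours    # ~148 hours of value earned
cpi = ev_hours / actual_hours         # cost performance index
spi = actual_ev / planned_ev          # schedule performance index

print(round(cpi, 2), round(spi, 2))   # 1.02 1.24
```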

Our second team had a more modest hours overage, but still managed a healthy start on EV:

Leading to an even rosier bottom line report:

So far, this seems like a realistic view of real project progress. Obviously, there's still a ways to go. BTW - it didn't take even close to 15 minutes to enter and analyze the data.


Friday, September 21, 2007

Earned Value Management

One session of a conference I attended way back in March was a discussion about managing projects presented by Juval Löwy. One of Juval's excellent claims was that he had used earned value analysis to keep his management time to 15 minutes every Friday afternoon. He explained he used the rest of his time to become a better architect, and when he was finally too bored with the simplistic task of management, he moved on to architecture.

A great story, and certainly a compelling one for someone who needs to manage on less than 40 hours a week. I had been waiting for a chance to use this magical method, so as a recent project was ramping up, I pulled out my notes and hit a few websites to prepare for the 15-minute management job. The scarcity of examples of exactly the type of project analysis I wanted meant this setup took somewhat more than 15 minutes. I found some helpful examples here and here.

If you aren't familiar with EV, the basic idea is that work is tracked in terms of the value it earns for the project, compared against the value the plan says should have been earned by each milestone. Earned Value is about tracking accomplished activities rather than just time spent (I think this works well with my management objectives - I don't care how much time you spent, I care that the work gets done). Each activity has some percentage of value to the overall project. A simple way to get at this in software development is to divide the hours for one task by the total hours estimated for the project. Choosing the proper level of task granularity to track is a matter of balancing ease of reporting with meaningful levels of detail for catching problems before they go out of control.

Once the earned percentage values are assigned to milestone tasks, you can work with either dollar figures or hours - it doesn't matter. The important thing is to calculate how much earned value should be accrued at various key intervals along the project path. For instance, if I am checking in on my developers every Friday night, then I calculate EV earned to each Friday. When my developers report in, they tell me how many hours they worked this week, and what % complete they are for each milestone task.
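That weekly roll-up can be sketched in a few lines (the task names, hours, and percent-complete reports here are made up for illustration):

```python
# Hypothetical milestone tasks with their estimated hours.
tasks = {"login": 40, "reporting": 80, "data import": 80}
total_hours = sum(tasks.values())  # 200

# Each task's EV weight is its share of the total estimate.
weights = {name: hours / total_hours for name, hours in tasks.items()}

# Friday reports: percent complete per task from the developers.
reports = {"login": 0.50, "reporting": 0.25, "data import": 0.0}

# Period EV is the weighted sum of the reported completion.
period_ev = sum(reports[name] * weights[name] for name in tasks)
print(f"{period_ev:.0%}")  # 20%
```

Note that the reported hours don't appear in the EV number at all - they come in later, when you compare what the EV cost you against what it was supposed to cost.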

Planned EV will be the number of developer hours available in the time period divided by the total number of project hours. When each developer reports a percent done, you multiply that by the percentage EV calculated for each task against which you are reporting. Summing each EV gives you the period EV, which you compare against your goal. You can then use the total actual hours reported as a measure of how much real effort it took to get the actual EV. This calculation comes into play as you forecast the future effort expenditure of the project. If you thought you could complete 10% EV with 80 hours, and got 8% EV with 80 hours, your SPI will give you a multiplier for estimating future period effort. This is where the 15 minutes comes in. Each Friday you must inspect your planned EV and planned costs against your actuals. You then analyze what the variances in the charts mean to you:

  • PEV way ahead of AEV? You underestimated and should add resources.
  • AEV way ahead of PEV? You overestimated and may be able to divert resources.
  • AEV = PEV but costs overrun? You can meet the deadline, but your resources are inefficient - you need to figure out what's bogging them down.
You get the idea. Manage next week's resources based on your 15 minutes of EV and cost analysis.
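That Friday check can be written down as a simple classifier (a sketch only - the 10% tolerance is my own illustrative threshold, not EV doctrine):

```python
def analyze(planned_ev, actual_ev, planned_hours, actual_hours, tol=0.10):
    """Classify a reporting period per the checklist above."""
    spi = actual_ev / planned_ev
    # Earned value converted to hours, divided by hours actually
    # spent, gives a cost index for the period.
    cpi = (actual_ev / planned_ev) * (planned_hours / actual_hours)
    if spi < 1 - tol:
        return "behind schedule - consider adding resources"
    if spi > 1 + tol:
        return "ahead of schedule - may be able to divert resources"
    if cpi < 1 - tol:
        return "on pace, but inefficient - find out what's bogging resources down"
    return "on track"

print(analyze(0.10, 0.08, 80, 80))  # behind schedule - consider adding resources
```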

Without a tool to provide EV analysis for me, I turned to Excel. The only gotchas I had in setting up the calculations were discerning the best path for tracking - either project-to-date sums or per-period values - and getting graphs to show the EV and cost trends. In the end I went with per-period values, some extra calculations to show trends, and two separate graphs for EV and Cost.

With the plan in place, I simply communicated the project milestone tasks to the team along with target dates and requested weekly reporting of progress. Hopefully this leaves the devs with schedule freedom and a sense of autonomy, while giving me a way to know which adjustments are necessary before we have a big problem.


Orcas Beta 1 and other new stuff

It seems to me that the development world is experiencing another period of rapid evolution, much as it did in '99-'01 around web application servers, XML, and all the XML spinoff technologies. Interestingly, during the last evolution many people left javascript in the dust; but it has had a massive resurgence to the forefront this time around. I also think it interesting to note that the latest tools and technologies are clearly evolutionary in that most are still new ways of utilizing the XML breakthrough. Well, I am sure someone more knowledgeable wrote about this years ago; but it's clear that it is time to dig deep into the recent mountain of information.

As one who is primarily an MS-based developer, a lot of the mountain I am facing has been coming out of Redmond lately.

  • .net 3.0
    • Windows Communication Foundation
    • Windows Workflow Foundation
    • Windows Presentation Foundation
  • Silverlight
  • C# 3.0 (lambdas, extension methods, implicitly typed variables, anonymous types, object initializers, expression trees, and LINQ)
  • AJAX Framework
  • PowerShell scripting
  • Vista/Longhorn Server (IIS7 being the big one here so far)

Along with this has been the rise of Ruby and the resurgence of javascript.

Of course, many of these things have been around or brewing for years, but the pressure has mounted as more of the MS stuff nears release.

I've had the opportunity to play with Orcas the last couple of days, and it has been fun. Silverlight programming isn't really something I think I'll do a lot of, but it provided a good way to mess around with XAML and learn about what's available. It also provided a way to get into VS 9, and the new Blend tool.

First, the XAML stuff is cool, especially web-delivered Silverlight. Updating markup and then watching the visuals automatically update was great. However, my artistic skills approach those of a first grader, so the appeal is limited. I can see a potential upside in using Silverlight for rich administrative interfaces. I hate all-Flash sites because they are usually more about style than substance (I don't need to hear sounds when I drop down menus, or wait for transition effects when I go to a new page), but if used carefully and mixed with AJAX it could be good.

VS 9 has some fairly exciting innovations too. I especially like the improvements to the javascript editor (though it could be a lot better - see Aptana).

C# 3.0 has some great stuff going on. Scott Guthrie's coverage of the LINQ features has been excellent. It does take some time to get used to the LINQ programming model (which utilizes most of the new language features), but it's well worth it. Start investing the time now.


Tuesday, June 05, 2007

Private Workspace Config Mgmt w/Team System

On a recent project our team has been using Microsoft's Team System from within Visual Studio. We are using the built-in capabilities for:

  • Requirements Definition
  • Task Assignments
  • Source Control
  • Build Server
  • Bug Tracking
  • Code Coverage
  • Unit Testing

So, we're involved in it up to our necks. We've been using pieces of the system for quite a while, but this is our first immersion. The SCM is a little funky sometimes, and really annoys my SCM-geeked teammate.

Honestly, I think it has been a pretty good experience overall. I like the reporting we get out of the box (these only come into play if you do use TS for everything), and there is no question that as an extension of the IDE, having everything available in one familiar place is a big bonus.

The main point I wanted to bring up here is an early issue we had getting our private workspaces* and build server to pull all the right files from the repository. We do not share an overall folder structure, and I don't like forcing developers to keep their files in any one place on disk. I am just too much of a libertarian for that.

Because this project included a couple of frameworks, we decided to install those on every machine, and depend upon the GAC for references. However, we wanted to share a signing key, a shared AssemblyInfo file, a test run configuration file, team build types, along with multiple assemblies not included in the installed frameworks.

For the shared files we decided to use the solution items folder of the solution. In order for the items here to be source controlled, and therefore downloaded from the repository, they had to exist beneath the root workspace folder. We decided to mimic the solution items folder structure exactly on disk. Specifically, we created solution items folders, and then created their counterparts in the root solution folder on disk.

Assemblies were placed in an Assemblies directory and other files in a Common folder (unfortunately, this is one area where the SCM can still be flaky - occasionally 'get latest version' doesn't seem to pull new solution items).

Now, in each project we referenced the assemblies in the solution items folder rather than locally. We also needed to put links to the common key and SharedAssemblyInfo.cs into each project. To do this, each csproj file was hand-edited with a new "Compile" element:

<Compile Include="..\Common\SharedAssemblyInfo.cs">
     <Link>Properties\SharedAssemblyInfo.cs</Link>
     <Visible>true</Visible>
</Compile>

The Link sub-element tells the project where to display this file relative to the project root.

We added a "None" element for the key:

<None Include="..\Common\ourkey.snk" />

Finally, for the .testrunconfig file to work, and for the team build files to be visible, they had to be included in the solution via the 'include existing item' context menu - for some reason these references did not come along with updated versions of the solution file.

With this all in place, every developer can reliably get latest and build without issue. 

*I mean this in the sense of the Configuration Management pattern given by Berczuk, which I think is probably normative for most developers - do your dev in a private workspace where you control the versions of the components you are using and the version of the code you are developing against. You control when and how your environment changes. This is basically done by downloading a complete environment from the repository to your local workstation.


Generic Mock Factory

I've been involved in VS integration work lately, and within the unit testing framework provided by the integration SDK is an excellent mock factory for any interface-based type.

If you are involved in unit testing where you need mocks, this is an excellent generic resource for any project - not just VS integration.

GenericMockFactory.cs


Saturday, March 31, 2007

Windows Communication Foundation - SOA

I have spent time this week learning about WCF and its place in jumpstarting SOA-based application development while attending the SDWest Expo in Santa Clara. Juval Löwy provided most of the presentations on WCF, along with a few by his chief architect Michele Bustamante.

One of the main points Juval made was that WCF-based development is an evolutionary shift from Component Oriented development to Service Oriented development. During an excellent history lesson on the growth of software development methodologies he made the case that services are the next major shift because they improve upon the goals of component development. WCF's improvement is primarily decoupling: transaction and concurrency management, security, reliability, versioning, and communication protocols are all separated from the business logic, and the technology is decoupled from the service provided. Because WCF provides this great level of decoupling, Juval declared with a mischievous smile that coding in the CLR is coding in the stone age. WCF has arrived, and everything you code should now be coded as a service.

While we can't take this declaration entirely seriously - Ms. Bustamante is more balanced - it is a fact that transaction support, protocol independence, security, reliability, versioning, and concurrency are provided for free by WCF. Obviously all services decouple the technology and platform from the service provided - that's why the industry is in love with services - but WCF adds so much more.

All this goodness is provided by WCF's implementation of the WS-* specifications. That really makes it so much better in my opinion, because it means that transactions, security, etc. can pass from a WCF service to any other service which also implements those standards (IBM was part of the spec group, so I am sure their products will follow suit soon). You have to take some time to think about what this means for the way you can build software - we may have to agree with Juval that a fundamental shift is underway. It isn't WCF that causes the fundamental shift; SOA causes the fundamental shift. It's the easy use of the full range of WS-* standards which makes WCF such an attractive reason to start thinking services more often than you ever have before - even if not for absolutely everything, as Juval suggests.

If you aren't familiar with WCF, the basics are very simple. The new ServiceModel namespace includes a ServiceContract attribute which is placed on an interface, making it the contract for the service. Each operation within the service interface is attributed as OperationContract, exposing those members of the interface on the service. Every class type exposed in an operation signature must be attributed with DataContract to mark it as serializable. Given these few attributes, your classes are ready and you just need a little configuration. If you remember .net Remoting, then I suggest you forget it, but this will at least be familiar to you. I won't go into it here; you can check out WCF configuration here.

If you are going to design more services, you have to take some things into account during design. I won't discuss the normal service design issues that exist for all services, but in WCF programming a couple of things got my attention. For instance, while you cannot expose an interface type in a contract operation signature, you can expose known types which will enforce an object hierarchy. This is slightly different from the pure OO interface and class design you might have used, but preserves the basic idea.

Also, when it comes to passing state data, a DTO-style design is enforced by classes attributed as DataContract. This ensures serialization capabilities by the Framework. As a design consideration it simply means you have to have classes that hold simple data elements to pass across the service boundary. This isn't a bad thing, it just may be different than you would design for internal business objects. Mix in LINQ (in the near future) within your .net classes and you'll probably have these pure data objects in your design anyhow.

One final consideration concerning services everywhere is performance. Obviously, if you are going across the wire as SOAP/HTTP, you're subject to the inefficient payload as well as the potential wire latency. This is no different from any other service call. But with every service call there is a serialize/deserialize operation for every object. I haven't tested this hit, but it will be something compared with the nothing of in-memory object passing. So, if you are configured for SOAP/HTTP, performance is already a potential concern and you'll never notice the performance of any other part of the WCF infrastructure code. If you are going binary/TCP, then the test would be worth the double-check.

Finally, if you are implementing WCF the folks at iDesign have a whole bunch of helper classes and utilities you should check out.


Tuesday, February 27, 2007

Helpful C# Class File Template Change

Something which has annoyed me several times in VS2005 is that new classes are internal by default. I almost never want an internal class, so I went looking for the option to change this. I never did find it, but a colleague has just pointed out the solution:

All new files come from various files in %InstallDir%/Common7/IDE/ItemTemplates

Find Class.zip and modify the class.cs file in this archive. Be sure to delete or modify the contents of the ItemTemplatesCache folder while you are at it.

*InstallDir is [Program Files/Microsoft Visual Studio 8] by default.
