Monday, December 19, 2005

Using MSBuild - Don't learn from default build project file

If you are used to using Ant or NAnt to build your .NET solutions, MSBuild is a fairly simple switch, though I still think NAnt is more full-featured. However, the more I learn about MSBuild, the less concerned I become about its feature set.

One thing I did find confusing was the contents of the default build file generated by the VS IDE when you add a build type to a Team Project. This file is not a good way to learn about build files. First, it is built around a Team Foundation Server-driven build. Second, it does not contain targets itself but depends entirely on the included targets file (Microsoft.TeamFoundation.Build.targets). That targets file can be valuable to learn from. I also recommend taking a peek at the MSBuild schemas (Microsoft.Build.Core.xsd and Microsoft.Build.CommonTypes.xsd).

If you build your own build file from scratch, I believe you will find the experience more analogous to building with NAnt. The trouble with the default build file is that it is full of properties and item groups whose comments indicate how values should be specified to generate a proper build, but that guidance only relates to the targets used by the included MS targets file. Great if you want a simple build, or don't care how it's done. To learn the build syntax, throw away the default file and set about creating your own. I realize this is "wasted" effort since the default compile target is just fine, but I prefer controlling everything and removing all 'MS Magic' from the project.

The MSDN docs are good enough to learn from, so I won't provide the basic info here. I will summarize a few basic thoughts:
  • Target has pretty much the same meaning as in NAnt. There are other NAnt analogies, especially among the built-in tasks, but I would rather cover those in a more detailed write-up of NAnt-to-MSBuild conversion issues.
  • ItemGroup collections can have any element content. The child element names become the item type names you use to dereference the value lists, e.g. @(Compile), much as properties are dereferenced with $(PropertyName).
  • ItemGroup children can carry nested metadata values that can be dereferenced with "dot" notation, as in %(Parent.Child).
  • PropertyGroup collections allow any element content, as with ItemGroup nodes. This gives a nice compact way of defining a property value (the element name is the property name, the content is its value) rather than specifying a value= attribute all the time.
  • The Csc task attributes are a little funny because they rename the options of the command-line compiler they wrap. Strange choice, but if you know what csc requires at the command line you can figure it out. You can also read the docs, but who wants to do that?!
  • While learning, be sure to recognize the built-in tasks, the reserved properties, and the well-known item metadata. You wouldn't want to reinvent the wheel.

Wednesday, December 14, 2005

Dead DC Removal

There is a catch-22 when you try to properly remove a domain controller that has failed completely: either it is dead, or it is so corrupted it might as well be.

The funny thing is that the tools MS recommends for this job require you to connect to the DC in question in order to demote it or clean up its entries in AD. Silly. I finally found an answer though...

If this dead controller is one of your operations masters, well - good luck. If not, then you will need the tool ldp.exe. I hadn't seen this one before, but it basically allows you to edit the domain ADSI containers. The details are here:

http://computing.fusion13.com/ActiveDirectory/Remove-A-Domain-Controller-From-Active-Directory-With-LDP.shtml

Follow this post, and the dead server is gone. Other articles indicate that when you rebuild the machine, you should use a new name due to cached values on machines throughout the domain.

Thursday, December 01, 2005

Learning Team Foundation Source Control Coming From VSS

It has been a decent little adventure getting more than one developer at a time working with Team Foundation Source Control (or Version Control; a little aside: it seems it's all about version control when dealing with the web services API, but it's called source control in the IDE UI).

Coming from VSS (or Source Unsafe, or Source Destruction; choose your favorite epithet), some things which are common in other control systems take some getting used to when first moving to "real" source control. My only other version control experience was with Perforce, and there are some parallels there. I have since also gotten into CVS while dealing with NUnit development, and there are parallels there too. If you come to TFSC from one of these other systems, the learning curve will be much shallower.

Allow me to enumerate the lessons learned so far:
1. The folder you designate as a workspace for a certain project will be totally controlled by source control, including deletes. This is great (and normal for other version control systems), but a real adjustment for lazy VSS users who sometimes like to keep the development area dirty in order to work with different versions of code at the same time (mySource.cs.old, for instance).
2. What you actually have on disk is of no consequence to the source control system. Which is to say that the notion of having the latest version, or the content of a particular file, is governed by the actions you have taken against source control through its interfaces. The source control system uses the information stored in its database, not a lookup of the files on your disk. If you happen to do something on disk outside of source control, you will soon be very confused when you go to use source control again. To get back to a usable state, choose "Get Specific Version" and then check the "Force get of file versions already in workspace" checkbox. This will "update" your disk, even if everything was up to date, and get you back in sync with what the source control system believes to be true. In short, don't do anything to the workspace outside of version control. If you want to play around with files, copy the workspace to another location.
3. Putting auto-generated folders or files in source control is unworkable. What used to be merely a problem or difficulty in VSS is totally unworkable in Team System source control. This means all /bin, /doc, and /obj folders must be left out of source control. This is actually a tenet of good configuration management anyhow, but the problem is that, by default, when you add projects to source control you will get these folders. You must consciously remove or avoid them.

A related issue is how you deal with assembly references when you are not including any of the referenced assemblies in these auto-generated folders. Your projects keep relative links to referenced assemblies in the

<HintPath>..\..\ReferenceAssemblies\Release\SomeAssembly.dll</HintPath>

element. Each developer (and the build server) will need to have this same relative path, or a GAC entry for the referenced assembly. Another possibility is to use an environment variable so each developer can have their own location, but this seems like more setup than it's worth.

4. The entire team must standardize on file-based or URL-based projects when working with web projects. Because of the relative references in web.config, and because of unit tests, all hell breaks loose if you have one developer using a URL-based web project and another using a file-system-based one. Choose one way to do it, and hold all developers to that standard.

As far as which to choose, I would say file-based is better, simply because it keeps your web projects in the same basic location as your other projects, and because it is the way Visual Studio expects to work - and we all know that if you accept some product defaults, you'll save headaches.

Thursday, November 17, 2005

x64 Support Update

Daemon Tools 4.0 delivered without the promised x64 support. Dammit.

Wednesday, November 16, 2005

Unit Testing ASP.NET 2.0 Forms Auth Site

I may have missed a simpler solution, but when trying to run a simple unit test against an ASP.NET website that had forms authentication enabled, I needed this entry in the web.config:

<location path="VSEnterpriseHelper.axd">
  <system.web>
    <authorization>
      <allow users="?" />
    </authorization>
  </system.web>
</location>

Otherwise, you end up with an error such as:

"The web site could not be configured correctly; getting ASP.NET process information failed. Requesting 'http://localhost/SimpleTestWebsite/VSEnterpriseHelper.axd' returned an error: The information returned is invalid. "

I would have figured this would be covered in the documentation, but I couldn't find it there. I also would have thought that logging in via FormsAuthentication.Authenticate("user","pass") in one of the test setup methods would have helped. Apparently, though, that call happens before you have even set up the test fixture. If I were the MS developer behind this, my face would be awfully red.
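
For reference, this is roughly the kind of setup method I had in mind; the attributes are from the VS 2005 unit testing framework and the credentials are placeholders. As noted above, it does not solve the problem, because the helper request happens first.

using Microsoft.VisualStudio.TestTools.UnitTesting;
using System.Web.Security;

[TestClass]
public class FormsAuthSiteTests
{
    [TestInitialize]
    public void LogIn()
    {
        // Validates the placeholder user against the forms authentication credentials.
        // Too late to matter here: the VSEnterpriseHelper.axd request has already been made.
        bool ok = FormsAuthentication.Authenticate("user", "pass");
        Assert.IsTrue(ok, "Could not authenticate the test user.");
    }

    [TestMethod]
    public void HomePage_Loads()
    {
        // Test body elided.
    }
}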

Friday, November 11, 2005

MS Team Build

I am not certain of all the reasons, but Team Build uses a completely new way to automate builds. Sure, this sort of thing already exists as NAnt, or more accurately as CruiseControl.NET, but I am sure MS had their reasons for redoing it.

I am a bit disappointed in both the build and unit test frameworks, because I already have continuous integration scripts built on NUnit, NAnt, and CCNet. Now I have to do it all over again. I think the primary reason for the reinvention was to be able to track builds as part of the reporting data.

Even redoing things in Team Build, I find that it is a little odd. We'll grant that it's beta material.
  • The build automation seems to revolve around a web service call and making your own executable. Everything I found on the net was for beta 2. The documentation for beta 3 suggests using tbuild in some parts of the docs, or teambuild in others. I could not locate either of these executables anywhere. This is pretty lame. I can schedule CCNet; what's up with Team Build? I built my own beta 3 executable. [EDIT: With the last release and updated docs on the web, I finally found that the executable is msbuild.exe. It is located in your framework directory. I am still a bit frustrated with Team Build, but it's getting better as the docs improve understanding. The build blog is a great resource as well.]
  • The build results page is a mishmash of data, and most of the information is actually in links to text dumps of the build process. Again, the CruiseControl implementation of a results website is far more usable.
  • The distribution of websites to the drop location doesn't seem very useful. You get the bin directory with compiled helper libraries. Hmmm. I guess I'll have to get into that more to understand the purpose.

For me the bottom line is that, out of the box, this feature seems to add very little (nothing?) to what was already available from community software. It also seems weak so far. I'll continue working with it - maybe I'll be converted.


Tuesday, October 18, 2005

Things that won't run on my x64 machine...

I have had very few programs refuse to install. Most just run as 32 bit without issue.

These programs will not install...

  1. Daemon Tools - new release (v4) that will support x64 expected soon.
  2. Virtual Server - R2 version update will support x64. Currently in beta.
  3. MSN Desktop Search - no update expected that I could find
  4. Google Desktop Search - no update expected that I could find.
  5. Printer driver for Dell Inkjet 720 - presumably all Dell inkjets?
  6. cvsnt - at least it won't run. Not sure of the actual issue. Does great in win32.

This one requires a simple workaround...

  1. x64 Flash plug-in (and all other plug-ins as far as I can tell) - no mention of an update coming that I have found. 32-bit Internet Explorer works just fine, though.


Insights on x64 v. x86

In my last post, it was certainly clear that I needed to do some lernin'. A very helpful fellow at MS pointed me to the following blog entries:

http://blogs.msdn.com/joshwil/archive/2004/10/15/243019.aspx
http://blogs.msdn.com/joshwil/archive/2005/05/06/415191.aspx
http://blogs.msdn.com/joshwil/archive/2005/04/08/406567.aspx

First, it is worthwhile to read the related articles referenced in the 10/15 post. You should also realize that during these posts, the handling of x64 targeted assemblies was in flux, and later articles contradict the earlier ones. However, the overall picture is more clear when you read them all.

The one thing I was right about in my last post was that basically these issues do arise from the program saying "I am not going to run in this environment". Whether or not they *could* run is a decision the developer has made, and is trying to enforce. In the .net world this safety mainly revolves around the use of Interop, PInvoke, and unsafe code blocks.

Version 1.1 of the framework (and, strangely, v2.0 of the CLR header; the 2.0 framework produces v2.5 headers) had no concept of targeting processor architectures - the targeting is done in the assembly manifest. That basically means all 1.1 assemblies are architecture-agnostic. I ran a few simple tests, and here is what I found:

1. The main problem I was having when I wrote my last post was that I am an idiot: I was getting a BadImageFormatException (BIFE) because I was trying to load a 2.0 assembly into a 1.1 executable. Don't try that, and don't try to debug x64 issues while it's happening.

2. The tool called corflags is a great help, mainly as a quick assembly header reader (run it with just the assembly name and it will display the target info, for both 2.0 and 1.1 assemblies).

3. When a 1.1 executable is loaded on an x64 machine it is run in the WOW64 as a 32-bit process and utilizes the 32-bit framework.

4. When a 2.0 executable is loaded, it runs according to how the configuration targets the architecture. If this 2.0 executable is loaded and references a 1.1 assembly, there is no problem on load. However, if the 1.1 assembly utilizes PInvoke, Interop, or unsafe code there could be runtime problems. For this reason, you should probably target x86 when compiling a 2.0 executable that utilizes these operations, or if you don't know what it does.

5. If you compile your application to "Any CPU" and the assemblies you use specifically target x86, you will get a BIFE at load time. You can do two things here: compile yours to target x86 and forgo the wonder of 64-bit computing, or force a header change with corflags. The second option just seemed to break things for me.
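
One trick that has helped me sort out where a mismatch will come from, short of running corflags on everything, is a quick diagnostic along these lines. This is only a sketch (the assembly path is a placeholder); it reads the referenced file's target architecture via AssemblyName and compares it to the bitness of the current process.

using System;
using System.Reflection;

class ArchCheck
{
    static void Main(string[] args)
    {
        // IntPtr is 8 bytes in a 64-bit process, 4 bytes under WOW64.
        Console.WriteLine("Current process is {0}-bit.", IntPtr.Size * 8);

        // Placeholder path; point this at any assembly you plan to load.
        string path = (args.Length > 0) ? args[0] : @"C:\libs\SomeAssembly.dll";

        // Reads the header without loading the assembly into this process.
        AssemblyName name = AssemblyName.GetAssemblyName(path);

        // MSIL = "Any CPU"; X86 = 32-bit only; Amd64/IA64 = 64-bit only.
        Console.WriteLine("{0} targets {1}.", name.Name, name.ProcessorArchitecture);

        if (IntPtr.Size == 8 && name.ProcessorArchitecture == ProcessorArchitecture.X86)
        {
            Console.WriteLine("Loading this into a 64-bit process will throw BadImageFormatException.");
        }
    }
}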

Friday, October 14, 2005

x64 v.x86

I just got a new machine recently that has a Xeon proc with x64 architecture. I figured I should be forward looking. I also didn't really know what I would be getting into with program compatibility - especially with my own programs.

1. Most programs will run just fine in WOW64. Actually, why they don't all run is a mystery to me, but I bet it's sometimes overactive processor-architecture checking: either the installer stops me and says the program just won't run on my machine, or the program stops itself the first time it runs. I have yet to see a program hit a fatal error when it actually tries to run. Since I can run programs that are easily 5 years old, I would guess almost any program would run. Obviously, if a program is dealing directly with the processor, or with something in the 32-bit WinAPI, I suppose there could be problems. But, again, I don't really *get* the whole issue.

2. Developing on the .NET Framework held a few mysteries as well. When I installed, I got the 64-bit version of the framework and the x86 version. The 64-bit one is my default framework, and unless pointed to the 32-bit version, everything runs as 64-bit. This makes app dev straightforward: assemblies are marked "agnostic" or "Any CPU" in the MSIL, and the proper framework is loaded when the program is run, even if my compiled programs are moved to a 32-bit machine.

The problem comes in when a program I compile agnostically calls on assemblies which are compiled to target the x86 framework. My program starts up as a 64-bit process and then calls on an assembly to load into that process. It is strictly x86 and fails to load with System.BadImageFormatException. So, I have to compile with an x86 configuration so my process gets loaded in WOW64 and targets the x86 framework. There seems to be more to it than this, and you can still get f'd by other people's .NET assemblies that have been compiled agnostically. I am looking into what to do about this. Probably some config file settings.

Monday, October 03, 2005

Team Foundation Beta 3

Let the fun begin! I now have the latest RCs and the beta 3 of TFS. I installed the RCs the day after they came out, and have been waiting on the beta. I downloaded it last Friday, but ran into an issue rather quickly with the SharePoint Services configuration. I was able to quickly isolate the cause of this annoying error because the docs clearly state you need to install in Server Farm mode. However, no such choice is presented. Thanks to the following post:

http://forums.microsoft.com/msdn/ShowPost.aspx?PostID=95559

...the answer is provided.

Moving beyond that however, I ran into an issue with the web service validators. Strangely, all this required was that I go to the services myself and run the registration service from the browser. Then, a 'retry' made it go.

That being solved, we get to the next issue: cannot connect to the Analysis server! Opening Management Studio, I try to connect to the server and get a more helpful error, which indicates that "under default settings SQL Server does not allow remote connections" and that it is trying named pipes. I check SQL Server Configuration Manager, and sure enough named pipes are disabled. I enable them, restart, and try again. No help. I can connect in the studio, but setup still complains. After some time installing over and over again, it looks like this one is a sticky domain security issue. I have tabled it for now and just installed Team Server on a single box.

It runs very slowly on this underpowered machine.

Friday, September 23, 2005

Messing Around with XML Serialization

In order to work through some of the issues around when to use XML in thin services (see the previous post), I have been trying to push the use of XML serialization (System.Xml.Serialization) onto classes which enforce the schema used for messages to the backend service. This XML serialization is also used by the framework for types exposed via web services. If you have XML serialization attributes on your own classes, then this automatic serialization will take place in a predictable fashion, and you can also serialize objects yourself (say, for sending a stream to a non-SOAP/non-WSDL service) and expect the same XML. Nothing new in any of this; it has been in the framework all along.

In any case, two minor gotchas I encountered which are both well documented aspects of using the attribution, but were easy for me to miss:
1) Serialized properties must be read/write, except for arrays/collections. Of course this is true; how else could the object be deserialized? Yet when most of my properties were collections, the couple that didn't seem to work were very confusing.
2) Use of the [XmlAnyElement] attribute on types exposed via your web service can break your WSDL schema due to UPA (unique particle attribution) issues. This UPA concept is explained well by Priya Lakshminarayanan on the XML team blog. The primary problem is that if you don't differentiate your known-type elements from the possible open-ended types, the parser doesn't know what to do with the types it encounters when you can have 0 (minOccurs=0) of the known type. If you use a different namespace to identify the unknown (#any) elements, then you can avoid this issue. Or, from the docs:

"You can apply multiple instances of the XmlAnyElementAttribute to a class member, but each instance must have a distinct Name property value. Or, if the same Name property is set for each instance, a distinct Namespace property value must be set for each instance."

For my purposes, names couldn't be differentiated, so it had to be the namespace. A rough sketch follows.
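
A minimal sketch of the shape I ended up with; all type, element, and namespace names below are invented for illustration, but it shows both gotchas:

using System.Collections;
using System.Xml;
using System.Xml.Serialization;

[XmlRoot("Request", Namespace = "urn:example:messages")]
public class RequestMessage
{
    private string requestId;
    private ArrayList lines = new ArrayList();

    // Gotcha 1: an ordinary property must be read/write or the serializer skips it.
    [XmlElement("RequestId")]
    public string RequestId
    {
        get { return requestId; }
        set { requestId = value; }
    }

    // Collections are the exception: a read-only collection property still serializes,
    // because the serializer adds items to the existing instance.
    [XmlElement("Line", Type = typeof(LineItem))]
    public ArrayList Lines
    {
        get { return lines; }
    }

    // Gotcha 2: the open-ended part of the message. The known elements above are
    // qualified with the message namespace, so the catch-all stays distinguishable
    // and the generated schema avoids the UPA ambiguity.
    [XmlAnyElement]
    public XmlElement[] Extensions;
}

public class LineItem
{
    [XmlAttribute("sku")]
    public string Sku;

    [XmlAttribute("quantity")]
    public int Quantity;
}

new XmlSerializer(typeof(RequestMessage)) then produces the same XML whether you serialize to a stream yourself or expose the type through a web method.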

Wednesday, September 21, 2005

Web Services Interfaces On Thin Layers -

We are all familiar with the SOA buzz, and have probably already built many web services. I think Visual Studio has gone a long way toward making web services extremely easy to code, hiding most of the ugly XML-oriented details from you. It's good to know how and why the XML functions, but actually being burdened with writing the angle brackets is unnecessary. Anyhow, one of the issues that comes up is how to write your methods such that the contract for the service is SOA-compliant and also instructive to consumers. Web methods and XML serialization in .NET try to answer this by allowing the contract writer to use concrete types to define the interface, and then auto-magically converting this information into schema-based WSDL messages and port types.

If you design a good interface in terms of the SOA concerns (expose business processes only, make the interface immutable and decoupled from implementation, etc.), you can use familiar objects underneath that will enhance the explicit understanding of your service contract. .NET takes care of making sure the web service exposure of your code is WS-I Basic Profile compliant.

Now let me get to the issue with thin services. By that I mean services whose boundaries sit just beneath the surface, such that they do minimal handling of the data before pushing it on to another layer, be it another service or the database. I find this leaves a sticky design question: how tightly should types be defined when we want the interface generic enough to remain unchanged over time, and we don't want to code a lot of serialization objects just to serialize and deserialize for very brief use?

The magical open interface is one that accepts a single XML parameter. XML can be used to define anything, so I tell the consumer to send me such-and-such XML. I take that XML directly, modify it slightly, and push it back to the next layer that knows how to use it. From the other side, I receive some XML, make a change or two, and push it out directly to the consumer. But as a contract this interface sucks. See the anti-pattern 'loosey goosey'.

OK, so you go and design some serializable classes, but then you are building thin classes that do little more than express the XSD as concrete language types. Feels like monkey-work to me. But, given the monkey-work, you have classes that can be used by developers of extensions to the service, and you are further away from the angle brackets. Consumers also have a nice idea of what you are after. I'm not sure what the 'right' way to do it is. Still considering. A rough sketch of the two extremes is below.
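
To make the trade-off concrete, here is a rough ASMX-style sketch of the two extremes; the service and message type names are invented for illustration:

using System.Web.Services;

public class OrderService : WebService
{
    // The 'loosey goosey' contract: accepts anything, and the WSDL tells the consumer nothing.
    [WebMethod]
    public string SubmitRaw(string xmlMessage)
    {
        // Tweak the XML slightly and push it to the next layer...
        return xmlMessage;
    }

    // The typed contract: a thin message class that mirrors the XSD, so the WSDL
    // actually documents what the service expects.
    [WebMethod]
    public SubmitResult Submit(OrderRequest request)
    {
        SubmitResult result = new SubmitResult();
        result.Accepted = (request != null && request.CustomerId != null);
        return result;
    }
}

// Little more than the schema expressed as concrete types - the 'monkey-work' classes.
public class OrderRequest
{
    public string CustomerId;
    public string Sku;
    public int Quantity;
}

public class SubmitResult
{
    public bool Accepted;
}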

Tuesday, September 13, 2005

Working through the TFS system a bit more, I am trying to clean up several minor issues.

1. I had errors on the ReportServer services site - compilation issue for reportservices.asmx. This is handled by the following blog. See the section on changing the assembly section of web.config.

2. The team site web parts regularly throw an error concerning ctl00oReportServer. This has been handled most effectively by Mike Attili.

3. Build Server wouldn't install. The install log only pointed to the lack of SqlServer based services. Since I am using a two-tier model this didn't make any sense. The answer turned out to be a Firewall/ICS dependency issue for 2K3 SP1 servers.

4. I am still trying to get an answer on the bug rate report...

An error has occurred during report processing. (rsProcessingAborted)
Query execution failed for data set 'AreaPathPath1'. (rsErrorExecutingCommand)
The default members for the dimension 'WorkItem_FactProject' do not exist with each other.

5. I am also looking for a reason all my reports are empty. I have some hope that when the cubes refresh overnight, I might see something new now that I fixed the web service.
A head-slapper for me. If you are familiar with Cockburn's 'Agile Software Development', then you know he calls the methodology he espouses "Crystal". You probably also know that he classifies projects within Crystal by color and hardness (size and criticality). So 'Crystal Clear' is the methodology pertaining to implementing Crystal in teams of 1-8 people. Aha, thus the name of his book.

Sunday, August 28, 2005

I have finally started toying with asp.net 2.0. I figured I better get with it before it was RTM. In any case I started with converting my homebrew website project.

The conversion utility was actually quite good, and made a number of changes, including folder structure, a few access modifiers, and, most importantly, removing the notion of "code behind" and replacing it with partial classes for pages and controls. It's not like it compiled right away, but nobody expected that. The first thing to note about 2.0 is that application code (specific business logic classes, special functional classes, interface definitions, etc.) is now totally separate from page and control code. In fact, that's one of the new ASP.NET system folders - "App_Code". The classes defined in this code are compiled to a separate assembly at run time. The pages are compiled to their own assembly, and neither assembly is placed in the /bin directory any longer. You will find the assemblies in the temporary ASP.NET folder - as you would have in 1.1. For my homebrew application I see 3 assemblies: "App_Web_[random].dll", "App_Code.[random].dll", and "App_global.asax.[random].dll".

To take a look, I quickly open them in Reflector and verify the contents...

Yep, that's my app_code section of the site. Then I open the App_Web...

OK, not quite what I was expecting. Those are my controls (ascx), but I don't see my pages anywhere. Then I get what I expect from the global...

As you can see, this presents a new way of thinking about things like state managers, or custom configuration file readers. If you have this in application code, that code can no longer access instances of pages, nor can you late bind to these assemblies as they are not published by name to /bin.

I also use UIP on the site, which requires late binding, so that was one thing that was broken. The other was that I use a PageBase object for each page to create a master-template-style arrangement for common navigation and footer - object instances of controls. The idea of a master template is built into ASP.NET 2.0, so I'll have to look at changing over. In any event, the pages are also handled by a state manager. This is a business object which interacts with instances of pages and controls. Both of these concepts are broken in ASP.NET 2.0 because business logic classes (App_Code) have no way of knowing about page classes. This problem is easily handled through a bit of indirection and late control binding, as discussed here:

http://msdn.microsoft.com/asp.net/default.aspx?pull=/library/en-us/dnaspp/html/upgradingaspnet.asp

(look at the section on conversion issues)

This doesn't really put my code in a great place for 2.0, but it does bring it to a point where it will compile. Because of the late binding, I actually end up having to move all of my App_Code out to a separate assembly so I can reference it. I asked Scott Guthrie about this, and ended up having an interesting discussion (mostly with Simon Calvert, actually) regarding the usefulness of the App_Code section.

Simon gave the following reasons for the app_code architecture:

"App_code gives several benefits in developing applications:

it’s a known folder into which you can easily drop code files and not have to undergo any manual build step. This improves productivity in certain situations. App_code also has given semantics for arbitrary file extensions, not just regular code files. You can drop XSD or WSDL files and again, everything is done for you in the runtime. That is, proxies are auto-generated, etc. Then in the designer tool, you get benefits in creating a class file and objects within the folder and have immediate intellisense and statement completion against your custom objects. This latter point is important in terms of productivity. If the app_code were a separate class library, then you would need to have separate projects, with references, and undergo a build process in order to pull the updated class library to the web project.

App_code also allows sub-separation for specific scenarios like code languages, so you can easily separate out and manage the vb/c# in a shared code project for example

When using the app_code folder, the compilation process means that the assembly is compiled and that your pages all get references to that assembly automatically. It therefore is a separate assembly

Consuming types from the app_code folder is as simple as new-ing the type in your page codebehind for example and in cases where you might need the assembly name, we have strived to remove the requirement for the assembly name. For example, if you create a custom control in app_code then you can register it without referring to the assembly name. In cases where the name is required, you can use the moniker, “app_code”"

Well, I don't know enough yet about working in asp.net 2.0, so I am sure we'll find Simon's advice helpful as we move along. For my project, I still need that late binding so I have a separate assembly. They did mention potentially getting at the app_code via the BuildManager.GetType() method, but didn't expand on that, and I haven't looked into it.
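
For what it's worth, the call they were pointing at looks like it would be used roughly as below; the type name is made up, and I haven't actually tried this yet.

using System;
using System.Web.Compilation;

public static class StateManagerFactory
{
    // Resolves a type that lives in App_Code without referencing the
    // auto-generated assembly by name, then late-binds an instance of it.
    public static object CreateStateManager()
    {
        // Second argument: throw if the type cannot be found.
        Type managerType = BuildManager.GetType("MyApp.StateManager", true);
        return Activator.CreateInstance(managerType);
    }
}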

My final issue in getting the project actually working was to get SQL Server reconnected, plus a minor modification to an XmlValidatingReader (deprecated in 2.0) in the UIP, converting it to an XmlReader created with the proper settings.
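
For the record, the replacement looks roughly like this; the schema file name is a placeholder for whatever the UIP configuration actually validates against.

using System.Xml;

public class UipConfigReader
{
    public static XmlReader CreateValidatingReader(string xmlPath)
    {
        // XmlReaderSettings plus XmlReader.Create replaces the 1.1-era XmlValidatingReader.
        XmlReaderSettings settings = new XmlReaderSettings();
        settings.ValidationType = ValidationType.Schema;

        // Placeholder schema; the first argument is the target namespace (null = use the schema's own).
        settings.Schemas.Add(null, "UIPConfigSchema.xsd");

        return XmlReader.Create(xmlPath, settings);
    }
}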

I purchased the book Crystal Clear by Alistair Cockburn (apparently pronounced "Koh-burn"). As I read it I will try to post highlights. Seems like it has some promise.

Wednesday, August 17, 2005

I finally bottled that smoked porter homebrew last night. That's going to be good.

As I bottled that beer, I was thinking through the Agile methodology information I have read recently. While I have been familiar with Agile for some time, I haven't really been in a situation where I needed to utilize it, or implement it in depth. However, that time has come with some changes at a client site. They want more maturity in their development, fewer defects, more predictable timelines - you've heard the story 1000 times.

First question: why is it so difficult to get development done right when we all know the answers to better development? While I think it has something to do with developer motivation, I think it is primarily a management issue. In organizations where management is unwilling to trust development, and to work closely with them on a "methodology" that seems like just another "latest thing", a framework like Agile cannot take hold. Old-fashioned results orientation around artifacts and PM tasks that development doesn't really want to invest in is still demanded by management. Furthermore, there is no tolerance for the learning-curve time needed to implement the changes. Half-hearted or non-existent management investment leads to a failure of implementation in the development department. This leads management to declare victory on it being a useless methodology change, and then to cry again about results.

So, I guess my first answer to my first question is that "WE" all know how development should be done, but not everyone in the organization agrees with us.

The second answer might be that we don't actually know how to do it, or don't agree on what's best. In my experience, the ability to evaluate a potential implementation is limited by understanding of the scope of the development effort, which pieces of the latest methodologies will be used, and to what depth. Are we trying to be CMMI level 5, level 1, or nowhere near it? How much can we expect our PMs and business owners to engage in scenario definitions? What will the resulting documents look like? So, even as developers we tend to mill around trying to answer management's questions and needs, along with figuring out how it will work for us. Often the effort ends here, or again becomes half-hearted.

Basically, these issues make me want to find a way to ease into this methodology change. I can see the Agile paradigm being broken into two methodology sections: management methodologies (both team and project) and development methodologies. Development methodologies might include things like TDD, Scrum, and pairing. Management methodologies include managing special-cause variations and the related tracking, velocity tracking, spiral workplans, etc. In discussing this with a friend from the client in question, he had a good idea on how to work with this as it relates to management: treat the output to management as an interface. Give them the same old stuff, based on information we gather however we want at the development team level. This is sound, I think, and even matches some of what I was reading from David Anderson about meeting CMMI requirements with MSF Agile.

In any event, the question then becomes: "What are the essentials to getting started with an Agile development team?" Can we do Scrum without stakeholder buy-in? No. Can we do TDD without management buy-in? Maybe. Can we track velocity without changing the project tracking software management just purchased? Maybe. The answer to the 'maybe' questions becomes "What does development control as a department?" and "How high up in development can we get buy-in?" I think in this situation development controls all PM activities, and we can get buy-in all the way up. That means we can define tasks in terms of scenarios, and we can estimate timelines for management while not tracking our own progress that way. Maybe along the way we will even be able to give them better information, and so win them over. But we'll hold that goal patiently. For now, we also control development methodology, so we can begin with TDD and pairing. A few buy-in issues do exist here, however: it takes time to build test harnesses, and changes must be made to the physical space to facilitate pairing - perhaps even common development machines purchased.

Well, we are going to try. I can't say the team is optimistic yet about their chances. Management has crushed the spirit a bit. Hopefully, I can keep you posted on the progress, and not the failure of this initiative.

Friday, July 15, 2005

Just some links, but interesting reads. They talk about performance, implementation, etc.

An Extensive Examination of Data Structures Using C# 2.0
Part 1: An Introduction to Data Structures
http://msdn.microsoft.com/library/en-us/dnvs05/html/datastructures20_1.asp

Part 2: The Queue, Stack, and Hashtable
http://msdn.microsoft.com/library/en-us/dnvs05/html/datastructures20_2.asp

Part 3: Binary Trees and BSTs
http://msdn.microsoft.com/library/en-us/dnvs05/html/datastructures20_3.asp

Part 4: Building a Better Binary Search Tree
http://msdn.microsoft.com/library/en-us/dnvs05/html/datastructures20_4.asp

Part 5: From Trees to Graphs
http://msdn.microsoft.com/library/en-us/dnvs05/html/datastructures20_5.asp

Part 6: Efficiently Representing Sets
http://msdn.microsoft.com/library/en-us/dnvs05/html/datastructures20_6.asp

Thursday, July 07, 2005

I promised to get back to the question of security in Team Foundation Server, so here we go...

My main problem was that the only way everything was working out of the box was with the TFSSetup user as admin on both boxes. I wanted to scale this back and only give appropriate permissions to the roles.

The best way to start this was to remove this user from the admin roles after everything was working, and then see what broke.

The first issue was the identity of the application pool "VSTF AppPool". Any app pool identity must have access to the Temporary ASP.NET Files folder under the framework folder in the system root, and to the Windows temp folder under the system root. I added the identity to the STS_WPG group and gave that group modify permissions on these folders. That made it possible for the app pool to run.

I also had this identity running the TFSServerScheduler service. This of course requires the 'log on as a service' user right in the local policy.

Finally, on the application box, I put this identity in the administrator role for the SharePoint Services site on the box, which will control the Team Server content. Maybe this could be pared down as well, but my main goal for now was to take away box-level permissions.

On the data box, the same domain user referenced on the application box runs the app pool for the reporting services. Again, this user needs access to the same temp folders referenced above. <mini-rant>This is part of why I don't like having to install IIS on the data box - more config that I already did somewhere else. If I bother setting up an IIS box, I shouldn't have to do that everywhere. So, I do hope this limitation on reporting is based on a beta setup, not some full requirement of the solution.</mini-rant>

This done, it was immediately obvious that database permissions would now be required for the user running the app pool. I added the identity to the local 'TFSReportUsers' group, and then gave this group database access in the 'public' role on "BisDB", "BisDWDB", "CurrituckDB", "VSTEAMSCC", "VSTEAMSCCAdmin", and "VSTEAMTeamBuild".

I gave dbo role access to all of the above databases to another user account which was used as the identity of the "SQL Browser", "Report Server", and "Analysis Server" services.

Finally, I created a domain group for all the users utilizing the team system. I then made this group a member of the "TFSReportUsers" group on the data server. This group has report-user permissions on the Reporting Services website. I also made this group an admin on the Team Services SharePoint site.

I think that did it for basic functionality. As far as Team Foundation Server itself, you have to give permissions to users and groups there as well. This I did by giving my 'TFSDevelopers' domain group broad permissions in the TF roles.

Thursday, June 30, 2005

In standard operating mode, I was pulled from both Team Server and BizTalk Server installation into SharePoint, based on some client needs. The work involved making brand customizations to their SharePoint Portal site.

I needed to make color and style changes to the portal site. I am not an artist, so my job was really to figure it out so their graphics person can make changes. I repeat that I am not an artist, and I am sure my color choices will be less than perfect.
In any event, SharePoint has a set of templates for the various types of sites that it is capable of supporting.

Many files are repeated throughout the sets of templates. Depending on the type of site created, a different base set of template files may be used. All colors, many graphics, and some behaviors in SharePoint are driven by Cascading Style Sheets. For the portal as a whole, you are permitted to define a style sheet which will override styles. I have defined such a style sheet called "custom.css" at

file:////MySharepointServer/c$/Program%20Files/Common%20Files/Microsoft%20Shared/web%20server%20extensions/60/TEMPLATE/LAYOUTS/1033/STYLES

The custom style sheet now references some custom images as well. These images are all contained in the folder:

file:////MySharepointServer/c$/Program%20Files/Common%20Files/Microsoft%20Shared/web%20server%20extensions/60/TEMPLATE/IMAGES/Custom

This sheet affects the portal directly due to a configuration setting found in

Change Portal Site Properties and SharePoint Site Creation Settings under the Site Settings area for the portal.

For sites in the portal, the configuration method provided by the site administration tools is to apply a theme. Why you can't apply a set of style sheets, I don't know. A theme certainly includes style sheets, but oh well. You can create custom themes by adding a folder to

file:////MySharepointServer/c$/Program%20Files/Common%20Files/Microsoft%20Shared/web%20server%20extensions/60/TEMPLATE/THEMES

and then adding the theme information to

file:////MySharepointServer/c$/Program%20Files/Common%20Files/Microsoft%20Shared/web%20server%20extensions/60/TEMPLATE/LAYOUTS/1033/SPTHEMES.XML

This could be a good way to go if your artist wants to take the time, but it didn't utilize the work I had already done on custom.css, so I went for a more stylesheet-based approach. I placed my stylesheet references into the base template files for document-workspace-related sites at

file:////MySharepointServer/c$/Program%20Files/Common%20Files/Microsoft%20Shared/web%20server%20extensions/60/TEMPLATE/1033/STS

and its subfolders (esp. DWS and Lists).

Any template which has been customized via frontpage, and any additional pages added are not stored on disk, but rather in the database. In order to edit these pages, you have to go through a web folder (add as network place in network neighborhood folder). This methodology can also be useful for finding base templates because the web folder will show you url style locations you can match against where you are browsing. Looking for base templates on disk is difficult if you aren't familiar with the scheme and naming. Please note that making changes to base templates is liable to cause headaches on an upgrade or patch because these files will not be respected and may be overwritten. In order to change the file, copy it out locally, change it, then copy it back to the web folder.

For the site templates I also had to create a subordinate style sheet, which I called "customSite.css". This was required because some changes conflicted with the custom.css on the main site. The CSS inheritance model has ows.css and then sps.css going first on the portal site, then your customizations in custom.css (or whatever you called yours). On the sites, you only have ows.css. If I put all the customizations required for sites directly into custom.css, it behaved badly in the portal. So on sites I added another sheet that inherits below custom.css and thus overrides certain portal-only customizations. The inline stylesheet references are:

<link REL="stylesheet" type="text/css" href="/_layouts/1033/styles/custom.css">
<link REL="stylesheet" type="text/css" href="/_layouts/1033/styles/customSite.css">

The accepted practice in the base templates is actually to make the localization code a dynamic number as such:

<%=System.Threading.Thread.CurrentThread.CurrentUICulture.LCID%>

This works in the templates, but will not work in the customized files retrieved from the database.


The short of it is:

  • If you want to change the images: change them in the custom images folder.
  • If you want to change which image is used, or any color: change the CSS files.
  • If you run into a file to which you would like to add the custom styles: add the custom stylesheet references to the base template.

There are some excellent resources at the following sites:

http://blog.hishambaz.com/archive/2005/01/29/196.aspx

http://weblogs.asp.net/erobillard/archive/2004/09/24/234060.aspx

http://www.sharepointcustomization.com/resources/whitepapers_d.htm

http://office.microsoft.com/en-us/assistance/HA011608361033.aspx


Wednesday, June 22, 2005

OK, so I haven't gotten any further on BizTalk walkthroughs now that it is installed and configured. Hopefully I will have some time soon. However, in between client projects I have been noodling with Visual Studio Team System (Team Foundation Server) installation and configuration.

First, Visual Studio improvements (and attendant .net framework changes) are awesome. The productivity enhancements are first rate, and really demonstrate serious developer input. As I get into more of that, hopefully I can spend time talking about it here.

On with Team System. I did a 2-server install (data and application) in a Windows 2003 domain. The first annoyance is that Reporting Services seems to have to be installed directly on the box hosting SQL Server 2005. This is totally unnecessary from a configuration perspective, so hopefully it's a beta thing. It seems that the MS methodology is going toward the application-centric database box that also hosts a web server for web services. I suppose this makes some sense, but I can't get past my ingrained dislike for the mixing of these tiers. Extra config, extra security issues, extra blending of roles that confuses configuration. In any case, after installing IIS and SharePoint Services on my SQL Server I installed the data-tier package. Easy enough. No problems.

I then installed the application package on my web server, and pointed it to the reporting server. Again, pretty smooth. I installed everything with the TFSInstall user, and made that user an admin on both boxes. This user is also set up to run the services and the app pools, and is the login for the reporting data sources. I will attempt to wean off this as configuration expertise increases. After the install completed, the reporting service seemed to work and the web services on both boxes functioned, so I removed the TFSInstall user from the admin role. To do this, I needed to give the user write access to %SystemRoot%/Temp, and to the Temporary ASP.NET Files folder in the framework folder, so the app pools would function. I also gave the user permissions in the proper roles on the database and the database reporting service. I would be defeated in what seemed like an early victory very soon.

So, now the fun of configuring Team System from VS 2005 on my client machine. Opening the Team Explorer, I found and connected to my Team Server. Voila, so far so good. Then I selected "Create Project". I answered the wizard questions, and hit failure pretty quickly: unable to download the agile template. After some digging I found this was probably permissions-related, so I gave TFSInstall its admin role again. Of course, the project could now be added without incident.

More later...

Monday, June 20, 2005

Just sent my most recent homebrew - a simple smoked porter - to secondary fermentation. The gravity reading was .020, so I'll give it another few days and check again. It tasted pretty good, but this one will need a few months in the bottle to be worthwhile.

Thursday, June 16, 2005

Installing BizTalk Server 2004

I recently embarked on the adventure of installing BizTalk in a distributed environment. I setup the application on 3 servers - all running Windows Server 2003 SP1:

Stand-alone SSO Server
Engine/Rules Server
Database Server (all databases)

The servers all live in a single AD domain. Users live in the parent domain (currently I don't have any users, as I just installed).

Here are a few things I ran into...

Setup actually went fine, but configuration of the engine server was a bit difficult due to MSDTC setup issues and a few BizTalk group permissions issues related to SSO. These are the types of issues you can solve in a few minutes if you know where to look, but they took me several hours to discover and resolve.

First, DTC - I believe the documentation with BizTalk is lacking in readability, but does mention the basics here.

After this, issues tend to be permissions related. I needed to make the registry hack to turn off DTC security:

http://support.microsoft.com/default.aspx?scid=kb;en-us;839187

For SP1, the DTC configuration editor has been enhanced, and you may be interested in that information, though I didn't have to make any changes here...

http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cossdk/html/2627a956-60b3-4d26-bc04-e0676ec97786.asp

Finally, for testing DTC I used DTCTest and DTCPing

http://download.microsoft.com/download/e/a/2/ea20a97f-672d-4826-8e52-1e83e7d9ddfb/dtcping.exe

http://download.microsoft.com/download/b/8/8/b8841bfc-8bd3-4fea-a5f5-06e1f162bd9a/dtctest.exe

OK, with DTC working, the configuration began to fail on a permissions issue which, once I looked into the application event log (never forget about your event logs), was quite clear. The setup user was not authorized in SSO. This is clearly stated in the docs; I just missed it. The user performing the setup has to be in the SSO Administrators group.

To be sure, I was a bit unclear from the docs about all the groups, and whether they had to be set up in the domain or locally. I created the 2 admin groups (BizTalk and SSO) both in the domain and on each of the 3 servers. I pointed the local groups to the domain groups, and added my parent-domain account locally on the installation server only. During configuration all groups were pointed to domain groups, and the other groups were created only at the domain level.

I created 3 users in the domain: an SSO service account, a BizTalk general services account, and a BizTalk host account. The BizTalk services account was in all groups, the SSO account in the SSO admins group, and the host account in the 2 host groups. During configuration the 2 BizTalk domain accounts were specified for the appropriate services.

Finally, I did make a couple of changes to the SSO server, but I don't think they actually made anything work. However, I think the documentation is lacking here. The SSO administration tools are in the program files/common files/enterprise single sign-on folder. I used

  • ssoconfig -backupsecret to create a master key backup
  • ssomanage -serverall to set the SSO server for all users (this seemed helpful, but not sure)
  • various ssomanage listings to see the state of SSO installation.

That got me through installation and configuration. Now, I am off to try and follow along with the tutorial.

Hello, and welcome to my first blog entry.

I am interested in blogging primarily as a simple way to record the information I come across in daily life, both at work and at home. I am happy to share the info, and I'm also hoping that recording it will be useful for me as well.

I have a website at http://jeffgabriel.com (which is currently dedicated to homebrew) where I could have installed blog software I suppose, but figured this would do for now.