Tuesday, December 14, 2010

Performance improvements using AppFabric's Auto-Start

Ron Jacobs recently put up a post on how to leverage AppFabric's Auto-Start feature.

The post is here: http://blogs.msdn.com/b/rjacobs/archive/2010/12/09/wcf-and-appfabric-autostart.aspx

Long story short, you need to use a custom service host and call the code that performs the caching functionality etc. within the ServiceHostFactoryBase.CreateServiceHost method (the factory method which creates the service host).

This implies that Auto-Start creates the ServiceHost before the first call to the WCF service.
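To make that concrete, here is a minimal sketch of the idea (the factory class and warm-up method are hypothetical; ServiceHostFactory derives from ServiceHostFactoryBase):

using System;
using System.ServiceModel;
using System.ServiceModel.Activation;

// Hypothetical factory: with Auto-Start enabled, IIS/AppFabric activates the
// service (and therefore runs CreateServiceHost) before the first request.
public class WarmUpServiceHostFactory : ServiceHostFactory
{
    protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
    {
        PrimeCache(); // expensive one-off work happens at warm-up, not on the first call
        return base.CreateServiceHost(serviceType, baseAddresses);
    }

    private static void PrimeCache()
    {
        // placeholder for whatever caching/initialisation the service needs
    }
}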

Thursday, November 11, 2010

"Unable to copy the file..." error when dynamically changing item names in a Visual Studio project template

I created a Visual Studio project template, including a wizard, using the instructions here: http://msdn.microsoft.com/en-us/library/ms185301.aspx. As part of the template I wanted to parameterise the names of items within the template - I found instructions to do so here: http://msdn.microsoft.com/en-us/magazine/cc188697.aspx#S6 (well, not instructions exactly, but the manual updates required to the .vstemplate file - specifically for Class1.cs).

After making this update (where $fileinputname$ is now the parameter which determines the file name), the following error was thrown: "Unable to copy the file Class1.cs from the project template to the project. Can't find the file \Class1.cs".

After some quick Googling, I found the solution in the following thread: http://forums.asp.net/t/941773.aspx. It would have been beneficial if I had found this article first: http://msdn.microsoft.com/en-us/library/ys81cc94.aspx (as mentioned in the forum) - it now contains the steps required to rename files with parameters (i.e. update the Compile Include to use the parameter name, rather than the original file name before it is renamed). The MSDN magazine article fails to mention this.
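For illustration, the fix looks roughly like this (a sketch using the file and parameter names from the example above). In the template's project file, the Compile item must reference the parameterised name:

<ItemGroup>
  <!-- the parameter name, not the original Class1.cs -->
  <Compile Include="$fileinputname$.cs" />
</ItemGroup>

while the .vstemplate still maps the original file to the renamed target:

<ProjectItem ReplaceParameters="true" TargetFileName="$fileinputname$.cs">Class1.cs</ProjectItem>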

Monday, October 18, 2010

Performance and load testing WCF services with JMeter

JMeter is great for performance and load testing WCF services. It's a free Java app available for download here - http://jakarta.apache.org/site/downloads/downloads_jmeter.cgi.

JMeter has numerous components available for creating a test plan, such as:
  • logic controllers, which let you define the conditions under which requests are sent (e.g. only every 2nd thread makes a request using a given web request)
  • timers, which define the stand-down period before a thread will make another request
  • assertions, which parse the response to assert that the request was successful
  • listeners, which provide various ways to display the response data received by JMeter.
An example of how to create a web service test plan using JMeter is available here - http://jakarta.apache.org/jmeter/usermanual/build-ws-test-plan.html

EDIT: I've created a new post on how to set it up with a CSV data set.

Wednesday, October 13, 2010

Troubleshooting AppFabric

Just a quick tip on where to start when troubleshooting AppFabric. The error messages displayed by AppFabric aren't very explicit - however, a lot of info is logged in the Application Server-Applications Admin event log. This is available in the Event Viewer under Applications and Services Logs --> Microsoft --> Windows --> Application Server-Applications.

Make sure the log is enabled by right-clicking on the Admin node and selecting Enable Log.

No monitoring data displayed in the AppFabric dashboard

One thing to keep in mind when viewing monitoring events in AppFabric is that AppFabric authenticates to the underlying monitoring SQL Server database using the account of the user viewing the logs, rather than the account the website's AppPool runs under (the service account).
So each user that needs to use AppFabric must have the appropriate rights to the Monitoring database in order to view the logged events (i.e. be a member of a group which is a member of the Monitoring database's ASMonitoringDbReader database role).

This had me stumped for a while, as I had confirmed that the events were being processed by the AppFabric ETW service - they were appearing in the staging table (ASStagingTable), and the SQL job was moving them into the events table (ASWcfEventsTable).

However, they weren't being displayed when I viewed them in AppFabric. Unfortunately AppFabric doesn't give much info as to why events are not displayed, apart from "The specified database is not a valid Monitoring database or is not available.", "Object reference not set to an instance of an object." and "No connection string". Nothing about a login failing - and the connection string hadn't changed, and other user accounts could view the events.

Fortunately I checked the Services pane in AppFabric, where you can select the Monitoring Statistics tab, and when I selected it I received the following error:
The specified database is not a valid Monitoring database or is not available.
Cannot open database "ApplicationServerMonitoring" requested by the login. The login failed.
Login failed for user '[domain\user]'

Wednesday, September 29, 2010

MSDeploy with Declare and Set parameter files

One of the least documented parts of MSDeploy is the declare and set parameter XML files, which you can use to encapsulate parameter declaration and setting rather than having them clog up the command line call to MSDeploy.

So firstly, here is how to include them when packaging and deploying:

msdeploy.exe -verb:sync -source:apphostconfig="default web site/application" -dest:archivedir="C:\packages\application" -declareParamFile="C:\source\application\deployment\declareParamsFile.xml"

msdeploy.exe -verb:sync -dest:apphostconfig="default web site/application" -source:archivedir="C:\packages\application" -setParamFile="C:\source\application\deployment\setParamsFile.xml"

You can verify that the parameters were generated correctly by examining the parameters file in the package folder after the package is created.

The format of the declareParamFile is exactly the same as the parameter file generated in the package when you specify parameters manually on the command line - e.g. below is the content of the declareParamsFile.xml file.

<parameters>
  <parameter name="enabledProtocols" defaultValue="http,net.pipe">
    <parameterEntry kind="DeploymentObjectAttribute" scope="application" match="application/@enabledProtocols" />
  </parameter>
  <parameter name="applicationPool" defaultValue="synctest">
    <parameterEntry kind="DeploymentObjectAttribute" scope="application" match="application/@applicationPool" />
  </parameter>
  <parameter name="providerServiceAddress">
    <parameterEntry kind="XmlFile" scope="web.config" match="//configuration/system.serviceModel/client/endpoint[@name='NetNamedPipeBinding_ProviderCustomerDetails']/@address" />
  </parameter>
  <parameter name="physicalPathLocation" description="Physical path where files for this Web application will be deployed." defaultValue="E:\Install\Services\ExampleService\" tags="PhysicalPath">
    <parameterEntry kind="DestinationVirtualDirectory" scope="Default\ Web\ Site/iag\.application\.services\.exampleservice/" match="" />
  </parameter>
</parameters>


Parameters are defined using various kind types. The most common is DeploymentObjectAttribute, which allows you to configure values within the archive.xml file, using an XPath expression to determine where the update should be made. The archive.xml file is generated when the package is created and defines configuration for the deployment - e.g. if a website was packaged using the AppHostConfig provider, it will contain the setting for which AppPool to use when the site is deployed. Create a deployment package, then open archive.xml to better understand what this means. For a list of the available parameter kinds see here: http://technet.microsoft.com/en-us/library/dd569084(WS.10).aspx.

Once the deployment package is created, the parameter file generated within the package will contain the parameters specified in the declareParamFile.

Below is an example of the setParamFile and the format that should be used. Note that the name of each setParameter element matches its corresponding declaration in the declareParamFile example above.

<parameters>
  <setParameter name="enabledProtocols" value="http, net.pipe"/>
  <setParameter name="applicationPool" value="ServiceNameAppPool"/>
  <setParameter name="providerServiceAddress" value="net.pipe://server/servicename/endpoint.svc"/>
  <setParameter name="physicalPathLocation" value="C:\Services\ServiceName\"/>
</parameters>


When performing the deployment to the destination web server, include the setParamFile via the -setParamFile switch on the command line - e.g. -setParamFile="C:\source\application\deployment\setParamsFile.xml"

Probably my favourite parameter kind is XmlFile (used by providerServiceAddress above), which allows you to specify an XML file (typically the web.config when deploying web applications) to be updated when the deployment is executed. You specify an XPath expression to indicate where you want to update the file. This is great when moving through different environments, as the web.config usually contains environment-specific settings.

Wednesday, September 22, 2010

Rhino Mocks and constraints on Object Parameters

The Is.Matching Rhino Mocks constraint is a really nice way to set up expectation constraints on object parameters.

Is.Matching expects a predicate delegate, which means the constraint can be very flexible - which is needed, since expectations on an object can be complex due to several properties potentially requiring certain values.

Below is an example of setting up the expectation using an anonymous delegate. In this test we expect the Query object passed in the call to the GetCustomer method to have an Id of 10 and a BusinessGroup of "Finance". The declarations of request and response have been omitted to keep the code snippet concise.

MockRepository repo = new MockRepository();
IProvider mockProvider = repo.DynamicMock<IProvider>();
mockProvider.Expect(x => x.GetCustomer(request))
    .Constraints(Is.Matching<Query>(delegate(Query qry) { return qry.Id == 10 && qry.BusinessGroup == "Finance"; }))
    .Return(response);


As an aside, using the DynamicMock factory method rather than StrictMock means a call to mockProvider in the code under test with incorrect parameter expectations won't throw an exception, whereas StrictMock will. However, with DynamicMock and the incorrect parameter passed in, mockProvider.GetCustomer will return null - so unless your code under test checks for a null return, an exception will be thrown anyway. With this in mind, I personally believe the StrictMock factory method should be used whenever a Return object is required.
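Under that guideline, the setup above would simply swap the factory method (a sketch; the rest of the arrangement is unchanged):

// A wrong-parameter call now fails fast with an ExpectationViolationException
// instead of silently returning null.
IProvider mockProvider = repo.StrictMock<IProvider>();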

Wednesday, August 18, 2010

WCF Client performance improvement

Recently I had to try various methods to overcome performance problems caused by a large schema representing the canonical model to be used in our SOA layer.

The biggest bottleneck was the initial call to the service that used the model in its service contract. WCF generates serialization code on the fly when the first service request is sent - and due to the size and structure of the schema, the amount of generated code was large: 20 MB!

The reason the structure was an issue is that the model dictates the use of a generic message wrapper type containing an element which lets you specify the request and response type. In the XSD definition this element has several hundred choices of what it could be - so when it's deserialized, it is of type object, decorated with several hundred XmlElementAttribute attributes, since it could be any of the types specified in those attributes. A truncated example of the generated code is below.

[System.Xml.Serialization.XmlElementAttribute("exampleRequest1", typeof(ExampleRequest1))]
[System.Xml.Serialization.XmlElementAttribute("exampleResponse1", typeof(ExampleResponse1))]
// … you get the idea
[System.Xml.Serialization.XmlElementAttribute("exampleRequest200", typeof(ExampleRequest200))]
[System.Xml.Serialization.XmlElementAttribute("exampleResponse200", typeof(ExampleResponse200))]
public object Item {…


So the generated serialization code needs to handle every possible combination that could be thrown at it (all the possible types declared in the XmlElement attributes). In other words: a lot of if statements.

To overcome the cost of generating the serialization code, I used the steps described in this MSDN article - http://msdn.microsoft.com/en-us/library/aa751883.aspx. I went with option 3: “Compile the generated serialization code into your application assembly and add the XmlSerializerAssemblyAttribute to the service contract that uses the XmlSerializerFormatAttribute. Do not set the AssemblyName or CodeBase properties. The default serialization assembly is assumed to be the current assembly”.

I used the following method to generate my assembly with the serialization code:

  • Add the XmlSerializerAssembly and XmlSerializerFormat attributes to my service contract (example below).

[ServiceContract(Name = "ITest", Namespace = "http://testnamespace"), XmlSerializerFormat]
[XmlSerializerAssemblyAttribute()]
public interface ITest
{

  • Compile my assembly
  • Run svcutil against my assembly - an example of the command line: svcutil C:\Source\Project\TestServiceContract\bin\TestServiceContract.dll /t:xmlserializer
  • Copy the file generated by svcutil to the project folder of the assembly and include it in the project.
  • Rebuild the assembly.
  • Done.
In the above example TestServiceContract.dll contains only the service contract definition, not the implementation. Using this approach to resolve the serialization cost means you have to share the service contract assembly between the client and the service (meaning the client must use the ChannelFactory) - so the service definition should live in its own assembly. Sharing the assembly means both the client and the service have access to the serialization code.
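For context, consuming a shared contract assembly via the ChannelFactory looks something like this (a sketch; the endpoint configuration name "TestEndpoint" is assumed):

using System.ServiceModel;

var factory = new ChannelFactory<ITest>("TestEndpoint"); // hypothetical client endpoint name from config
ITest client = factory.CreateChannel();
// ... call service operations on client ...
((IClientChannel)client).Close();
factory.Close();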

One option I looked at was the ServiceKnownType attribute, which is specific to the DataContractSerializer. The positive with the DataContractSerializer is that it really is fast - even with the same Item property (and its several hundred XmlElementAttribute declarations) shown earlier.


If I update my service contract to the following:
[ServiceContract(Name = "ITest", Namespace = "http://testnamespace"), DataContractFormat]
[ServiceKnownType(typeof(ExampleRequest1))]
[ServiceKnownType(typeof(ExampleResponse1))]
public interface ITest
{


Service performance is dramatically faster. In fact, even though I am still using the ChannelFactory, so the channel isn't cached (I'll get onto that later), the cost of generating the channel is very cheap. I assume this is because the number of types that need to be reflected over when creating the channel is considerably lower, since we have specified the types the service is interested in using ServiceKnownType.

However, since we build our services using the Contract First pattern, the DataContractSerializer is not an option: the payload produced across the wire doesn't resemble the original XSD at all - in other words, you have no control over how the XML will look.

So the next performance bottleneck was the generation of the channel. Since the ChannelFactory is being used, the channel is not cached; however, if you use a proxy class generated by svcutil, ClientBase caches its internal ChannelFactory. More details here: http://blogs.msdn.com/b/wenlong/archive/2007/10/27/performance-improvement-of-wcf-client-proxy-creation-and-best-practices.aspx.
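One mitigation (my sketch, not from the article; the endpoint name is assumed) is to cache the ChannelFactory yourself and create cheap channels from it per call:

using System.ServiceModel;

public static class TestChannelCache
{
    // Building the ChannelFactory is the expensive part; CreateChannel is cheap.
    private static readonly ChannelFactory<ITest> Factory =
        new ChannelFactory<ITest>("TestEndpoint"); // hypothetical endpoint name

    public static ITest CreateClient()
    {
        return Factory.CreateChannel();
    }
}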

Again, an assumption on my part as to why channel generation is costly in this example: as described in Wenlong Dong's client improvement article, when the channel is created all of the required CLR types are reflected over - and because of the Item property in the example and all of the possible types it could be, a lot of types get reflected. (This is where I believe the ServiceKnownType approach mentioned before resolves the issue: channel generation is fast because ServiceKnownType suppresses the unneeded types not used by the service.) I needed a solution like this using the XmlSerializer, and I worked it out using the process below:

  1. Generate the XmlSerializer file against the service DLL with all the required XmlElement attributes in place (i.e. the many XmlElement attributes above the Item property).
  2. Copy the generated serialization file to the service folder, add it to the project etc.
  3. Remove all the XmlElement attributes from the Item property and add [System.Xml.Serialization.XmlElementAttribute(Order = 0)] - see the sketch below.
  4. Rebuild the project that contains the CLR schema definitions.
  5. Latency is very low.
So the key to the above is that the XmlSerializer code must be generated with the expected types for the Item property in place (the Item property decorated with multiple XmlElement attributes). Once the serialization code is generated, you can remove the expected types from the property - the XmlSerializer can still serialize them, because they were expected types in the serialization code generated in step 1.
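To illustrate step 3, this is roughly what the Item property looks like after the edit (a sketch; the property shape is assumed):

// All the XmlElementAttribute(type) declarations have been stripped; the
// pre-generated serializer from step 1 still knows the concrete types.
[System.Xml.Serialization.XmlElementAttribute(Order = 0)]
public object Item { get; set; }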

So, using my assumption about why channel generation is so costly: the cost of generating the channel is now reduced, as it doesn't have to reflect over as many types (which, according to Wenlong, is the biggest cost) - yet serialization still succeeds because, as mentioned, the generated serialization code is aware of these types.

Monday, August 9, 2010

Serializing with CDATA as the content of an element

Recently I came across a problem where I needed to serialize a string property whose content should be wrapped in a CDATA section - the content of the property/element was HTML markup. If you serialize the property using the XmlSerializer out of the box, you get output like the following:

&lt;![CDATA[&lt;h3&gt;Service Job 1561245&lt;/h3&gt;

The solution is to change the property type from string to XmlCDataSection. You can leave the backing field as a string (that's descriptionField below; the property name Description is assumed for illustration), but the get and set of the property need to look something like this:


private string descriptionField;

public XmlCDataSection Description // type changed from string
{
    get
    {
        // wrap the stored string in a CDATA section for serialization
        XmlDocument xmlDocument = new XmlDocument();
        return xmlDocument.CreateCDataSection(descriptionField);
    }
    set
    {
        descriptionField = value.Value;
    }
}
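To sanity-check the result, a quick usage sketch (the containing class name ServiceJob is an assumption; assumes using System, System.Xml and System.Xml.Serialization):

var xmlDocument = new XmlDocument();
var job = new ServiceJob();
job.Description = xmlDocument.CreateCDataSection("<h3>Service Job 1561245</h3>");
new XmlSerializer(typeof(ServiceJob)).Serialize(Console.Out, job);
// the serialized element now contains <![CDATA[<h3>Service Job 1561245</h3>]]>
// rather than the escaped entities shown above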

Sunday, August 8, 2010

Building blocks of SOA article

The following SOA Mag article uses a great analogy (Lego blocks) to describe the principles of SOA - http://www.soamag.com/I36/0210-1.php

Thursday, August 5, 2010

Great Provider model article

Had to implement the Provider model recently; this is a great article on how to do so using the .NET Framework - http://dotnetslackers.com/articles/designpatterns/HowToWriteAProviderModel.aspx

Wednesday, July 28, 2010

WCF and FitNesse - how to load the client's config file

Firstly, here is a great post on how to get started with FitNesse - http://schuchert.wikispaces.com/Acceptance+Testing.UsingSlimDotNetInFitNesse. This is what I used to get started. What I wanted to do was use FitNesse to perform integration tests against my web services - which means the assembly that the fitSharp runner loads must also load the config file for that assembly (as it contains the serviceModel configuration required when creating the client proxy within the loaded assembly).

Under the Create a page with necessary configuration heading in the above link, it shows how to set up your test fixtures to use the .NET test runner. The line we're interested in is the one below.


!define COMMAND_PATTERN {%m -r fitSharp.Slim.Service.Runner,c:\tools\nslim\fitsharp.dll %p}

We need to update this line to include a reference to our test project's config file via the -a switch. So an example would be:


!define COMMAND_PATTERN {%m -a c:\projects\someapplication\acceptancetests\acceptancetests.dll.config -r fitSharp.Slim.Service.Runner,c:\tools\nslim\fitsharp.dll %p}

This allows the config to be loaded when your acceptancetests.dll makes a call through a WCF proxy (i.e. acceptancetests.dll contains the type which Runner.exe will call - and this type calls the service).

Sunday, July 25, 2010

MSDeploy, MSBuild and environment specific config files

MSBuild's property switch (/p:) allows you to override project-level properties. Using /p, you can specify which configuration to build against - so if you have your environment-specific config files transforming correctly (more details here: http://blogs.msdn.com/b/webdevtools/archive/2009/05/04/web-deployment-web-config-transformation.aspx), you can automate building environment-specific deployment packages with MSBuild by using the target switch (/t:) and specifying Package - which results in MSDeploy being the deployment mechanism used.
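For reference, a config transform is just an override file per configuration - a minimal sketch (Web.Staging.config; the connection string name and value are made up):

<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <!-- replaces the matching entry from the base web.config at package time -->
    <add name="Main" connectionString="Server=staging-db;Database=App;Integrated Security=true"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
</configuration>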

So an example of this: MSBuild "[project name].csproj" /t:Package /p:Configuration=[environment configuration name, e.g. Staging];PackageLocation="[zip file name & path]"

So the model would be: create a config file per environment (configuration), and create a deployment package per environment using the above command line. You then have a deployment package per environment ready to go, without having to make any environment-specific changes to the config files. Nice and clean, plus the added bonus that the generated package contains a deploy.cmd file which encapsulates MSDeploy to roll out the package.

More useful info on MSBuild and packaging:
http://vishaljoshi.blogspot.com/2009/02/web-packaging-creating-web-packages.html

Thursday, July 8, 2010

MSDeploy bug with config providers

Discovered a small but annoying bug when syncing config files where the destination config file doesn't have the configSections element declared (common when syncing the root web.config file) and the source config file does have this section (in other words, the configSections element will be added along with the new section/sectionGroup definition).

So, assuming you have the required syncing rules turned off (the SchemaSection rule, or editing the MSDeploy config settings file), after the sync occurs the new configSections element is created by MSDeploy... however, it's not placed at the top of the config as the first child element of the root configuration element - which breaks the rule:

If the configSections element is in a configuration file, the configSections element must be the first child element of the configuration element.

So the workaround is to manually add the configSections element to the destination config file before performing the sync. Doing so means the new section/sectionGroup is added in the correct place in the config file.
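For illustration, the manual pre-sync edit looks something like this (a sketch, assuming an empty element is enough for MSDeploy to populate):

<configuration>
  <!-- added by hand, as the first child, so MSDeploy inserts new sections here -->
  <configSections>
  </configSections>
  <!-- ... rest of web.config ... -->
</configuration>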

MSDeploy 1.1 does not sync 4.0 assemblies

There doesn't seem to be a lot of info on this, hence I spent a while trying to sync some .NET 4.0 assemblies using MSDeploy - however, I received confirmation today from the IIS team that version 1.1 of MSDeploy can't do this. Presumably this is on the feature roadmap, since you can already sync the 4.0 machine and web config files.

http://forums.iis.net/p/1168877/1949692.aspx

Wednesday, May 12, 2010

IIS 7 Extension Authoring

Tutorial on how to create a UI module from scratch. Note that the module cannot be built targeting the .NET 4.0 Framework.

Link

AppFabric's exception handling pattern and the IErrorHandler behaviour

So currently the following is true:
  • AppFabric only logs unhandled exceptions from the service implementation (i.e. the exception that is handled by IErrorHandler, if that behaviour is implemented).
  • Throwing an exception in IErrorHandler.HandleError will result in w3wp.exe throwing an unhandled Microsoft .NET Framework exception, and the exception isn't logged again in AppFabric. Regardless, the fault (constructed in IErrorHandler.ProvideFault) is still successfully provided to the client, as ProvideFault is called before HandleError.
  • AppFabric does log the inner exception if provided.
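For reference, a minimal IErrorHandler sketch consistent with the behaviour described above (the fault message text is made up; the ProvideFault body follows the standard WCF sample pattern):

using System;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

public class LoggingErrorHandler : IErrorHandler
{
    // Called first - builds the fault that is returned to the client.
    public void ProvideFault(Exception error, MessageVersion version, ref Message fault)
    {
        var faultException = new FaultException("Server error encountered. Details have been logged.");
        MessageFault messageFault = faultException.CreateMessageFault();
        fault = Message.CreateMessage(version, messageFault, faultException.Action);
    }

    // Called after ProvideFault. Returning true marks the exception as handled;
    // throwing from here takes down w3wp.exe, as noted above.
    public bool HandleError(Exception error)
    {
        // log the exception (and its inner exception) here
        return true;
    }
}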