Tuesday, December 20, 2011

Adding Soap Headers at the client using OperationContextScope

Pretty much a reminder for me, because I always forget exactly how to add custom headers dynamically at the client end, i.e. headers that are not part of the service contract.

You can achieve this using the OperationContextScope - http://msdn.microsoft.com/en-us/library/aa395196.aspx

This also means that you can access the headers in the response message using OperationContext.Current.IncomingMessageHeaders.
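As a minimal sketch (the header name, namespace, and client/operation names here are hypothetical, not from a real contract), adding and reading headers looks something like this:

```csharp
// Assumes 'client' is a generated WCF client proxy (ClientBase<T>).
using (new OperationContextScope(client.InnerChannel))
{
    // Add a custom header to the outgoing request.
    var header = MessageHeader.CreateHeader(
        "MyCustomHeader",                 // hypothetical header name
        "http://example.org/headers",     // hypothetical namespace
        "some value");
    OperationContext.Current.OutgoingMessageHeaders.Add(header);

    client.DoStuff();                     // hypothetical operation

    // Inside the same scope, the response headers are also available.
    int index = OperationContext.Current.IncomingMessageHeaders
        .FindHeader("AnotherHeader", "http://example.org/headers");
    if (index >= 0)
    {
        string value = OperationContext.Current.IncomingMessageHeaders
            .GetHeader<string>(index);
    }
}
```

Note the scope must still be open when you read IncomingMessageHeaders, i.e. read them inside the using block after the call returns.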

Sunday, November 13, 2011

Performance and loading testing WCF services with JMeter - Part 2

A while ago I wrote a brief post about JMeter and performance / load testing WCF services. This post lists the steps I normally go through to set up JMeter from scratch to test a service, including parameterising the request payload using a CSV file.

  • Add a new thread group to the test plan. When using a CSV data set, the loop count determines how many times the file is processed.

  • Under the thread group add a CSV Data Set Config element (available under Config Element). Typically I have the CSV file in the same location as the .jmx file, which allows you to just specify the file name. In the example below, RequestPayloads.csv contains one payload per line, and request is the variable name that will be used as a placeholder to insert the sample into the SOAP envelope. This will become clearer in the next step.

  • Add a Loop Controller to the thread group (located under Logic Controller), and tick the Forever option. This will ensure all entries in the CSV file are processed (rather than having to configure the thread group with exactly the right number of loops to completely process the file). Under the Loop Controller add a SOAP/XML-RPC Request element. Enter the appropriate endpoint (URL) and SOAP action (ensure that this is ticked). The request body is where it gets interesting. Here I have stripped out the entire body of the request (i.e. the elements inside the SOAP Body element - <soapenv:Body>), and replaced it with ${request} - where request is the name of the variable we set in the CSV Data Set Config element.

  • Each line in the CSV file contains the appropriate XML to replace ${request}. So, for example, the service that JMeter is testing expects a body element called DoStuffRequest, so each line in my CSV file contains:

<DoStuffRequest><anotherElement><name>john</name></anotherElement></DoStuffRequest>

<DoStuffRequest><anotherElement><name>sam</name></anotherElement></DoStuffRequest>

<DoStuffRequest><anotherElement><name>peter</name></anotherElement></DoStuffRequest>

  • To determine how successful our requests are (and to prove that the CSV file is being processed) we'll need to add some listeners. Add a View Results Tree element under the thread group. Start the test, and you should see something similar to the below, where each individual request is logged. Select the Request tab and check that the request has been formatted correctly using the CSV file, then select the Response data tab and check that it's the response you're expecting.

  • A couple of other listeners that I normally use are the Summary Report and the Spline Visualizer. The Summary Report is good for latency metrics, while the Spline Visualizer is good for determining the behaviour of a service over time (e.g. does the latency slowly increase?).

  • To increase the load that the service is under, up the thread count in the Thread Group element, which will result in more threads processing the CSV data set.
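To tie the steps together, the request body entered in the SOAP/XML-RPC Request element ends up looking something like this (the soapenv prefix matches the example above; JMeter substitutes one CSV line into ${request} per iteration):

```xml
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Header/>
  <soapenv:Body>
    ${request}
  </soapenv:Body>
</soapenv:Envelope>
```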

Tuesday, October 4, 2011

WCF Routing Service and routing on content

If you're wanting to do some content based routing on the message payload using XPath (i.e. creating an XPath filter type), ensure you have routeOnHeadersOnly set to false (which is set on the routing behaviour - <routing routeOnHeadersOnly="false">). If not, a FilterInvalidBodyAccessException will be thrown, which has the following message:

A filter has attempted to access the body of a Message. Use a MessageBuffer instead if body filtering is required.

Setting routeOnHeadersOnly to false causes the message to be buffered, so the filters can access the body.
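A rough sketch of the relevant config - the filter, table, and endpoint names are hypothetical, and the XPath just matches a body element from the earlier example:

```xml
<routing>
  <filters>
    <!-- XPath filter that inspects the message body -->
    <filter name="doStuffFilter" filterType="XPath"
            filterData="boolean(//*[local-name()='DoStuffRequest'])" />
  </filters>
  <filterTables>
    <filterTable name="routingTable">
      <add filterName="doStuffFilter" endpointName="doStuffService" />
    </filterTable>
  </filterTables>
</routing>
```

And on the routing service behaviour:

```xml
<routing routeOnHeadersOnly="false" filterTableName="routingTable" />
```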

Monday, August 15, 2011

ReplyAction="*"

Recently came across a problem where the ReplyAction property in the OperationContract attribute was set to * - ReplyAction="*". wscf.blue was generating it this way because the format SOAP action option wasn't selected.

This was a problem as the operation was excluded when WCF generated the WSDL. After working out that it was ReplyAction="*" causing the problem, I googled it and found the answer on StackOverflow.
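One fix is to give the operation an explicit reply action instead of the wildcard; a sketch with hypothetical names and namespace:

```csharp
[ServiceContract(Namespace = "http://example.org/service")]
public interface IMyService
{
    // Explicit Action/ReplyAction, rather than ReplyAction="*",
    // so the operation appears in the generated WSDL.
    [OperationContract(
        Action = "http://example.org/service/DoStuff",
        ReplyAction = "http://example.org/service/DoStuffResponse")]
    DoStuffResponse DoStuff(DoStuffRequest request);
}
```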

Monday, August 8, 2011

AppFabric’s E2EActivityId and non .Net 4.0 clients

Recently came across a scenario where the client wanted to supply the ActivityId that is to be used in AppFabric; however, they weren't a .Net 4.0 client. If the client supplies the ActivityId in a SOAP header on the request, the framework will use it rather than creating its own (assuming the monitoring level is End-To-End).

For .Net 4.0 clients, it's very easy to specify the value at the client end using Trace.CorrelationManager.ActivityId and ensuring that <endToEndTracing propagateActivity="true"/> is set in the client config. However endToEndTracing is new to .Net 4.0.
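For a .Net 4.0 client, that looks something like the following (the operation name is hypothetical):

```csharp
// With <endToEndTracing propagateActivity="true"/> set under
// system.serviceModel/diagnostics in the client config, WCF propagates
// this activity id in the ActivityId SOAP header of the request.
Trace.CorrelationManager.ActivityId = Guid.NewGuid();
client.DoStuff(); // hypothetical operation
```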

Initially I tried: OperationContext.Current.OutgoingMessageHeaders.Add(MessageHeader.CreateHeader("ActivityId", "http://schemas.microsoft.com/2004/09/ServiceModel/Diagnostics", activityId.ToString()))

where activityId was my Guid in the client. However, the ActivityId type also has a correlation id property, which is an attribute in XML. And that's the problem: I'm not aware of a way to add a SOAP header element where you need to set a custom attribute on that element. The framework will ignore the ActivityId header if no correlation id is supplied as well.

So the option I went with was modifying the MessageContract to include the ActivityId header, so when you create the request at the client, you have the option of setting the Activity and Correlation Id. Here is the ActivityId type:

    [System.Xml.Serialization.XmlTypeAttribute(Namespace = "http://schemas.microsoft.com/2004/09/ServiceModel/Diagnostics")]
    [System.Xml.Serialization.XmlRootAttribute(Namespace = "http://schemas.microsoft.com/2004/09/ServiceModel/Diagnostics", IsNullable = false)]
    public class ActivityId
    {
        [System.Xml.Serialization.XmlAttributeAttribute()]
        public Guid CorrelationId;

        [System.Xml.Serialization.XmlTextAttribute()]
        public Guid Value;
    }

Next, add a property of this type to the request message contract:

    [System.ServiceModel.MessageHeaderAttribute(Namespace = "http://schemas.microsoft.com/2004/09/ServiceModel/Diagnostics")]
    public ActivityId ActivityId;

Done. FYI – the schema definition for the ActivityId type is available here http://msdn.microsoft.com/en-us/library/cc485806(PROT.10).aspx
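At the client, setting the header on the request then looks something like this (the request type and operation names are hypothetical):

```csharp
var request = new DoStuffRequest
{
    ActivityId = new ActivityId
    {
        Value = Guid.NewGuid(),          // the activity id AppFabric will record
        CorrelationId = Guid.NewGuid()   // required - the header is ignored without it
    }
};
client.DoStuff(request);
```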

Monday, June 13, 2011

soapUI and wsHttpBinding

Below are the steps required to allow soapUI to consume a WCF service which is using wsHttpBinding.

  1. By default Message Security is turned on for wsHttpBinding, which isn't supported by soapUI. So the security mode for the binding needs to be set to None:


    <bindings>
      <wsHttpBinding>
        <binding>
          <security mode="None">
          </security>
        </binding>
      </wsHttpBinding>
    </bindings>


  2. After creating the new soapUI project, the only update required is to select the 'Add default wsa:To' check box (as WS-Addressing is part of wsHttpBinding). Enable/Disable WS-A addressing should already be selected, along with the appropriate SOAP action. The below options are available when you click the WS-A button, which is part of the Request window.


Wednesday, May 25, 2011

The specified database is not a valid Monitoring database or is not available

During a rollout into production, the Windows sys admin was responsible for executing the AppFabric installation scripts I had put together, which include the Initialize-ASMonitoringSqlDatabase cmdlet; this allows you to create the monitoring database for AppFabric to use.
Just a reminder that the account running the script / cmdlet must be a sysadmin on the target SQL Server, or you'll get the "not a valid Monitoring database..." error, which we were getting.
More details are here.
The error message returned from the Initialize-ASPersistenceSqlDatabase cmdlet is a little more descriptive - "Failed to connect to the master database on the SQL server.... Could not open a connection to SQL Server". However, this error has the same cause.
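For reference, the cmdlet invocation in the script looked roughly like this (server and database names are made up):

```powershell
Import-Module ApplicationServer
Initialize-ASMonitoringSqlDatabase -Server "SQL01" -Database "AppFabricMonitoring"
```

The account running this needs sysadmin rights on SQL01, as per the note above.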

Monday, May 9, 2011

Default physical path when deploying web applications using MSDeploy

If you deploy a web application without specifying a physical location in the parameter settings, MSDeploy will use the physical location of the web site under which the web application is being deployed.

Specifically, MSDeploy uses the physicalPath attribute for the site defined in the applicationHost.config file, which is located in C:\Windows\System32\inetsrv\config.

So for example, my default web site is defined as physicalPath="E:\inetpub\wwwroot", which is specified in the virtualDirectory element. I have an application whose virtual path is under the default web site. When I deploy this application, which doesn't have a physical location specified as a parameter, it will use E:\inetpub\wwwroot as the base, and create a sub folder - e.g. E:\inetpub\wwwroot\NewApplication.

One thing to keep in mind: once the deployment has been performed, a new application element is created in the applicationHost.config file for the deployed web application. If you delete the application from IIS, the configuration for the application remains. This is key to remember, as it will now become the default physical location for the application, regardless of the physical location of the default web site. So if you want MSDeploy to use the default web site's physical path as the base (assuming you're deploying your application under the default web site), you'll have to manually delete the application element for the web application in the applicationHost.config file before executing the deployment.
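The relevant fragment of applicationHost.config after such a deployment looks roughly like this (paths and names follow the example above):

```xml
<site name="Default Web Site" id="1">
  <application path="/">
    <virtualDirectory path="/" physicalPath="E:\inetpub\wwwroot" />
  </application>
  <!-- Added by MSDeploy for the deployed web application -->
  <application path="/NewApplication">
    <virtualDirectory path="/" physicalPath="E:\inetpub\wwwroot\NewApplication" />
  </application>
</site>
```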

Monday, May 2, 2011

IErrorHandler.ProvideFault & serialization issues using XmlSerializer

I'm pretty much repeating what I posted here on the MSDN forum, but my workaround seems to be doing fine, so I thought it worth noting down in case someone else comes across this problem.

If you're using the XmlSerializer rather than the default DataContractSerializer, beware that once you are outside the context of the service implementation (i.e. in a behaviour), WCF will use the DataContractSerializer. Why is this a problem?

Well, if you have implemented IErrorHandler and are recreating the fault message that is returned to the client (e.g. using the CreateMessageFault method on the FaultException instance), the message gets serialized again. The trouble is, there is no way (as far as I am aware) to specify which serializer to use (hence my post on MSDN), and WCF falls back to its default, the DataContractSerializer.

I'm using a fault contract when creating the FaultException - i.e. FaultException<ErrorInfo> - and my ErrorInfo type is decorated with the appropriate XML attribute indicating its XML type and namespace. E.g.:

[XmlType("ErrorInfo", Namespace="http://FaultContract")]
public class ErrorInfo

When this is returned to the client, a different namespace arrives for the ErrorInfo type (rather than http://FaultContract), which means the client can't deserialize it. The XML namespace is instead http://schemas.datacontract.org/2004/07/ etc., which is not what the client is expecting.

The reason this is happening is that the DataContractSerializer looks for the DataContract attribute to determine what to use for the XML namespace. Since the ErrorInfo type isn't decorated with this attribute (which makes sense - I'm not using the DataContractSerializer), a default namespace is used instead.

To fix this, I have decorated the ErrorInfo type with both the XmlType attribute, and DataContract attribute:

[DataContract(Name="ErrorInfo", Namespace="http://FaultContract")]
[XmlType("ErrorInfo", Namespace="http://FaultContract")]
public class ErrorInfo

This now handles both scenarios: the fault message created within the context of the service (where you can specify the serializer to use), and the fault message created within a behaviour.
It took me a while to determine why this was occurring, but eventually I was able to isolate where the problem was (the behaviour). When I turned the IErrorHandler behaviour off, the client was able to deserialize the ErrorInfo type correctly (as the correct namespace was used).
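For context, the fault recreation in an IErrorHandler implementation looks roughly like this - a simplified sketch, not my exact handler:

```csharp
public void ProvideFault(Exception error, MessageVersion version, ref Message fault)
{
    // This is the point where WCF re-serializes ErrorInfo with the
    // DataContractSerializer - hence the need for the DataContract
    // attribute alongside XmlType on ErrorInfo.
    var faultException = error as FaultException<ErrorInfo>
        ?? new FaultException<ErrorInfo>(new ErrorInfo(), error.Message);

    MessageFault messageFault = faultException.CreateMessageFault();
    fault = Message.CreateMessage(version, messageFault, faultException.Action);
}
```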

Thursday, April 14, 2011

EventProvider class - make sure it's a singleton

For emitting user defined events into AppFabric's monitoring database, I'm using the WCFUserEventProvider class (available in the WCF 4.0 samples). Basically this class wraps the EventProvider class, which is the underlying class that provides the ability to emit user defined events to ETW.

During some performance/load testing of a WCF service, the app pool process continued to use more and more memory, which resulted in a gradual reduction in throughput. Eventually I discovered I was instantiating a new instance of the WCFUserEventProvider class (and therefore creating a new instance of the EventProvider class) every time a user defined event was logged.

Once I changed my implementation to just have a static instance of the EventProvider, there was no more memory leakage and the throughput remained constant.

It's the construction of the EventProvider class that is the costly part, not the actual submitting of the event.
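A simple way to enforce this is a static wrapper around a single instance; a sketch, where the constructor and WriteInformationEvent signature are assumed from the sample's wrapper class:

```csharp
public static class UserEventLog
{
    // Construct the provider once - construction is the expensive part.
    private static readonly WCFUserEventProvider Provider =
        new WCFUserEventProvider();

    public static void Info(string eventName, string message)
    {
        // Method name assumed from the WCF 4.0 samples' WCFUserEventProvider.
        Provider.WriteInformationEvent(eventName, message);
    }
}
```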

Tuesday, March 29, 2011

Creating MSI installation scripts using Orca

ADDLOCAL is an argument you can pass to an MSI installation package when invoking it through the command line. It allows you to specify which of the features available in the MSI you want to install. Which is great - but how do you know what the features are called?

Orca is the answer. Orca is available in the Windows SDK, although it’s an additional install once you’ve installed the SDK.

Open the MSI using Orca, and you'll be presented with a lot of metadata about the MSI. The table we're interested in is Feature. Listed are all the feature names that can be passed in the ADDLOCAL argument.

So for example, I wanted to make sure the MSDeploy Service Agent feature was installed (it's not installed by default, so if you install the MSI in quiet mode / with no UI, it's not included). Using Orca I was able to figure out what the Agent Service feature is called according to the MSI (MSDeployAgentFeature).

So my PowerShell script to install MSDeploy looked something like:

    start-process -filepath msiexec -argumentlist /q, "/l log.txt", "/package webdeploy_x64_en-us.msi", "ADDLOCAL=""MSDeployAgentFeature,MSDeployUIFeature""" -wait

...where I've indicated I want the UI and Agent features installed. Without Orca I would have had no idea what they are called. Orca can also indicate what public properties are available when invoking the MSI - see the Property table.

Friday, February 25, 2011

appHostSchema provider ignoring files when synching

I was packaging the IIS schema config files using the appHostSchema provider, which included my custom file. When the package was created, my custom file wasn't included. This was also occurring when I was synching server to server rather than using a package (to see if that would make a difference) - again, my custom file was being excluded from the sync. This had been working fine a couple of weeks earlier.

Eventually I found out why this was occurring: I had recently put my custom file into source control, which meant the file had become read-only. I had manually copied the file to the config\schema folder on my local PC, and when I tried packaging the files on my PC, the file was being excluded there too.

Once I turned off the read-only attribute, my file was once again included when synching.