Thursday, December 15, 2016

Using environment variables with AWS Lambda & C#

Environment variables are a great new feature that has been added to Lambda functions in AWS. But how do you access them in a C# Lambda function? Digging through the context parameter, I thought I'd found them at ILambdaContext.ClientContext.Environment.

But it turns out they are available through the static class System.Environment, e.g.:

var variableValue = Environment.GetEnvironmentVariable("nameOfVariable");
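For example, a handler-style sketch (the variable name DB_CONNECTION and the fallback value are my illustrative assumptions, not from the Lambda console):

```csharp
using System;

public class Function
{
    // Reads a configuration value set in the Lambda console's
    // "Environment variables" section. Returns a fallback when unset.
    public static string GetConnectionString()
    {
        // "DB_CONNECTION" is a hypothetical variable name for illustration.
        var value = Environment.GetEnvironmentVariable("DB_CONNECTION");
        return value ?? "default-connection";
    }
}
```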

....

edit: wow just realised it's been almost two years since my last post.... time to bring this thing back to life :)

Thursday, January 29, 2015

A process to identify when the SRP principle is broken

Without concrete examples, software principles and practices can be hard to understand. Take the principles of SOLID - for a developer who is new to these principles, I could attempt to describe them, but without showing concrete examples it would be difficult to understand what each principle means.

Excluding SRP - after some quick Googling, it is easy to find examples of the other four (O, L, I and D) principles being broken (and how to adhere to them).

I would argue that SRP is the most subjective of the five principles: "the single responsibility principle states that every class should have a single responsibility, and that responsibility should be entirely encapsulated by the class. All its services should be narrowly aligned with that responsibility."

Ensuring that a class has only a single responsibility is not an exact science. Ask multiple developers to decompose the same functionality into responsibilities and you will nearly always end up with different classes. Compare that with a principle like Dependency Inversion, where you would expect developers to adhere to it in much the same way most of the time.

So the following is an attempt to demonstrate a thought process I use when building classes - a way of putting some science behind determining when a new responsibility has appeared.

It can basically be summed up as: Inheritance vs Composition.

When adding a new behaviour / function to an existing class (responsibility), I ask myself: if I wanted to extend this function later on, would it make sense to create a new derived class (i.e. inherit from the existing class and override the function), or would it make sense for the new function to be encapsulated in a new class (responsibility), so that the existing class is composed of this new class (or interface, if you're correctly adhering to the Dependency Inversion principle)?

The way to work that out is to ask yourself another question: could the implementation of the function be extended / changed regardless of the inheritance hierarchy?

If the answer is yes, then it's looking very likely to be a new responsibility.

Answering no means the function does belong to the existing class (responsibility): it means you have identified that you would only ever create a new class specifically to extend the function - i.e. that's the only reason you would create one. You have determined that the behaviour is part of the responsibility. Here is a concrete example:

Below is a trivial Address class, along with a derived MailingAddress. FormattedStreet is what we will focus on.
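A minimal sketch of that starting point (only Address, MailingAddress and FormattedStreet come from the text; the StreetNumber / StreetName properties and the format itself are my illustration):

```csharp
public class Address
{
    public string StreetNumber { get; set; }
    public string StreetName { get; set; }

    // Formatting behaviour implemented directly on Address -
    // this is the placement the post goes on to question.
    public virtual string FormattedStreet
    {
        get { return StreetNumber + " " + StreetName; }
    }
}

public class MailingAddress : Address
{
    // Inherits FormattedStreet automatically.
    public string PoBox { get; set; }
}
```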

So, assuming you need to add a formatting behaviour somewhere - the Address class might seem the appropriate place to implement it (as it is below). However, if you ask the question: "could the implementation of the function be extended / changed regardless of the inheritance hierarchy", the answer should be yes.

This is because MailingAddress will automatically inherit this behaviour, and if we need to change the formatting, extending Address (i.e. overriding the FormattedStreet property in a new class) will mean MailingAddress misses out, which may not be acceptable. Therefore you could safely say that SRP has been broken, as the class is implementing more than one responsibility.


The following is an implementation which still appears the same to the consumer of the class - i.e. there is still a FormattedStreet property - however the formatting implementation lives in a new class / interface, and Address is now composed of this class, since we've discovered it is actually its own responsibility. The best part is that not only will MailingAddress inherit the concrete IAddressFormatter through Address, you can also supply a concrete IAddressFormatter specific to MailingAddress (i.e. an implementation for mailing addresses). So it's the best of both worlds.
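A sketch of that composed implementation (IAddressFormatter, FormattedStreet and the Address / MailingAddress classes follow the text; the other member names are illustrative):

```csharp
public interface IAddressFormatter
{
    string Format(string streetNumber, string streetName);
}

public class DefaultAddressFormatter : IAddressFormatter
{
    public string Format(string streetNumber, string streetName)
    {
        return streetNumber + " " + streetName;
    }
}

public class Address
{
    private readonly IAddressFormatter _formatter;

    public Address(IAddressFormatter formatter)
    {
        _formatter = formatter;
    }

    public string StreetNumber { get; set; }
    public string StreetName { get; set; }

    // Same property from the consumer's perspective, but the
    // formatting responsibility now lives behind IAddressFormatter.
    public string FormattedStreet
    {
        get { return _formatter.Format(StreetNumber, StreetName); }
    }
}

public class MailingAddress : Address
{
    // MailingAddress can reuse the default formatter, or be handed
    // a mailing-specific IAddressFormatter implementation instead.
    public MailingAddress(IAddressFormatter formatter) : base(formatter) { }
}
```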


Tuesday, October 28, 2014

Onion application architecture and web deploy packaging

A principle of Onion architecture is the inner layers cannot have dependencies on outer layers (i.e. dependencies are inward only). This implies that dependency inversion is key to ensure this principle is not broken.

This does not play nicely with Web Deploy and packaging out of the box. The inward-only dependency principle results in outer layer libraries referencing the inner layer libraries. This means that when you build the inner layer libraries, the outer layer libraries won't be built (because nothing references them). One way around this is a post build action on the outer libraries that copies each DLL to a location where the inner library can resolve an instance of the concrete dependency at runtime (e.g. using MEF). If you use Web Deploy you will need another solution...

Long story short, Web Deploy won't include extra files without some customization. By extra files I mean files that the project / library is not aware of (in other words, is not referencing) - which ties in with the above problem where inner projects don't reference outer projects: when publishing a web project, extra files / outer layer projects will not be included in the publishing process.

You can instruct MSBuild / Web Deploy to include extra files by adding the following to your web project file:

  <PropertyGroup>
    <CopyAllFilesToSingleFolderForMsdeployDependsOn>
      ExternalDependencies;
      $(CopyAllFilesToSingleFolderForMsdeployDependsOn);
    </CopyAllFilesToSingleFolderForMsdeployDependsOn>
  </PropertyGroup>

And,

  <Target Name="ExternalDependencies">
    <ItemGroup>
      <_CustomFiles Include="..\ExternalDependencies\*" />
      <FilesForPackagingFromProject Include="%(_CustomFiles.Identity)">
        <DestinationRelativePath>bin\%(RecursiveDir)%(Filename)%(Extension)</DestinationRelativePath>
      </FilesForPackagingFromProject>
    </ItemGroup>
  </Target>

This blog post has a great summary of these statements. However, I'll summarize the highlighted parts above:

  • CopyAllFilesToSingleFolderForMsdeployDependsOn: Pretty descriptive - this adds the ExternalDependencies target to the packaging process.
  • ExternalDependencies: The custom target, which defines where the additional files (external dependencies) are located.
  • ..\ExternalDependencies\*: The actual location of the additional files to be included in the packaging process, i.e. the outer layer libraries. So typically I would have my external / outer layer libraries copy their build output to this location (e.g. via a post build action). Then, when a publish is kicked off, these libraries are picked up and copied to the bin folder (bin\%(RecursiveDir)%(Filename)%(Extension), the last highlighted part) of the web app.
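For instance, that post build action could look like the following in the outer layer's project file (the xcopy command and paths are assumptions for illustration, not from the original post):

```xml
  <PropertyGroup>
    <!-- Copy this outer layer library's build output to the shared
         ExternalDependencies folder that the packaging target picks up -->
    <PostBuildEvent>xcopy /Y "$(TargetPath)" "$(SolutionDir)ExternalDependencies\"</PostBuildEvent>
  </PropertyGroup>
```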

Monday, July 14, 2014

Onion architecture and MEF

Correctly applying Onion architecture results in stable application concerns being protected from volatile application concerns.

Volatile concerns / responsibilities are coupled with specific frameworks, platforms or tools. For example, persistence is volatile because over time the platform used may change (e.g. Oracle to SQL Server). Another example is UI.

Stable concerns / responsibilities of an application remain the same over time, regardless of the latest technology trends. In other words, business / application functionality is not driven by how a particular platform or framework is used; the requirements for what business functions the application should implement don't change (i.e. they are stable). In Onion architecture, this is defined as the Application Core. This is very similar to a fundamental goal of SOA - protecting client applications from volatile provider applications by using stable abstract business services.

A principle of Onion architecture is the inner layers cannot have dependencies on outer layers (i.e. dependencies are inward only). As you move further into the center, the concerns become more stable - where the domain is the center and most stable part of the application. This implies that dependency inversion is key to ensure this principle is not broken.

MEF can resolve these dependencies dynamically (e.g. act as an IoC container). In other words, MEF can discover concrete dependencies at runtime. This has the huge advantage that the Application Core (stable concerns) does not need explicit registration of available components. The Application Core can be completely clean of any knowledge of the components that implement volatile responsibilities, which makes the volatile responsibilities pluggable. The below diagram illustrates this concept, where the library responsible for implementing the volatile responsibility (in this example, persistence) references the Application Core library. It references the Application Core library because it needs to be able to implement IOrderRepository.

Application Core has an abstract dependency on an order repository - IOrderRepository - and needs to somehow resolve a concrete implementation of this interface (i.e. the concrete implementation defined in the SqlPersistence library). With the appropriate usage of MEF (the Export and Import attributes etc.), this can easily be achieved. It's important to point out again that nothing references the volatile library. You could easily swap in another library implementing IOrderRepository, and the Application Core would require no changes - i.e. it's pluggable.
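A minimal sketch of that arrangement, assuming standard System.ComponentModel.Composition attributes (IOrderRepository and SqlPersistence come from the text; OrderService and the method names are my illustration):

```csharp
using System.ComponentModel.Composition;

// Defined in the Application Core - the stable, inner layer.
public interface IOrderRepository
{
    void Save();
}

// Defined in the SqlPersistence library - the volatile, outer layer.
// It references the Application Core only to implement the interface.
[Export(typeof(IOrderRepository))]
public class SqlOrderRepository : IOrderRepository
{
    public void Save() { /* SQL-specific persistence */ }
}

// Somewhere in the Application Core, the dependency stays abstract:
public class OrderService
{
    [Import(typeof(IOrderRepository))]
    public IOrderRepository Orders { get; set; }
}
```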

Tuesday, November 19, 2013

MEF and deciding which export instance to use

Having used MEF in the past, I was really keen to use it more extensively in an upcoming project, which will involve extending 'core' functionality with customer extensions (i.e. overriding). I'm enjoying the simplicity of MEF for resolving dependencies, and the lack of configuration / setup code required compared with IoC containers. With the number of customer extensions that will eventually be implemented, the amount of configuration required to wire these up won't scale (just as it didn't for the previous version of the product, where Unity was used).

So essentially, if I drop an assembly with customer extensions of base functionality into the bin folder, then the derived Exports should be used over the 'base' Exports. If the customer extension is not present (i.e. just the base Export is), then just use that one.

In code this would look like:

public class CoreApplicationQueryService : IApplicationQueryService

public class CustomerApplicationQueryService : CoreApplicationQueryService

If CustomerApplicationQueryService is present, use that, if not, default to CoreApplicationQueryService.

[Import] won't suffice because there will be multiple matching Exports if the customer version is present, and an exception will be thrown. Therefore [ImportMany] has to be used. But once the multiple Exports have been picked up, I need a way of deciding which instance to use. That is where [ExportMetadata] comes in.

I've used [ExportMetadata] to indicate whether the Export is defined as a (customer) extension or not:

[Export(typeof(IApplicationQueryService))]
[ExportMetadata("Extension", false)]
public class CoreApplicationQueryService : IApplicationQueryService

[Export(typeof(IApplicationQueryService))]
[ExportMetadata("Extension", true)]
public class CustomerApplicationQueryService : CoreApplicationQueryService

... where true ("Extension", true) indicates that this instance is an extension instance.

There is a little bit of magic here: you then need to create an interface whose read-only properties match the parameter names in the ExportMetadata attribute - e.g.:

public interface IExportMetaData
{
    bool Extension { get; }
}

The next step is to import the parts, i.e. have this property set. Note that IExportMetaData forms part of the property definition through the Lazy type:

[ImportMany(typeof(IApplicationQueryService))]
public IEnumerable<Lazy<IApplicationQueryService, IExportMetaData>> ApplicationQueryServices { get; set; }

Next, compose the parts, and then cycle through the resolved instances to find the extended instance (if there is one). An example is below:

var directoryCatalog = new DirectoryCatalog("bin");
var compositionContainer = new CompositionContainer(directoryCatalog);
compositionContainer.ComposeParts(this);

foreach (var item in ApplicationQueryServices)
{
    if (item.Metadata.Extension)
    {
        var message = item.Value.GenerateMessage();
    }
}
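Putting this together, the "use the extension if present, otherwise fall back to the core Export" decision could be sketched as a small helper (this helper is my illustration, not from the post; it works on the same Lazy<T, TMetadata> pairs that [ImportMany] produces):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class ServiceSelector
{
    // Picks the Export flagged as an extension when one exists,
    // otherwise falls back to the core (non-extension) Export.
    public static Lazy<TService, TMetadata> Select<TService, TMetadata>(
        IEnumerable<Lazy<TService, TMetadata>> services,
        Func<TMetadata, bool> isExtension)
    {
        var extension = services.FirstOrDefault(s => isExtension(s.Metadata));
        return extension ?? services.First(s => !isExtension(s.Metadata));
    }
}
```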

Monday, November 18, 2013

Not your typical MVC validation requirements

When reviewing the wireframes and requirements for a new project I am about to work on, it quickly became clear validation (data and business rules) would have to be implemented differently than previous web based projects I've worked on. 

Typical validation, especially for a web application, means you can't submit (POST) your changes unless all is well on the page - in other words, you can't move to another page / screen until there are no validation errors. ASP.NET's validation does this out of the box: if there are any validation errors, the POST isn't performed. I'm assuming client side validation here, which will be necessary for this new project, as the validation errors still need to be displayed on screen (so POSTing to the server just to determine the validation errors won't make sense).

This project will be different:

  1. Users need to know about the validation errors, however it shouldn't stop them moving through out the application to fill out details on other pages. 
  2. The validation errors need to be shown on screen.
  3. The validation errors will remove the ability to perform a commit of all the data collected (in this particular project, the commit is to a legacy banking system via web services). This means the validation rules will be used in multiple parts of the application (the page where the data is collected, and a final commit page) - so the validation rules need to be centralized (because I don't want to repeat them).

So, using MVC with client side validation enabled (meaning the jQuery Validation plug-in will be used), the solutions to these requirements were:

  1. After a while searching the web, I found that adding class="cancel" to the submit button means the submit will still be performed (which is good, so we don't lose the data even if it is invalid) - and when the user returns, it will be loaded as is. More details can be found here (specifically 'Skipping validation on submit').
  2. Invoking the validation on the page being loaded can be performed by doing the following (i.e. perform the validation for everything within the form element):

    $(document).ready(function () {
        $('form').validate();
        $('form').valid();
    });

3. And finally, the centralization of the validation / business rules. I don't want to embed the validation in the view model for the page, because I want re-use - so I'm going to centralize it on the domain (using the Validation block in Enterprise Library). However, because these rules will live on the server and I'm using client side validation, the Remote attribute will allow the rules to be invoked via AJAX - e.g.:

[Remote("ValidateAge", "Applicant", ErrorMessage = "Age is invalid")]

This results in the following attributes being added to the text input element (i.e. using Razor to create the textbox via Html.TextBoxFor(x => x.Age)):

<input data-val="true" data-val-remote="Age is invalid" data-val-remote-additionalfields="*.Age" data-val-remote-url="/Applicant/ValidateAge" id="Age" name="Age" type="text" class="input-validation-error">

So after every change in the textbox, Applicant/ValidateAge will be called (data-val-remote-url), which means the Age validation logic can be invoked on the server. The same validation logic can be invoked again when needed on additional pages (e.g. when determining if the commit call to the banking system can be made).
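The server side of the Remote attribute is an ordinary controller action returning JSON. A sketch, assuming ASP.NET MVC's remote validation conventions (the age rule itself is hypothetical, and the rule is pulled into a static method so the same logic can be reused on the commit page):

```csharp
using System.Web.Mvc;

public class ApplicantController : Controller
{
    // Hypothetical rule for illustration: applicants must be 18 to 120.
    // Kept static so other pages (e.g. the final commit check) can reuse it.
    public static bool IsValidAge(int? age)
    {
        return age.HasValue && age >= 18 && age <= 120;
    }

    // Invoked via AJAX by the jQuery Validation plug-in using the
    // data-val-remote-url shown above. Returning true means valid;
    // false triggers the ErrorMessage from the Remote attribute.
    public JsonResult ValidateAge(int? Age)
    {
        return Json(IsValidAge(Age), JsonRequestBehavior.AllowGet);
    }
}
```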

Wednesday, April 10, 2013

MEF 101

A new project that I've been working on involved separating customer extensions into their own assemblies. More specifically, a WCF service has a dependency on types within an assembly, however these types may be extended in a customer specific library/assembly depending on the customer we are building the solution for.

WCF Service --> Library with Interface / Base classes etc <-- Client extensions library implementing interfaces, and extending base classes

However, I didn't want the WCF service to have a reference to all of the client extension assemblies and use Unity, for example, to resolve the concrete dependency at runtime through config - this will grow over time, so it won't scale, and it's clumsy. It's worth pointing out that Unity won't load the assembly into the AppDomain by itself - so another mechanism is needed (e.g. a project reference).

I'd never used MEF before, but it's perfect for this scenario - dynamic application composition.

Continuing with the WCF service example, I create a property for the dependency I want to 'import' - the example below is a logger dependency, where the ILogger interface is defined in the base library (i.e. what the WCF service is referencing). I have also decorated the property with the MEF Import attribute:

[Import(typeof(ILogger))]
public ILogger Logger { get; set; }

A customer wants to log in a particular way, so we'll create a specific implementation of the ILogger interface in a customer specific assembly. This class has been decorated with the MEF Export attribute - which indicates it is available as a composable part:

[Export(typeof(ILogger))]
public class FlatFileLogger : ILogger

The next step is to build up the type that needs to be composed from parts (i.e. that needs to 'import' an implementation of ILogger). To do this you use the CompositionContainer, AggregateCatalog and ComposablePartCatalog types. In my example, I just wanted to drop an assembly into a specified folder and have MEF pick it up when composing, so the DirectoryCatalog is the catalog type (there are others) that will allow me to do this.

new DirectoryCatalog("bin")

In the above snippet, I've created a DirectoryCatalog where MEF will evaluate all the assemblies in the bin folder - relative to the root folder of the AppDomain. Next I need to add this catalog instance to the AggregateCatalog, and then pass the AggregateCatalog to the CompositionContainer.

var aggregationCatalog = new AggregateCatalog();
aggregationCatalog.Catalogs.Add(new DirectoryCatalog("bin"));
var compositionContainer = new CompositionContainer(aggregationCatalog);

So, assuming I've copied the customer extension assembly into the bin folder (i.e. the FlatFileLogger), I can then compose the parts for the instance that needs to be built up - in the example below, the instance passed into ComposeParts is an instance of my type that needs to be composed with an ILogger (decorated with the Import attribute). Using the configured catalogs, MEF will then try to compose the instance. Since a DirectoryCatalog is used (for the bin folder), MEF will evaluate all the assemblies in that folder to determine if any types are defined as composable parts (i.e. decorated with Export). If so, MEF will instantiate the part - e.g. the Logger property will be instantiated as a FlatFileLogger.

compositionContainer.ComposeParts(instance);
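End to end, the composition steps described in this post could be sketched as follows (a TypeCatalog stands in for the DirectoryCatalog("bin") so the example is self-contained; the Service and Composer class names are my illustration):

```csharp
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

public interface ILogger { void Log(string message); }

// Lives in the customer extension assembly dropped into bin.
[Export(typeof(ILogger))]
public class FlatFileLogger : ILogger
{
    public void Log(string message) { /* write to a flat file */ }
}

// The type to be built up - e.g. the WCF service implementation.
public class Service
{
    [Import(typeof(ILogger))]
    public ILogger Logger { get; set; }
}

public static class Composer
{
    public static Service Compose()
    {
        // TypeCatalog is used here instead of DirectoryCatalog("bin")
        // purely so the sketch runs without a real bin folder.
        var aggregateCatalog = new AggregateCatalog();
        aggregateCatalog.Catalogs.Add(new TypeCatalog(typeof(FlatFileLogger)));
        var compositionContainer = new CompositionContainer(aggregateCatalog);

        var instance = new Service();
        compositionContainer.ComposeParts(instance); // Logger is now a FlatFileLogger
        return instance;
    }
}
```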