Creating Personal Information Exchange (.pfx) Files from Separate Public and Private Key Files

This blog post forms part of a larger series of posts looking at setting up an SFTP Server for integration testing purposes.

Some Certificate Authorities (CAs) use different file formats to store public and private keys. For example, some CAs store the certificate’s private key in a Private Key (.pvk) file and store the certificate and public key in a .spc or .cer file. The makecert tool will also generate separate .cer (public key) and .pvk (private key) files. Where this is the case, you may need to merge the two files into a Personal Information Exchange (.pfx) file.

Imagine you have created a set of self-signed keys using the makecert command on the VS Developer Command Prompt (Server.cer is the public key and Server.pvk is the private key):

makecert -r -pe -n "CN=Modhul" -sky exchange Server.cer -sv Server.pvk

In order to create a PFX file, we need to merge the .cer (public key) and .pvk (private key) files using the following command, again on the VS Developer Command Prompt:

pvk2pfx.exe -pvk Server.pvk -spc Server.cer -pfx Server.pfx

The Server.pfx file is our newly created Personal Information Exchange (.pfx) file.
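
If the private key file was protected with a password when it was created, pvk2pfx also accepts -pi and -po switches (for the .pvk password and the password to set on the resulting .pfx file respectively; run pvk2pfx without arguments to see the full usage), along the lines of:

pvk2pfx.exe -pvk Server.pvk -pi <pvk-password> -spc Server.cer -pfx Server.pfx -po <pfx-password>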

Further information about the pvk2pfx tool can be found at: http://msdn.microsoft.com/en-us/library/windows/hardware/ff550672%28v=vs.85%29.aspx.

In the Pipeline

A few blog posts currently in the pipeline:

  • Using the new Azure Batch functionality to process 1.29 million records into Dynamics CRM 2013 Online.
  • Converting Windows Certificates, Private Keys, .pfx files into their corresponding OpenSSH and OpenSSL formats (part of a larger series on SFTP).
  • Configuring a Linux SFTP Server to help with SFTP integration testing (part of a larger series on SFTP).
  • Configuring the BizTalk SFTP Adapter to use key-based authentication (part of a larger series on SFTP).

Installing Redis Cache Locally in a Development Environment

I recently blogged about using the excellent Redis Cache – which is now the preferred Azure caching solution – for a recent CRM integration project.

In my development environment, I’m pointing at the Azure Redis Cache and while performance is fantastic, I recently noticed that the Chocolatey repository has an MS Open Tech build of Redis that I can run locally in my development environment.

I wondered whether I could easily use the Chocolatey version as a direct drop-in replacement for Azure Redis and, even better, eke more performance out of a local install, especially to increase the speed of my unit tests. The answer is ‘yes you can’ on both points.

Installing Redis via Chocolatey

With Chocolatey installed, we can go ahead and install Redis by issuing the extremely simple command from an Administrator Command Prompt:

Chocolatey - Redis Install Command
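
The screenshot shows the install command; given the package name used below (redis-64), it should be something along the lines of:

choco install redis-64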

As we’re using defaults, Chocolatey will install Redis into C:\ProgramData\chocolatey\lib\redis-64.2.8.17 (note that the version number might change for you if you try this with later versions) and a shim into the C:\ProgramData\chocolatey\bin directory (the shim is a link that points to the actual .exe in the lib directory when the installation package contains .exe files and not an MSI file):

Chocolatey - Redis Install Screenshot

Configure and Start Redis

Due to Redis’ dependence on the Linux fork() system call, the Windows version has to simulate fork() by moving the Redis heap into a memory mapped file that can be shared with a child process. If no hints are given on startup, Redis will create a default memory mapped file that is equal to the size of physical memory; there must be disk space available for this file in order for Redis to launch.

During fork() operations the total page file commit will max out at around:

(size of physical memory) + (2 * size of maxheap)

For instance, on a machine with 8GB of physical RAM, the max page file commit with the default maxheap size will be (8) + (2*8) GB, or 24GB.

If you don’t give any hints to Redis, you get an error similar to the following:

The Windows version of Redis allocates a large memory mapped file for sharing
the heap with the forked process used in persistence operations. This file
will be created in the current working directory or the directory specified by
the 'heapdir' directive in the .conf file. Windows is reporting that there is
insufficient disk space available for this file (Windows error 0x70).

You may fix this problem by either reducing the size of the Redis heap with
the --maxheap flag, or by moving the heap file to a local drive with sufficient
space.
Please see the documentation included with the binary distributions for more
details on the --maxheap and --heapdir flags.

Redis can not continue. Exiting.

To get around this limitation, specify the --maxheap flag when starting Redis, using a value that is appropriate for your machine:

redis-server --maxheap 1gb

which will successfully start Redis:

Redis Server - Started

Allowing us to connect to the local Redis server with a connection string similar to the following:

"localhost:6379,ssl=false"
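
For example, using the StackExchange.Redis API (from the StackExchange.Redis namespace), connecting to the local instance is a couple of lines; a minimal sketch:

// Connect to the local Redis instance (no SSL) and grab the default database
ConnectionMultiplexer connection = ConnectionMultiplexer.Connect("localhost:6379,ssl=false");
IDatabase cache = connection.GetDatabase();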

Note that Redis will create the memory mapped file on your file-system at %USERPROFILE%\AppData\Local\Redis, sized according to the value you specify with the --maxheap flag.

Redis Memory Mapped File

Shutting down the server (Ctrl+C in the command prompt window where Redis was started) deletes the file.

Performance Testing

So, what is the performance difference between Azure Redis and a local install of Redis? I created a simple console test app that would create 1000 cache items (integers) and then retrieve the same 1000 cache items; the cache is flushed before I execute each test.
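
For reference, the guts of the test app are nothing more than a timed loop of StringSet() and StringGet() calls; a rough sketch of what is being measured (not the exact test code, and cache is the IDatabase instance from the snippet above) looks like this:

// Time 1000 StringSet() calls followed by 1000 StringGet() calls (Stopwatch is in System.Diagnostics)
Stopwatch stopwatch = Stopwatch.StartNew();
for (int i = 0; i < 1000; i++)
{
    cache.StringSet("test-key-" + i, i);
}
for (int i = 0; i < 1000; i++)
{
    int value = (int)cache.StringGet("test-key-" + i);
}
stopwatch.Stop();
Console.WriteLine("Elapsed: {0}ms", stopwatch.ElapsedMilliseconds);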

The following results are based on the console test app running locally against my development VM.

Executing against a local Redis instance (all times in ms):

              Run 1    Run 2    Run 3
Iteration 1    2515     2799     2526
Iteration 2    2380     2285     2380
Iteration 3    2234     2703     2641

Avg: 2495

Executing against an Azure Redis (1Gb Standard Pricing Tier) instance (all times in ms):

              Run 1    Run 2    Run 3
Iteration 1   47955    45139    45725
Iteration 2   48549    47773    46422
Iteration 3   45311    49194    46144

Avg: 46912

I was quite shocked at just how slow the same test was against the Azure Redis instance (2.495 seconds vs. 46.912 seconds). So, to investigate whether the difference was down to network latency, I ran the same test on a basic A2 Azure VM (Windows Server 2012, 2 cores, 3.5GB memory) in the same region as the Azure Redis Cache:

              Run 1    Run 2    Run 3
Iteration 1    1211     1185     1186
Iteration 2    1439     1257     1245
Iteration 3    1343     1187     1196

Avg: 1249

The results indicate that calls from within the Azure platform to the Azure Redis Cache complete faster than calls to a simple install in my local dev environment (1.249 seconds from the Azure VM vs. 2.495 seconds locally). Kudos to Microsoft for such an excellent and performant service!

Capturing Custom Logs from Azure Worker Roles using Azure Diagnostics

In this post I’ll show you how to correctly configure diagnostics in an Azure Worker Role to push custom log files (NLog, Log4Net etc.) to Azure Storage using the in-built Azure Diagnostics Agent.

 

Configuring our Custom Logger – NLog

I’m not a massive fan of the recommended Azure Worker Role logging process, namely using the Trace.WriteLine() method, as I don’t feel it provides sufficient flexibility for my logging needs; I also think it looks crap when my code is liberally scattered with Trace.WriteLine() statements (code is art and all that).

NLog, on the other hand, provides all the flexibility I need, including log file layout formatting, log file archiving – with granularity down to one minute – and archive file clean-up, to name just a few. Having used it as the main logging tool on several projects, I feel completely at home with this particular library and want to use it within my Worker Role implementations.

I won’t go into how to add NLog (or any other logging framework you may use) to your project as there are loads of examples on the interweb; however, I will share with you my Azure Worker Role app.config file, which shows the configuration I have for my logging:


<?xml version="1.0" encoding="utf-8"?>
<configuration>
<configSections>
<section name="nlog" type="NLog.Config.ConfigSectionHandler, NLog" />
</configSections>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd&quot; xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"&gt;
<targets>
<target name="MessageWorker" xsi:type="File" fileName="logs\MessageWorker_Current.log" layout="${longdate} ${level:uppercase=true:padding=5} ${processid} ${message}" archiveFileName="logs\MessageWorker_{#}.txt" archiveEvery="Day" archiveNumbering="Date" archiveDateFormat="yyyy-MM-dd_HH-mm-ss" maxArchiveFiles="14" concurrentWrites="false" keepFileOpen="false" encoding="iso-8859-2" />
</targets>
<rules>
<logger name="*" minlevel="Trace" writeTo="MessageWorker" />
</rules>
</nlog>
</configuration>


The MessageWorker target instructs NLog to:

  • Write the (current) log file to ‘MessageWorker_Current.log‘ (the fileName property) with a particular log-file layout (the layout property);
  • Create archives every day (the archiveEvery property) with an archive filename of ‘MessageWorker_{#}.txt‘ (the archiveFileName property) – the ‘{#}’ in the archive filename is expanded to a date/time when an archive file is created (the archiveNumbering and archiveDateFormat properties);
  • Maintain a rolling 14-day window of archive files, deleting anything older (the maxArchiveFiles property);
  • Not perform concurrent writes (the concurrentWrites property) or keep files open (the keepFileOpen property), which I discovered helps the Azure Diagnostics Agent consistently copy log files to Blob Storage – your mileage may vary with these settings.

These configuration settings result in a number of files in our log directory – in the screenshot below I am archiving every minute:

Message Worker Logs

 

Define a Local Storage Resource

It is recommended that custom log files are written to Local Storage Resources on the Azure Worker Role VM. Local Storage Resources are reserved directories in the file system of the VM in which the Worker Role instance is running. Further information about Local Storage can be found online at MSDN.

In order to define Local Storage, we add a LocalResources section to the ServiceDefinition.csdef file and define our ‘CustomLogs’ Local Storage, as shown below. The cleanOnRoleRecycle attribute tells Azure whether to delete the Local Storage when the role is recycled (restarted or scaled up/down); setting it to false preserves it. The sizeInMB attribute tells Azure how much Local Storage to allocate on the VM.


<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="MessageWorker" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition&quot; schemaVersion="2014-06.2.4">
<WorkerRole name="MessageWorker.WorkerRole" vmsize="Small">
<Imports>
<Import moduleName="Diagnostics" />
<Import moduleName="RemoteAccess" />
<Import moduleName="RemoteForwarder" />
</Imports>
<ConfigurationSettings>
<Setting name="ServiceBusConnectionString" />
<Setting name="StorageConnectionString" />
<Setting name="MinimumLogLevel" />
</ConfigurationSettings>
<LocalResources>
<LocalStorage name="DiagnosticStore" cleanOnRoleRecycle="false" sizeInMB="8192" />
<LocalStorage name="CustomLogs" cleanOnRoleRecycle="false" sizeInMB="1024" />
</LocalResources>
</WorkerRole>
</ServiceDefinition>

On an Azure Worker Role VM, the Local Resources directory can be found at: C:\Resources\Directory\[GUID].[WorkerRoleName]

On a local development machine, the Local Resources directory can be found at: C:\Users\[USER]\AppData\Local\dftmp\Resources\[GUID]\directory\CustomLogs

 

Configure NLog to Write Log Files to the Local Storage Resource

When using Local Storage Resources, we don’t know the actual directory name for the resource (and therefore our log files) until runtime. As a result, we need to update our NLog Target’s FileName and ArchiveFileName properties as the Worker Role starts. To do this, we need a small helper to re-configure the targets:


using System.IO;
using NLog;
using NLog.Targets;

public static class LogTargetManager
{
    private static Logger logger = LogManager.GetCurrentClassLogger();

    public static void SetLogTargetBaseDirectory(string targetName, string baseDirectory)
    {
        logger.Debug("SetLogTargetBaseDirectory() - Log Target Name: {0}.", targetName);
        logger.Debug("SetLogTargetBaseDirectory() - (New) Base Directory: {0}.", baseDirectory);

        var logTarget = (FileTarget)LogManager.Configuration.FindTargetByName(targetName);
        if (logTarget != null)
        {
            // Capture the Target's current Filename and Archive Filename
            var currentTargetFilename = Path.GetFileName(logTarget.FileName.ToString().TrimEnd('\''));
            var currentTargetArchiveFilename = Path.GetFileName(logTarget.ArchiveFileName.ToString().TrimEnd('\''));

            // Re-base the Target's Filename and Archive Filename Directory to that supplied in 'baseDirectory'
            logTarget.FileName = Path.Combine(baseDirectory, currentTargetFilename);
            logTarget.ArchiveFileName = Path.Combine(baseDirectory, currentTargetArchiveFilename);

            logger.Debug("Logger Target '{0}' Filename re-based to: {1}", targetName, logTarget.FileName);
            logger.Debug("Logger Target '{0}' Archive Filename re-based to: {1}", targetName, logTarget.ArchiveFileName);
        }

        // Re-configure the existing loggers with the new logging target details
        LogManager.ReconfigExistingLoggers();
    }
}

This helper is passed the NLog Target to be updated and the directory path of the Local Storage Resource that is made available at runtime:

  • First, we retrieve the FileTarget from our NLog configuration based on the supplied Target name.
  • Next, we capture the Target’s current log filename and archive filename (stripping a trailing apostrophe which appears to come from somewhere).
  • We then combine the current log filename and archive filename with the supplied base directory (which contains the Local Storage directory passed to this method).
  • Finally, we call ReconfigExistingLoggers() to re-configure the existing loggers and activate these changes.

This helper is called within the Worker Role’s OnStart() method:


public class MessageWorker : RoleEntryPoint
{
    /* This GIST only demonstrates the OnStart() method. */
    public override bool OnStart()
    {
        Trace.WriteLine("Starting MessageWorker Worker Role Instance…");

        // Initialize Worker Role Configuration
        InitializeConfigurationSettings();

        // Re-configure Logging (update logging to runtime configuration settings)
        LogTargetManager.SetLogTargetBaseDirectory("MessageWorker", RoleEnvironment.GetLocalResource("CustomLogs").RootPath);
        LogLevelManager.SetMinimimLogLevel(_workerRoleConfiguration.MinimumLogLevel);

        // Formally start the Worker Role Instance
        logger.Info("Starting Worker Role Instance (v{0})…", Assembly.GetExecutingAssembly().GetName().Version.ToString());

        // Log Configuration Settings
        LogConfigurationSettings();

        // Set the maximum number of concurrent connections
        ServicePointManager.DefaultConnectionLimit = 12;

        return (base.OnStart());
    }
}

Within OnStart(), we call the SetLogTargetBaseDirectory() method, passing the name of the NLog Target from our app.config file and the directory of the Local Storage Resource, obtained by calling:

RoleEnvironment.GetLocalResource("CustomLogs").RootPath

where “CustomLogs” is the name of the Local Storage defined in the ServiceDefinition.csdef file (see the earlier Define a Local Storage Resource section). RoleEnvironment is a class located in the Microsoft.WindowsAzure.ServiceRuntime namespace.

 

Configuring Azure Diagnostics

Our final step in the process is to configure diagnostics, which instructs the Azure Diagnostics Agent what diagnostics data should be captured and where it should be stored. This configuration can be performed either declaratively (via the diagnostics.wadcfg file) or through code in the OnStart() method of the Worker Role. Be aware that there is an order of precedence for diagnostics configuration – take a look at Diagnostics Configuration Mechanisms and Order of Precedence for more information.


<?xml version="1.0" encoding="utf-8"?>
<DiagnosticMonitorConfiguration configurationChangePollInterval="PT5M" overallQuotaInMB="4096" xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration"&gt;
<DiagnosticInfrastructureLogs scheduledTransferPeriod="PT5M" />
<Directories scheduledTransferPeriod="PT1M">
<CrashDumps container="wad-crash-dumps" />
<DataSources>
<DirectoryConfiguration container="wad-nlog2" directoryQuotaInMB="512">
<LocalResource relativePath="." name="CustomLogs" />
</DirectoryConfiguration>
</DataSources>
</Directories>
<Logs bufferQuotaInMB="1024" scheduledTransferPeriod="PT15M" scheduledTransferLogLevelFilter="Error" />
<PerformanceCounters bufferQuotaInMB="512" scheduledTransferPeriod="PT15M">
<PerformanceCounterConfiguration counterSpecifier="\Memory\Available MBytes" sampleRate="PT3M" />
<PerformanceCounterConfiguration counterSpecifier="\Processor(_Total)\% Processor Time" sampleRate="PT1S" />
<PerformanceCounterConfiguration counterSpecifier="\Memory\Committed Bytes" sampleRate="PT1S" />
<PerformanceCounterConfiguration counterSpecifier="\LogicalDisk(_Total)\Disk Read Bytes/sec" sampleRate="PT1S" />
<PerformanceCounterConfiguration counterSpecifier="\Process(WaWorkerHost)\% Processor Time" sampleRate="PT1S" />
<PerformanceCounterConfiguration counterSpecifier="\Process(WaWorkerHost)\Private Bytes" sampleRate="PT1S" />
<PerformanceCounterConfiguration counterSpecifier="\Process(WaWorkerHost)\Thread Count" sampleRate="PT1S" />
</PerformanceCounters>
<WindowsEventLog bufferQuotaInMB="1024" scheduledTransferPeriod="PT30M" scheduledTransferLogLevelFilter="Error">
<DataSource name="Application!*" />
</WindowsEventLog>
</DiagnosticMonitorConfiguration>

The important lines for our custom logging solution are found in the <DataSources> element:

  • <DirectoryConfiguration> details the Azure Storage Container where the custom log files should be written; and
  • <LocalResource> details the Local Storage Resource where the log files reside.

 

Bringing it all Together

Let’s recap the various moving parts we needed to get this up and running:

  • Configure the app.config file for your Worker Role, adding the required NLog configuration. Check that this is working locally before trying to push the settings to the cloud.
  • Define a Local Storage Resource in the ServiceDefinition.csdef file for your Cloud Service Project where log files will be written – in this blog post we have called our resource ‘CustomLogs’.
  • Add functionality to update the NLog target’s filename and archive filename at runtime by querying the Local Resource’s Root Path in the Worker Role’s OnStart() method.
  • Add a DataSources section to the diagnostics.wadcfg file instructing the Azure Diagnostics Agent to push custom log files from the Local Storage Resource to the specified Azure Blob Container.

With all of the required pieces in place we can deploy to our Cloud Service on Azure as usual, either through Visual Studio or the Management Portal. Give the service time to spin-up and we should hopefully start to see our log files appear in the specified container within Azure Storage – in the screenshot below, log files are being archived once a day and copied to the Storage Container at approx. 2 mins past midnight every morning:

Custom Log Files Copied to Azure Storage Container

 

Gotchas

In closing, there is one gotcha that I would like to highlight. When a Cloud Service Project is deployed to a Cloud Service in Azure, the diagnostics configuration (derived from the diagnostics.wadcfg file) is written to a control configuration blob in the wad-control-container container, as specified in the Diagnostics Connection String (setting Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString) from the deployed ServiceConfiguration.Cloud.cscfg configuration file.

Azure PaaS Control Configuration Blob (wad-control-container)

The purpose of this file is detailed as follows in the MSDN article Diagnostics Configuration Mechanisms and Order of Precedence:

A control configuration blob will be created for each role instance whenever a role without a blob starts. The wad-control-container blob has the highest precedence for controlling behavior and any changes to the blob will take effect the next time the instance polls for changes. The default polling interval is once per minute.

In order for any changes that you make to the diagnostics.wadcfg file to take effect, you will need to delete this control configuration blob and have Azure re-create it when the role re-starts. Otherwise, Azure will continue to use the outdated diagnostics configuration from this control configuration blob and log-files won’t be copied to Azure Storage.
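
You can delete the blob(s) with any storage explorer tool, or programmatically; the following is a rough sketch using the Azure Storage client library rather than my exact approach (the class and method names are mine, and diagnosticsConnectionString is a placeholder for the Diagnostics Connection String mentioned above):

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class DiagnosticsConfigurationCleaner
{
    // Illustrative only: delete the control configuration blobs so that Azure
    // re-creates them from diagnostics.wadcfg when the role instances restart
    public static void DeleteControlConfigurationBlobs(string diagnosticsConnectionString)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(diagnosticsConnectionString);
        CloudBlobClient client = account.CreateCloudBlobClient();
        CloudBlobContainer container = client.GetContainerReference("wad-control-container");

        foreach (IListBlobItem item in container.ListBlobs(null, true))
        {
            CloudBlockBlob blob = item as CloudBlockBlob;
            if (blob != null)
            {
                blob.DeleteIfExists();
            }
        }
    }
}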

Serializing Custom .Net Types for use with the Azure Redis Cache

In my previous post I looked at a real-world example of using the new Azure Redis Cache. One thing that was missing was the storing of custom .Net Types as cache values, which we’ll look at here.

The RedisValue Type

The Microsoft Azure documentation recommends using the StackExchange.Redis API for accessing the Redis Cache. This API stores cache values within the RedisValue type, which has a number of implicit conversions from primitive .Net types (Int32, Int64, Double, Boolean etc.), but conversions from the RedisValue type back to a primitive .Net type need to be made explicitly.

For example, setting an int as the cache value is implicit, while retrieving an int from a cache value needs to be cast (the following screenshot is from the MSDN documentation):

Cache Serializer - Get and Set of Primitive Types
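
In code, that pattern boils down to something like the following (cache being a StackExchange.Redis IDatabase instance):

// Setting an int is implicit; getting it back requires an explicit cast from RedisValue
cache.StringSet("counter", 99);
int counter = (int)cache.StringGet("counter");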

In addition to the implicit/explicit conversion of primitive data types, the RedisValue type can store binary data as the cache value. This means that we can serialize a custom .Net type into a byte array and pass that as the cache value within the StringSet() method; we can then retrieve that cache value using the StringGet() method and deserialize the byte array back to the custom type.

To complete the serialization/deserialization, we need a helper class, which is shown below. This class is inspired by the sample on MSDN (Work with .Net objects in the cache) but is tweaked slightly:


/*
* A simple generic Serialization Helper allowing custom .Net types to be stored
* in the Azure Redis Cache. Inspired by the sample at:
* http://msdn.microsoft.com/en-us/library/azure/dn690524.aspx#Objects
*/
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;
namespace AzureRedisCacheSample
{
public static class CacheSerializer
{
public static byte[] Serialize(object o)
{
if (o == null)
{
return null;
}
BinaryFormatter binaryFormatter = new BinaryFormatter();
using (MemoryStream memoryStream = new MemoryStream())
{
binaryFormatter.Serialize(memoryStream, o);
byte[] objectDataAsStream = memoryStream.ToArray();
return objectDataAsStream;
}
}
public static T Deserialize<T>(byte[] stream)
{
if (stream == null)
return (default(T));
BinaryFormatter binaryFormatter = new BinaryFormatter();
using (MemoryStream memoryStream = new MemoryStream(stream))
{
T result = (T)binaryFormatter.Deserialize(memoryStream);
return result;
}
}
}
}

Note that during deserialization, if the byte array is null, a default instance of type T will be returned. This caught me out when I was initially testing, so check that you are getting back the value you expect.

Using these helpers with the StackExchange.Redis API’s StringSet() and StringGet() methods to store custom .Net types makes life really easy:


var cacheItem = CacheSerializer.Deserialize<Guid>(cache.StringGet(cacheKey));
if (cacheItem == Guid.Empty)
{
// Cache item doesn't exist, retrieve Guid from source system.
// Add (serialized) Guid to the Azure Redis Cache with a lifespan of 90 minutes
cache.StringSet(cacheKey, CacheSerializer.Serialize(cacheItem), TimeSpan.FromMinutes(90));
}

To retrieve a custom type, we call the StringGet() method, passing the cache key, and deserialize the returned byte array into the .Net Guid type; in order to determine whether the cache item was found, I check whether the returned cacheItem equals Guid.Empty, which is the value returned by default(T) for the Guid type from the Deserialize() method.

To store a custom .Net type (in this case a Guid), we call the StringSet() method and, instead of passing a primitive type as the cache value, we serialize our cache item and pass the resulting byte array.

Azure Redis Cache – A Real World Example

Read the second post in this series: Serializing Custom .Net Types for use with the Azure Redis Cache.

I spend a lot of my time at the moment architecting and writing integration code against Dynamics CRM 2011 and 2013. My current project is based on Azure Worker Roles and sucks CRUD messages from an SFTP Server (yep, FTP), maps them to CRM Entities before finally calling the appropriate CRUD operation on the XRM Service Proxy with the new Entity instance.

I need to repeatedly retrieve an Entity Reference to common ‘reference’ data on Create and Update – data that is loaded once and hardly ever changes (such as Products, Agent Codes, Cost Centres etc.) At the moment, I’m hitting CRM each and every time I need to grab the Entity Reference, which isn’t particularly efficient, slows down the overall solution and will start to incur unnecessary Azure data-egress charges at some point in the life-cycle of the application.

Azure Redis Cache

The Azure Redis Cache is the new cache offering from Microsoft which supersedes the Managed Service Cache (an instance of the AppFabric Cache) and In-Role Cache (a cache based on Cloud Services).

The Azure Redis Cache is based on the open-source Redis cache, an advanced key-value cache and store. More information about the project can be found at http://redis.io/.

The Azure offering comes in two flavours: Basic (a single-node deployment, primarily for dev/test) and Standard (a replicated cache in a two-node primary/secondary configuration, with automatic replication between the two nodes (managed by the Azure Fabric) and a high-availability SLA, primarily for production use). For the purposes of this blog-post I am using a Basic instance.

A Real World Example

I use an Entity Reference Provider in my code which implements a bunch of simple methods to retrieve the Guid of the Entity I’m interested in – in the example below, I am retrieving the Guid to the Product Entity:

RetrieveProductEntityReference() Method - Pre Cache Implementation

In order to cache-ify this method using the Azure Redis Cache we first need to spin up a new instance of the Redis Cache in the Azure Portal (see Getting Started with Azure Redis Cache in the Azure Documentation for further information).

We then create a connection to the Redis Cache using the ConnectionMultiplexer.Connect() method. Be aware that calling this method is expensive and the resulting object should be re-used throughout your code, which is why I create an instance in the constructor for the provider:

Entity Reference Provider - Constructor
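
The constructor in the screenshot amounts to something like the following sketch (the class and field names here are illustrative, not the exact code):

public class EntityReferenceProvider
{
    // Create the (expensive) connection once and re-use it for the lifetime of the provider
    private readonly ConnectionMultiplexer _cacheConnection;

    public EntityReferenceProvider(string cacheConnectionString)
    {
        _cacheConnection = ConnectionMultiplexer.Connect(cacheConnectionString);
    }
}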

That’s all the setup required to use the Azure Redis Cache and we can now concentrate on retrieving and storing values in our cache. At a high level, the approach will be:

  • Attempt to retrieve a Guid for the referenced Entity from the cache based on a specified cache key.
  • If nothing is found in the cache for the key, we will retrieve the value from XRM itself and then populate the cache with that value.
  • We will finally return the Guid (retrieved either from the Cache or XRM).

The full implementation of the reworked RetrieveProductEntityReference() method with a cache implementation is shown below.

Note that I am keeping things simple here and converting my Guid to a string and vice versa (I plan on showing how to cache custom types in a separate post).

With a connection to the Cache we need to retrieve a reference to the cache database. The GetDatabase() method is a lightweight method call and can be performed as required:

RetrieveProductEntityReference - Breakdown-1

We then define a cache key and attempt to get the item from the cache that has that key:

RetrieveProductEntityReference - Breakdown-2

If the cache item is null, there was no item in the cache with the supplied key (either it has never been set, or a previous cache item with that key has expired).

RetrieveProductEntityReference - Breakdown-3

If that is the case, we go off and retrieve the actual Entity Guid from XRM and assign the Guid to the (previously defined) cacheItem:

RetrieveProductEntityReference - Breakdown-4

We then add the cache item (containing the newly retrieved Guid) to the cache using the cache.StringSet() method, passing the previously defined cache key, the item and a timespan which defines when the item will expire in the cache (the expiry timespan is optional).

Finally, we return the Guid contained within the cache item which has either been retrieved directly from XRM or the cache itself:

RetrieveProductEntityReference - Breakdown-6
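
Putting those steps together, the re-worked method looks roughly like the sketch below (the XRM look-up is an illustrative placeholder, and the Guid is stored in the cache as a string as mentioned above):

public Guid RetrieveProductEntityReference(string productCode)
{
    // GetDatabase() is a lightweight call and can be made as required
    IDatabase cache = _cacheConnection.GetDatabase();

    // Attempt to retrieve the cache item for this product
    string cacheKey = "Product_" + productCode;
    RedisValue cacheItem = cache.StringGet(cacheKey);

    if (cacheItem.IsNullOrEmpty)
    {
        // Nothing in the cache (never set, or expired) - retrieve the Entity Guid from XRM
        Guid productId = RetrieveProductGuidFromXrm(productCode); // illustrative XRM call
        cacheItem = productId.ToString();

        // Add the item to the cache with an (optional) expiry timespan
        cache.StringSet(cacheKey, cacheItem, TimeSpan.FromMinutes(90));
    }

    // Return the Guid, whether it came from the cache or from XRM
    return Guid.Parse(cacheItem);
}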

A Performance Improvement?

Just to give you an idea of numbers, using a very small test sample of 50 iterations over this method, retrieving from the cache takes (on average) 32ms, while hitting the XRM Service Proxy takes 209ms (we are hitting Dynamics CRM 2013 Online).

Putting this into context, if I didn’t use the cache, it would take me 10.45 seconds to perform 50 iterations of this method, vs. 1.8 seconds for the cache implementation (which includes the initial hit to retrieve the data from XRM).

RESTfully GETting JSON Formatted Data with BizTalk 2013

With the new WCF-WebHttp Adapter that shipped in BizTalk Server 2013 we now have the ability to specify which HTTP verbs to use – this opens up the possibility of invoking RESTful style applications.

In this post, I’ll investigate how we can RESTfully invoke an endpoint that exposes data via JSON using the GET verb – we’ll be ‘RESTfully GETting JSON Formatted Data with BizTalk 2013’; I’ll be retrieving stock tickers from Yahoo! Finance in JSON format.

Send/Receive Adapter

The RESTful endpoint is invoked via a solicit/response Send Port configured to use the WCF-WebHttp Adapter, providing the Yahoo! Finance URL and a single HTTP method of ‘GET’:

WCF-WebHttp Adapter Configuration

 

JSON Decode Receive Pipeline Component

The adapter passes the JSON response to a custom Receive Pipeline, which contains a newly developed ‘JsonDecode’ component. This component invokes the fantastic Json.Net package (see http://james.newtonking.com/pages/json-net.aspx for more information) to do the required heavy lifting and convert the JSON into an Xml representation. The Execute() method of the Pipeline Component looks something like this:

JSON Decode Pipeline Component Source Code
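
The pipeline component plumbing aside, the heavy lifting boils down to a couple of Json.Net calls; a simplified sketch of the conversion (the class and method names are mine, and applying the target namespace to the result is omitted here):

using System.Xml;
using Newtonsoft.Json;

public static class JsonToXmlConverter
{
    // Simplified sketch: convert the JSON payload into an XmlDocument,
    // wrapping the result in the root node name configured on the component
    public static XmlDocument ConvertJsonToXml(string json, string rootNodeName)
    {
        return JsonConvert.DeserializeXmlNode(json, rootNodeName);
    }
}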

Once we have an Xml representation of the JSON response, we add a Root Node (and associated namespace) so we can leverage our usual magic BizTalk sauce. The root node name and namespace are exposed as properties via the Pipeline Component (think WCF-SQL Adapter):

JSON Decode Pipeline Component Configuration

Simply create a schema that represents your new message and voilà, you’re receiving JSON data and processing it as an Xml message within BizTalk; you’ll also have access to all of the usual property promotion/routing capabilities etc.

JSON in, Xml Out

Now that we have configured our solution, what do the input and output messages look like?

Something like this for the JSON response to our GET request:

JSON Response Data

and our corresponding Xml representation:

JSON Data as Xml Message to be Consumed by BizTalk

Feel free to download a copy of the solution if you’re interested –> Modhul.PipelineComponents.JsonDecode.zip. Please note that the sample code is for demonstration purposes and is NOT production-quality code.

Stay tuned for Part 2, where I will look at POSTing JSON data from BizTalk.

 

Beware of windows.old when upgrading to Windows 8

I have just taken the plunge and upgraded from Windows 7 to Windows 8 (partly because of the £25 upgrade offer). After the upgrade I noticed a severe lack of space on my otherwise roomy C: drive.

A quick check in Windows Explorer reveals a windows.old directory which contains my old Windows 7 ‘Windows’, Program Files, Users and PerfLogs directories – and it’s just short of 20GB. I’ll be shifting it to an external backup drive shortly…

windows.old

Including Custom Databases in the Backup BizTalk Server Job

I’m sure it’s common knowledge that other databases can be backed up using the Backup BizTalk Server Job, meaning that the transactional consistency applied to the BizTalk databases can also cover your own custom databases.

I thought it was pretty easy to include custom databases: simply add an entry to the adm_OtherBackupDatabases table in the BizTalkMgmtDb database for each database to be backed up – in my case, the BizTalk RosettaNet Accelerator (BTARN) databases.
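
For reference, each entry amounts to a row along the following lines (the column names are those described in the MSDN procedure for backing up custom databases, so check your own BizTalkMgmtDb; the server names are placeholders):

INSERT INTO [BizTalkMgmtDb].[dbo].[adm_OtherBackupDatabases]
    (DefaultDatabaseName, DatabaseName, ServerName, BTSServerName)
VALUES
    ('BTARNARCHIVE', 'BTARNARCHIVE', '<SQL Server Name>', '<BizTalk Server Name>')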

I thought it was that simple, until I tried it for the first time in real life and was greeted with the following error from SQL Agent (the interesting part is the ‘Could not find stored procedure’ error buried in the output):

Executed as user: NT AUTHORITYSYSTEM. Processed 17520 pages for database ‘BizTalkDTADb’, file ‘BizTalkDTADb’ on file 1. [SQLSTATE 01000] (Message 4035)  Processed 2 pages for database ‘BizTalkDTADb’, file ‘BizTalkDTADb_log’ on file 1. [SQLSTATE 01000] (Message 4035)  BACKUP DATABASE successfully processed 17522 pages in 10.454 seconds (13.093 MB/sec). [SQLSTATE 01000] (Message 3014)  Processed 2264 pages for database ‘BizTalkMgmtDb’, file ‘BizTalkMgmtDb’ on file 1. [SQLSTATE 01000] (Message 4035)  Processed 3 pages for database ‘BizTalkMgmtDb’, file ‘BizTalkMgmtDb_log’ on file 1. [SQLSTATE 01000] (Message 4035)  BACKUP DATABASE successfully processed 2267 pages in 2.284 seconds (7.751 MB/sec). [SQLSTATE 01000] (Message 3014)  Processed 3280 pages for database ‘BizTalkMsgBoxDb’, file ‘BizTalkMsgBoxDb’ on file 1. [SQLSTATE 01000] (Message 4035)  Processed 2 pages for database ‘BizTalkMsgBoxDb’, file ‘BizTalkMsgBoxDb_log’ on file 1. [SQLSTATE 01000] (Message 4035)  BACKUP DATABASE successfully processed 3282 pages in 2.156 seconds (11.890 MB/sec). [SQLSTATE 01000] (Message 3014)  Processed 256 pages for database ‘BizTalkRuleEngineDb’, file ‘BizTalkRuleEngineDb’ on file 1. [SQLSTATE 01000] (Message 4035)  Processed 1 pages for database ‘BizTalkRuleEngineDb’, file ‘BizTalkRuleEngineDb_log’ on file 1. [SQLSTATE 01000] (Message 4035) Could not find stored procedure ‘BTARNARCHIVE.dbo.sp_BackupBizTalkFull’. [SQLSTATE 42000] (Error 2812) BACKUP DATABASE successfully processed 257 pages in 0.326 seconds (6.158 MB/sec). [SQLSTATE 01000] (Error 3014).  The step failed.

‘Could not find stored procedure ‘BTARNARCHIVE.dbo.sp_BackupBizTalkFull’’. Oh really?

It turns out that it isn’t as simple as adding a custom database to the adm_OtherBackupDatabases table: a number of stored procedures and tables also need to be added to your custom database for the backup job to work. These can easily be added by executing the two SQL scripts Backup_Setup_All_Procs.sql and Backup_Setup_All_Tables.sql against your custom databases (the scripts can be found in the ‘[BizTalk Installation Directory]\Schema’ directory). You may then want to force a full backup to ensure you have full backups across all of your databases:

BEGIN TRAN
UPDATE [BizTalkMgmtDb].[dbo].[adm_ForceFullBackup] SET ForceFull = 1
COMMIT TRAN

Re-run your backup job and you will see your custom databases included in the backup.

The requirement to include additional stored procedures and tables in custom databases is well worth remembering, especially if you are deploying (or re-deploying) custom databases that don’t include these artifacts by default.

The procedure is detailed in more depth on MSDN.

BTARN ‘Service Content’ Error in the RNDisAssembler Component

We came across the following error late last night which was a bit of a show-stopper. We were trying to load a custom PIP (specifically a PIDX OrderChange), but kept hitting this issue time and time again:

Event Type: Error
Event Source: BizTalk Accelerator for RosettaNet
Event Category: None
Event ID: 4096
Date: 18/08/2010
Time: 10:33:47
User: N/A
Computer: [COMPUTER NAME]
Description:
Source module:
RNDisAssembler
Correlation information:
Description:
Receive pipeline rejected incoming message due to the following RNIF exception:
UNP.SCON.VALERR : A failure occurred while validating the service content.
Details:

At first I didn’t quite understand the cause of the error – the PIDX OrderChange message contained within the Service Content was valid (as far as I was aware), all of the other messages within the payload looked correct and it was 3am….

It turns out that the RNDisassembler does in fact attempt to validate the message contained within the Service Content against a deployed schema, just like the standard XmlDisassembler. The message that our trading partner was sending did not validate and hence the RosettaNet Accelerator threw this error message; once we had corrected the schema and redeployed, the error went away.

This is certainly one to be aware of if you are developing custom PIPs to use with the RosettaNet Accelerator: ensure that the message in the Service Content validates against your custom PIP schema!