Creating Personal Information Exchange (.pfx) Files from Separate Public and Private Key Files

This blog post forms part of a larger series of posts looking at setting up an SFTP server for integration testing purposes.

Some Certificate Authorities (CAs) use different file formats to store public and private keys. For example, some CAs store the certificate's private key in a Private Key (.pvk) file and store the certificate and public key in a .spc or .cer file. The makecert tool also generates separate .cer (public key) and .pvk (private key) files. Where this is the case, you may need to merge the two files into a single Personal Information Exchange (.pfx) file.

Imagine you have created a set of self-signed keys using the makecert command on the VS Developer Command Prompt (Server.cer is the public key and Server.pvk is the private key):

makecert -r -pe -n "CN=Modhul" -sky exchange -sv Server.pvk Server.cer

In order to create a PFX file, we need to merge the .cer (public key) and .pvk (private key) files using the following command, again on the VS Developer Command Prompt:

pvk2pfx.exe -pvk Server.pvk -spc Server.cer -pfx Server.pfx

The Server.pfx file is our newly created Personal Information Exchange (.pfx) file.
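If your .pvk file is password-protected, pvk2pfx can also take that password and set one on the resulting .pfx. The -pi and -po flags are part of the tool; the password values below are obviously placeholders:

pvk2pfx.exe -pvk Server.pvk -spc Server.cer -pfx Server.pfx -pi <pvk-password> -po <pfx-password>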

Further information about the pvk2pfx tool can be found in the tool's MSDN documentation.

In the Pipeline

A few blog posts currently in the pipeline:

  • Using the new Azure Batch functionality to process 1.29 million records into Dynamics CRM 2013 Online.
  • Converting Windows Certificates, Private Keys, .pfx files into their corresponding OpenSSH and OpenSSL formats (part of a larger series on SFTP).
  • Configuring a Linux SFTP Server to help with SFTP integration testing (part of a larger series on SFTP).
  • Configuring the BizTalk SFTP Adapter to use key-based authentication (part of a larger series on SFTP).

Installing Redis Cache Locally in a Development Environment

I recently blogged about using the excellent Redis Cache – now the preferred Azure caching solution – on a CRM integration project.

In my development environment I point at the Azure Redis Cache, and while performance is fantastic, I recently noticed that Chocolatey has an MS Open Tech build of Redis in its repository that I can run locally.

I wondered whether I could easily use the Chocolatey version as a direct drop-in replacement for Azure Redis and, even better, eke out more performance from a local install, especially to increase the speed of my unit tests. The answer is ‘yes you can’ on both counts.

Installing Redis via Chocolatey

With Chocolatey installed, we can go ahead and install Redis by issuing the extremely simple command from an Administrator Command Prompt:

Chocolatey - Redis Install Command
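At the time of writing, the MS Open Tech build was published under the package id redis-64 (worth verifying with choco search redis if the screenshot above differs), so the command is simply:

choco install redis-64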

As we’re using defaults, Chocolatey will install Redis into C:\ProgramData\chocolatey\lib\redis- (note that the version number might differ if you try this with a later release) and place a shim in the C:\ProgramData\chocolatey\bin directory (the shim is a link pointing to the actual .exe in the lib directory, used when the installation package contains .exe files rather than an MSI):

Chocolatey - Redis Install Screenshot

Configure and Start Redis

Due to Redis’ dependence on the Linux fork() system call, the Windows version has to simulate fork() by moving the Redis heap into a memory mapped file that can be shared with a child process. If no hints are given on startup, Redis will create a default memory mapped file that is equal to the size of physical memory; there must be disk space available for this file in order for Redis to launch.

During fork() operations the total page file commit will max out at around:

(size of physical memory) + (2 * size of maxheap)

For instance, on a machine with 8 GB of physical RAM, the max page file commit with the default maxheap size will be (8) + (2 * 8) GB, or 24 GB.
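That arithmetic can be sanity-checked in a couple of lines (a sketch; the 8 GB figure is just the example above, not a measured value):

```python
physical_gb = 8                            # physical RAM in the example
maxheap_gb = physical_gb                   # default maxheap == size of physical memory
commit_gb = physical_gb + 2 * maxheap_gb   # formula from above
print(commit_gb)                           # 24
```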

If you don’t give any hints to Redis, you get an error similar to the following:

The Windows version of Redis allocates a large memory mapped file for sharing
the heap with the forked process used in persistence operations. This file
will be created in the current working directory or the directory specified by
the 'heapdir' directive in the .conf file. Windows is reporting that there is
insufficient disk space available for this file (Windows error 0x70).

You may fix this problem by either reducing the size of the Redis heap with
the --maxheap flag, or by moving the heap file to a local drive with sufficient
space.

Please see the documentation included with the binary distributions for more
details on the --maxheap and --heapdir flags.

Redis can not continue. Exiting.

To get around this limitation, specify the --maxheap flag when starting Redis, using a value appropriate to your machine:

redis-server --maxheap 1gb

which will successfully start Redis:

Redis Server - Started

Allowing us to connect to the local Redis server with a connection string similar to the following:

localhost:6379
Note that Redis will create the memory mapped file on your file system at %USERPROFILE%\AppData\Local\Redis, sized according to the value you specify with the --maxheap flag.

Redis Memory Mapped File
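If the drive holding that default location is short on space, the error message above also mentions a heapdir directive; the same build accepts it on the command line. The path here is just an example:

redis-server --maxheap 1gb --heapdir D:\RedisHeap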

Shutting down the server (Ctrl+C in the command prompt window where Redis was started) deletes the file.

Performance Testing

So, what is the performance difference between Azure Redis and a local install of Redis? I created a simple console test app that creates 1000 cache items (integers) and then retrieves the same 1000 cache items; the cache is flushed before each test is executed.
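The original harness was a console app; as a rough sketch of the same loop in Python (assuming the third-party redis-py client for the real thing – the harness itself only needs get/set, so any client object with those methods works):

```python
import time

def run_benchmark(client, n=1000):
    """Write n integers to the cache, read them back, return elapsed ms."""
    start = time.perf_counter()
    for i in range(n):
        client.set(f"perf:{i}", i)                 # n cache writes
    for i in range(n):
        assert int(client.get(f"perf:{i}")) == i   # n cache reads
    return (time.perf_counter() - start) * 1000.0

# Against a local instance (hypothetical connection details):
# import redis
# client = redis.StrictRedis(host="localhost", port=6379)
# client.flushdb()                                 # flush before each test
# print(run_benchmark(client))
```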

The following results are based on the console test app running locally against my development VM.

Executing against a local Redis instance (all times in ms):

             Run 1  Run 2  Run 3
Iteration 1   2515   2799   2526
Iteration 2   2380   2285   2380
Iteration 3   2234   2703   2641

Avg: 2495

Executing against an Azure Redis (1 GB Standard pricing tier) instance (all times in ms):

             Run 1  Run 2  Run 3
Iteration 1  47955  45139  45725
Iteration 2  48549  47773  46422
Iteration 3  45311  49194  46144

Avg: 46912

I was quite shocked at just how slow the same test was against the Azure Redis instance (2.495 seconds vs. 46.912 seconds). So, to investigate whether the difference was down to network latency, I ran the same test on a basic A2 Azure VM (Windows Server 2012, 2 cores, 3.5 GB memory) in the same region as the Azure Redis Cache:

             Run 1  Run 2  Run 3
Iteration 1   1211   1185   1186
Iteration 2   1439   1257   1245
Iteration 3   1343   1187   1196

Avg: 1249

The results indicate that executing from within the Azure platform against an Azure Redis cache is faster than running against a simple install in my local dev environment (2.495 seconds vs. 1.249 seconds). Kudos to Microsoft for such an excellent and performant service!