Network Port Requirements for Windows Server System

Came across Microsoft Knowledge Base Article 832017 – Service overview and network port requirements for the Windows Server system – today while researching the firewall ports required for UNC share access. A good one to have in your back pocket if you ever need to know which port is required to allow an obscure protocol such as the NetBIOS datagram service.

The New CPU Bottleneck

Over on the Pluralsight blog, Joe Hummel talks about supercomputing in 2007 and some of the worrying problems the chip manufacturers are starting to encounter as we move to dual-, quad- (and above) cores.

In a nutshell, the chip manufacturers seem to have hit a brick wall in terms of CPU speed (levelling off at around 3GHz) and are therefore focusing on the number of cores on a chip. However, compiler optimisation has brought us to a point with current chip technology where optimised code needs between 16GB/s and 24GB/s of bandwidth to memory, which simply doesn’t exist (even in high-end corporate servers). As a result, CPUs spend a lot of time hanging around waiting for data to come from RAM or cache; factor in dual-, quad- or the new range of eight-core processors and you’ve got one massive waste of CPU cycles waiting for data to arrive from memory (the cost of each memory level read is roughly a factor of 10, so CPU to L1 is 10 cycles, CPU to L2 is 100, CPU to L3 is 1,000, and CPU to RAM is 10,000 cycles).

Hummel argues that optimising compilers shouldn’t just look at reducing the number of cycles needed to accomplish a task, but should also look at how best to use multi-core technology. One such trick might be to have one core reading data into the cache and a second performing the compute functions, swapping roles once there is no more data. Initial calculations suggest that performance increases of between 1.5x and 1.7x are possible using this method.
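To make the trick concrete, here’s a minimal single-core analogue in C – a sketch only, where BLOCK, process_block and the GCC/Clang __builtin_prefetch intrinsic are my own illustrative assumptions rather than anything from Hummel’s talk. It issues prefetch hints for the next chunk of data so that it is travelling towards the cache while the current chunk is being processed:

#include <stddef.h>

#define BLOCK 4096                    /* doubles per chunk - illustrative */

void process_block(const double *p, size_t n);   /* the real computation */

void process_all(const double *data, size_t n)
{
    for (size_t i = 0; i < n; i += BLOCK) {
        /* Hint the CPU to start fetching the next chunk (8 doubles per
           64-byte cache line) while we compute over the current one. */
        if (i + BLOCK < n)
            for (size_t j = 0; j < BLOCK && i + BLOCK + j < n; j += 8)
                __builtin_prefetch(&data[i + BLOCK + j]);

        process_block(&data[i], (n - i < BLOCK) ? n - i : BLOCK);
    }
}

In Hummel’s two-core version a dedicated core would do the fetching concurrently rather than interleaving it with the computation, but the principle – hide memory latency behind useful work – is the same.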

If you’re interested in hardware and all that stuff, it’s well worth a read (hey, the Pluralsight blog is worth a read just for the great BizTalk content, if nothing else!).

Microsoft’s SOA & BPM Conference lands in the UK

The Microsoft SOA & BPM Conference finally landed at the Microsoft Campus in Reading yesterday, and I had the opportunity to represent Edenbrook at the event. Although the majority of my time was spent talking to delegates, I managed to attend several of the excellent seminars and demos, including the new WCF LOB Adapters (to be released in January 2008) and PNMSoft’s workflow tools. Of particular interest, however, was the closing address from Mike Woods, BizTalk Server Product Manager at Microsoft, who gave the ‘Oslo’ brief.

Following on from Tim Rayburn’s Oslo thoughts after the keynote, here are my own notes:

Oslo – ‘Delivering the Vision’

Oslo aims to move services from the server into ‘the cloud’, giving architects much more freedom over where services are deployed and making those services far more accessible. Furthermore, it plans to increase functionality across a range of applications to improve development productivity by a factor of 10, primarily through new unified Business Process Management (BPM) modelling tools.

Oslo will incorporate the following technologies and is likely to be available in beta at the beginning of 2008:

  • Windows Server 2008
  • BizTalk Server v.6
  • BizTalk Services v.1
  • Visual Studio v.10
  • The .NET Framework 4.0
  • System Center v.5

Oslo will introduce new versioning strategies that span the technology stack above, rather than each technology having its own versioning policy [this was answered during the Q&A and not much detail was offered].

During the Q&A it was also revealed that every element of the Oslo platform will be natively 64-bit capable; however, there is no intention to make BizTalk Server 64-bit only during this release cycle.

BizTalk Services – ‘The Cloud’

Microsoft see the emerging BizTalk Services as Software plus Services (S+S), rather than Software as a Service (SaaS). They see S+S as the encapsulation of a number of internally hosted and cloud services in an enterprise mashup fashion, rather than a single complete service (SaaS).

BizTalk Services enables firewalled applications to create publicly available endpoints, and currently supports federated identity and message routing. Further features will come on-stream during the beta and RC phases. BizTalk Services is not built on top of BizTalk Server (and therefore does not require a BizTalk licence).

Any WCF-enabled .NET application can publish messages on the BizTalk Services platform and equally subscribe (via WCF) to messages published from other sources. Mike presented an excellent manufacturing stock allocation demo based on BizTalk Server and BizTalk Services, with a great Live Earth mashup showing worldwide factories sending stock availability messages back over the BizTalk Services cloud.

Enterprises are encouraged to start using the BizTalk Services functionality, although it is unlikely to be released for production use until this time next year. Microsoft are not yet certain of the pricing structure for the service – some elements will be free (but without SLAs) and others will be chargeable. They are looking for feedback on the services as a whole, and suggestions on pricing are welcome.

With regard to hosting, Microsoft will provide the hosting platform for the initial release, but plan to offer the platform to other hosting companies post-Oslo, similar to the Exchange model. It is also hoped that the service will be available for hosting internally within the enterprise, allowing companies to move from a hosted to an internal platform simply by deploying a copy of the [hosted] configuration locally and changing the relevant WCF pub/sub endpoints, similar to the hosted CRM model. Unfortunately, there are no firm plans on when this functionality will be available.

Further details about BizTalk Services, including the beta, can be found at: http://labs.biztalk.net/

Business Process Management (BPM) Modelling

The new unified BPM modelling platform aims to take the current ‘siloed’ view of BPM (where each department only looks at its own processes) and generate models of business processes across the whole enterprise, describing them in a new modelling language, which is likely to be similar in concept to MSIL.

Although this new language is not yet finalised, the product team do want to submit it to the standards bodies at a later stage. Furthermore, it aims to overcome the difficulties encountered with Case Management tools in the early to mid-nineties by having the new abstracted language executed directly, rather than compiling it down into C# and losing essential definition and meaning along the way (apparently the cause of Case Management’s downfall).

Modeling will be accomplished in four stages:

  • Analysts will create models that define both electronic and human business processes.
  • Models will be translated into a common ‘model language’, then stored, versioned and shared in a central business-visible repository (possibly SharePoint). This repository will serve as the reference environment for future review.
  • Development teams will enhance the process models with process functionality.
  • Models will be deployed to processing servers (BizTalk v.Next) or ‘The Cloud’ for execution, also allowing services to discover and communicate with one another.

Managed Services Engine

Mike outlined a new product to be released as part of Oslo, called the Managed Services Engine (MSE). From what I could tell, MSE is designed to act as a broker between applications calling web services and those exposing them. Details were a bit scant, so I’m including the following from the CodePlex site:

The Managed Services Engine (MSE) is one approach to facilitating Enterprise SOA through service virtualization. Built upon the Windows Communication Foundation (WCF) and the Microsoft Server Platform, the MSE was developed by Microsoft Services as we helped customers address the challenges of SOA in the enterprise.

The MSE fully enables service virtualization through a Service Repository, which helps organizations deploy services faster, coordinate change management, and maximize the reuse of various service elements. In doing so, the MSE provides the ability to support versioning, abstraction, management, routing, and runtime policy enforcement for Services.

The beta of the MSE can be downloaded from CodePlex at: http://www.codeplex.com/servicesengine

Update: Jesus Rodriguez has a bit more on the Managed Services Engine at: Managed Services Engine

And Finally

I remember reading a post on one of the BizTalk blogs, when the SOA & BPM conference series started, saying that WCF was the next big thing for us BizTalkers and that ‘if you’re not doing WCF at the moment, you should be’. That prompted me to go out and buy O’Reilly’s ‘Learning WCF’, but following this closing speech I would like to add a caveat to that statement: if you are a developer, I agree that WCF will start to play a big part in your career post-Oslo, if it doesn’t already. However, for the system architects out there, I believe that the cloud (and similar services) is likely to play an even bigger part in future projects, and it is this area that should be given serious focus. Time to download the BizTalk Services SDK…

Finally, I was lucky enough to introduce myself to Mike, which was a little like meeting a celebrity – especially after dedicating the last four years of my professional career to the product he manages!

Optimising System Memory for SQL Server – Part I

While researching a problem with AWE memory allocation in SQL Server (see my previous post Using Address Windowing Extensions (AWE) with SQL Server), I found myself deep-diving into the Windows memory management model. I’d like to share my findings here to help those tasked with setting up a SQL Server instance (whether for BizTalk or any other application) implement the best possible system memory configuration.

In this two-part post I’ll cover the various memory management options available, to ensure you get the most out of your physical memory / operating system choice, before moving on to review the SQL Server options in Part II. Please note that the following information relates to the 32-bit architecture unless otherwise stated – I plan on posting a similar article on the 64-bit architecture in the near future.

Virtual Memory and Memory Management

On the x86 family of processors, every process is provided with 4GB of virtual address space. By default, the first 2GB is allocated to the operating system kernel and the latter 2GB to the user process. This virtual memory isn’t real ‘physical’ memory – as a process makes memory allocations, physical storage is provided, mixed between physical RAM and the system paging file*. Windows transparently handles copying data to and from the paging file, so that an application can allocate more memory than physically exists in the machine and so that multiple applications can have equal access to the machine’s physical RAM.

* (this is how multiple applications can run on a system with 512MB of RAM, each with a virtual address space of 4GB – it’s not real memory, but it seems like it to the application)
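As an aside, you can see this split from code. The following Win32 C sketch (error handling omitted) asks Windows for the range of addresses available to user-mode code; on a default 32-bit system, the upper bound sits just below 2GB:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);

    /* Default 32-bit systems report a maximum just below 2GB (0x7FFEFFFF);
       with the /3GB switch described below it rises towards 3GB. */
    printf("User-mode address range: %p - %p\n",
           si.lpMinimumApplicationAddress,
           si.lpMaximumApplicationAddress);
    return 0;
}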

Tuning Virtual Memory

Windows NT 4.0 introduced the /3GB switch (added to the [operating systems] section of the boot.ini file), which allows system administrators to modify the split of virtual memory between the OS and user processes. After adding the /3GB switch (and restarting), Windows allocates just 1GB to the kernel-mode address space, allowing a process’s user-mode address space to increase to 3GB. Note that a process only sees the larger address space if its executable is linked with the /LARGEADDRESSAWARE flag – which SQL Server is.

In addition to the /3GB switch, Windows XP and Windows Server 2003 include the /USERVA switch, which allows finer-grained control over how virtual memory is split between the kernel-mode and user-mode address spaces. To use it, simply indicate how much memory should be provided for user-mode address space: for example, /USERVA=2560 configures 2.5GB for user-mode space and leaves the remaining 1.5GB for the kernel. When using the /USERVA switch, the /3GB switch must also be present. (Windows Server 2008 replaces boot.ini with the Boot Configuration Data store; the equivalent setting there is applied with bcdedit /set increaseuserva.)

To add either the /3GB or /USERVA switch, go to System Properties -> Startup and Recovery and click Edit under System Startup. Once reconfigured, your [operating systems] section should look something like this (the first example uses /3GB alone, the second adds /USERVA):

[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003, Enterprise" /noexecute=optout /fastdetect /3GB

[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003, Enterprise" /noexecute=optout /fastdetect /3GB /USERVA=2560

More information on the /3GB and /USERVA switches can be found in the Microsoft KB articles 316739 and 810371.

Utilising all Physical Memory: Physical Address Extension (PAE)

PAE support was added by Intel starting with the Pentium Pro family of processors* and provides a memory-mapping model that allows Windows to access up to 64GB of physical memory, rather than the standard 4GB. In PAE mode, the memory management unit implements page directory entries (PDEs) and page table entries (PTEs) that are 64 bits wide (rather than the standard 32 bits), supporting 36-bit physical addresses, and adds a page directory pointer table to manage these high-capacity tables and index into them, allowing the operating system to recognise up to 64GB.

In practice, this means that although Windows processes are still given a 4GB allocation of virtual memory (virtual memory is still addressed using 32-bit pointers, limiting each process’s address space to 4GB), multiple processes can immediately benefit from the increased RAM, as they are less likely to encounter physical memory restrictions and begin paging.
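This distinction – what the machine has versus what one process can address – is easy to demonstrate. A quick Win32 C sketch (error handling omitted) using GlobalMemoryStatusEx reports both figures; on a /PAE system with 8GB of RAM, the first climbs to the full 8GB while the second stays at the usual 2GB of user-mode space:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORYSTATUSEX ms;
    ms.dwLength = sizeof(ms);              /* must be set before the call */
    GlobalMemoryStatusEx(&ms);

    /* ullTotalPhys is all the RAM the OS can see (beyond 4GB under /PAE);
       ullTotalVirtual is this process's user-mode address space, which is
       unaffected by PAE. */
    printf("Physical memory: %I64u MB\n", ms.ullTotalPhys / (1024 * 1024));
    printf("Process user-mode address space: %I64u MB\n",
           ms.ullTotalVirtual / (1024 * 1024));
    return 0;
}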

A specific version of the Windows kernel is required to use PAE: either Ntkrnlpa.exe for uniprocessor systems or Ntkrpamp.exe for multiprocessor systems, both located in the i386\Driver.cab file. No additional work needs to be undertaken by the system administrator, apart from adding the /PAE switch in a similar fashion to the /3GB or /USERVA switches. If, however, you are running hardware that supports hot-adding memory, the /PAE switch is enabled by default (hot-add memory is only supported by Windows Server 2003 Enterprise and Datacenter editions). Note: 64-bit versions of Windows do not support PAE. The /PAE switch can be used with or without the /3GB switch, as detailed later.

To manually add the /PAE switch, add the following to your boot.ini file:

[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003, Enterprise" /noexecute=optout /fastdetect /PAE

The following list details the maximum physical memory that each Windows version can recognise with the /PAE switch enabled:

  • Windows 2000 Server – 4GB maximum
  • Windows 2000 Advanced Server – 8GB maximum
  • Windows 2000 Datacenter Server – 32GB maximum
  • Windows Server 2003 Web Edition – 2GB maximum
  • Windows Server 2003 Standard Edition – 4GB maximum
  • Windows Server 2003 Enterprise Edition – 32GB maximum
  • Windows Server 2003 Datacenter Edition – 64GB maximum
  • Windows Server 2008 Web Edition – 4GB maximum
  • Windows Server 2008 Standard Edition – 4GB maximum
  • Windows Server 2008 Enterprise Edition – 64GB maximum
  • Windows Server 2008 Datacenter Edition – 64GB maximum
  • Windows Server 2008 Datacenter Edition (Server Core) – 64GB maximum

Unless you have a system with more than 4GB of physical memory, there is little point in enabling PAE; however, PAE can be enabled on Windows XP SP2, Windows Server 2003 and later 32-bit versions of Windows to support hardware-enforced Data Execution Prevention (DEP).

I’ve provided only a brief overview of the Physical Address Extensions here; for more background reading please see the following: Microsoft KB articles 283037 and 268363, Windows Hardware Developer Central article Physical Address Extension – PAE Memory and Windows.

* The PAE extension is also supported on AMD chipsets, although I can’t find any hard evidence on the AMD website.

Address Windowing Extensions (AWE)

Unlike the /PAE switch, the AWE facility in Windows exists to allow applications – such as SQL Server – to access more than 4GB of physical memory. AWE removes the 4GB physical memory limit of the 32-bit architecture by enabling code to allocate large chunks of physical memory and then map access to that physical memory into a window of virtual memory that is 32-bit addressable. Because AWE exists to allocate memory above the 4GB boundary, there is little point enabling it on a system with 4GB or less of physical RAM.

One thing to note about AWE memory is that it is never swapped to the system paging file (i.e. disk). If you review the AWE API, you’ll see that the methods refer to physical memory allocation: AWE memory is physical memory that is never swapped to or from the system paging file. This explains why using the ‘Use AWE to allocate memory’ flag in SQL Server requires the ‘Lock Pages in Memory’ Local Security Policy setting (see Using Address Windowing Extensions (AWE) with SQL Server) – pages can only be locked in memory if this local security policy is set. It also explains why (or rather how) applications such as SQL Server and Exchange can consume such great amounts of physical RAM when using AWE to allocate memory.
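For the curious, the pattern described above boils down to three Win32 calls. The sketch below is purely illustrative – the 64MB figure is arbitrary and error handling is minimal – and is in no way how SQL Server itself is implemented:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);

    /* Request 64MB worth of physical pages; AWE pages never touch the
       paging file. */
    ULONG_PTR pages = (64 * 1024 * 1024) / si.dwPageSize;
    ULONG_PTR *pfns = HeapAlloc(GetProcessHeap(), 0, pages * sizeof(ULONG_PTR));

    /* Fails with ERROR_PRIVILEGE_NOT_HELD unless the account holds the
       'Lock Pages in Memory' privilege (SeLockMemoryPrivilege). */
    if (!AllocateUserPhysicalPages(GetCurrentProcess(), &pages, pfns)) {
        printf("AllocateUserPhysicalPages failed: %lu\n", GetLastError());
        return 1;
    }

    /* Reserve a 32-bit addressable 'window' of virtual memory... */
    void *window = VirtualAlloc(NULL, pages * si.dwPageSize,
                                MEM_RESERVE | MEM_PHYSICAL, PAGE_READWRITE);

    /* ...and map the physical pages into it. Remapping different sets of
       pages through the same window is how a 32-bit process reaches
       physical memory beyond 4GB. */
    MapUserPhysicalPages(window, pages, pfns);

    /* ... read and write through 'window' as ordinary memory ... */

    MapUserPhysicalPages(window, pages, NULL);                 /* unmap   */
    FreeUserPhysicalPages(GetCurrentProcess(), &pages, pfns);  /* release */
    return 0;
}

Note how the failure mode of the very first call maps directly onto the SQL Server behaviour described in the post below: no ‘Lock Pages in Memory’ privilege, no AWE.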

Best Practice Configurations

Based on the information provided above, Microsoft recommend the following physical memory / operating system memory switch combinations:

  • Up to 4GB physical RAM – /3GB switch (or /USERVA)
  • 4GB to 16GB physical RAM – /3GB and /PAE switches
  • More than 16GB physical RAM – /PAE switch only

A Final Note about the /3GB Switch

You will notice that the recommendations above state that a server with more than 16GB of physical RAM should not be configured with the /3GB switch. When you apply the /3GB switch, you limit the virtual address space available to the kernel to 1GB (from the usual 2GB), which is too small for the virtual memory manager to store the memory mapping tables needed to access more than 16GB of RAM. As a result, the memory manager imposes a physical memory limit of 16GB on a system with both /3GB and /PAE enabled: even if such a system has 32GB or more of physical memory, only 16GB will be recognised.

However, although 16GB is the hard upper limit imposed by the kernel, most workloads will actually show decreased throughput under /3GB on systems with 12GB of memory, and many on systems/workloads with as little as 8GB. Therefore, ensure you thoroughly test the use of the /3GB switch within your UAT/reference environment before applying it to live systems.

I’m going to close Part I here, because there is plenty for the reader to take on board before we start to look at how these considerations affect SQL Server. Part II will be out in the next few days.

References

  1. Comparison of Windows Server 2003 Editions
  2. RAM, Virtual Memory, Pagefile and all that stuff
  3. Protecting RAM Secrets with Address Windowing Extensions
  4. PAE and /3GB and AWE oh my…
  5. Memory Limits for Windows Releases
  6. Do I have to assign the Lock Pages in Memory privilege for Local System?
  7. Inside SQL Server 2000’s Memory Management Facilities
  8. Lock escalation in SQL Server 2005
  9. Large Memory Support – 4-Gigabyte Tuning
  10. Myth: PAE increases the virtual address space beyond 4GB
  11. Hot-Add Memory Support in Windows Server 2003

Please read my disclaimer in relation to this post.

Using Address Windowing Extensions (AWE) with SQL Server

I discovered the following while researching why a recently installed instance of SQL Server wouldn’t use any more than 2GB of memory, even though the ‘Use AWE to allocate memory’ flag was set.

Using AWE with SQL Server

To allow SQL Server to use all of the memory available to the operating system, the Windows Address Windowing Extensions (AWE) facility must be used, either by enabling the ‘Use AWE to allocate memory’ flag in the SQL Server Properties dialog or by issuing the following command against the target server (note that ‘awe enabled’ is an advanced option, so ‘show advanced options’ must be switched on first):

sp_configure 'show advanced options', 1
RECONFIGURE
GO
sp_configure 'awe enabled', 1
RECONFIGURE
GO

The change requires a restart of SQL Server; however, before you do that, ensure that you add the account the SQL Server service runs under to the ‘Lock Pages in Memory’ Local Security Policy (see Microsoft KB 811891 for the exact details of how to do this). If you don’t update the local security policy, SQL Server will not actually use AWE and will continue to use only 2GB of memory; furthermore, you’re likely to see the following in the SQL Server log:

Cannot use Address Windowing Extensions because lock memory privilege was not granted.

A Small Caveat

I mentioned above that the AWE flag allows SQL Server to use all of the memory available to the operating system. This isn’t exactly true: it depends on the physical memory actually installed *and* how much of it Windows can actually ‘see’.

In researching this problem, I spent some time digging into the Windows memory management model and found some extremely interesting information relating to the /3GB, /PAE and related boot.ini switches. This research was a real learning curve, and the findings are something I think anyone putting together an enterprise infrastructure should be aware of – I plan on blogging the various options over the next few days.

SQL Server Agent – Supported Service Account Types

A SQL Server gotcha for today – following the installation of a SQL Server 2005 active/passive cluster, I ran the Microsoft Baseline Security Analyser (sorry, Analyzer) to check that nothing was missing or incorrectly configured. Halfway through the report I noticed the following warning regarding the SQL Server Agent service:

SQL Server Service [CLIENTSQL02\SQLSERVERAGENT] In Unrecommended Account On Host [CLIENTSQL02].
We recommend that the service [SQLSERVERAGENT] on host [CLIENT02] be run under Network Service Account. Currently it is designated to run under the account [DOMAIN\SqlSvrSvc].

This warning threw me, as the installation wizard requires you to use a domain user as the account for the SQL Server Agent service in a clustered installation, and explicitly does not allow you to select the Network Service account.

So what’s the deal here, Microsoft? After a little Googling, it would appear that there is one of two explanations:

  • The MBSA does not report correctly for a clustered environment – the SQL Server Books Online page ‘Service Account Types Supported for SQL Server Agent’ states that the Network Service account (NT AUTHORITY\NetworkService) is supported on a non-clustered server, but is not supported on a clustered server. There is lots more gumpf on the page and I would recommend that you read it all before coming to a conclusion if you are experiencing the same problem; or
  • The client had evicted one of the nodes to perform a motherboard firmware upgrade (damn you, fancy new HP Blades!!) before the MBSA was run, so it is possible that MBSA thought it was looking at a standalone rather than a clustered environment.

I think the point to take home here is not to take the MBSA report as gospel – before you go ahead and implement a change on a live system, test it first in your UAT or reference environment and check that it produces the desired effect.

Update: Also just found this in the SQL Server Books Online page Selecting an Account for the SQL Server Agent Service:

Because multiple services can use the Network Service account, it is difficult to control which services have access to network resources, including SQL Server databases. We do not recommend using the Network Service account for the SQL Server Agent service.

I suppose that answers it then – don’t use the Network Service account for the SQL Server Agent service.