With the standalone array created, as discussed in Part 2, we can now provide high availability by using the NLB feature of Forefront TMG Enterprise Edition. In addition to fault tolerance, NLB provides load balancing and allows array members to operate in an Active/Active configuration. More information on Forefront TMG integrated NLB can be found here.
At a high-level, configuration of integrated NLB includes the following steps:
- Prepare the environment for NLB
- Enable Integrated NLB
- Configure Forefront TMG Firewall Policy for NLB Manager (Optional)
|Please Note: Although this series is based upon a workgroup deployment, the preparation and configuration of integrated NLB is exactly the same for a domain-joined TMG scenario.|
Prepare the Environment for NLB
Before enabling integrated NLB in TMG, you need to consider the potential implications of NLB for the existing networking environment and choose the most appropriate NLB operating mode (unicast, multicast or IGMP multicast). An overview of NLB considerations for ISA Server 2006 Enterprise Edition is provided in one of my previous blog posts here; much of that information is still relevant for Windows Server 2008 and Forefront TMG.
If you are running Forefront TMG on a virtualisation platform like Hyper-V or VMware ESX, there are additional considerations, as discussed next:
If you are using Hyper-V RTM as your virtualisation platform, you will need to configure static MAC address entries on each virtual machine as discussed here. This needs to be completed for each virtual NIC that will be NLB enabled.
If you are using Hyper-V R2 as your virtualisation platform, you will need to enable a new Hyper-V R2 setting called Enable Spoofing of MAC Addresses on each virtual machine. This needs to be completed for each virtual NIC that will be NLB enabled. This option was first added in Hyper-V R2, as discussed here, and an example is provided below for completeness:
First for TMG03:
Next for TMG04:
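For reference, the same change can also be made from the command line, although only on later Hyper-V hosts (Server 2012 onwards) where the Hyper-V PowerShell module is available; on Hyper-V R2 itself the setting is only exposed in the virtual machine's network adapter properties, as shown above. A minimal sketch, using the array member names from this series:

```shell
# Enable MAC address spoofing on every NLB-enabled virtual NIC of each
# array member (requires an elevated prompt on the Hyper-V host)
Set-VMNetworkAdapter -VMName TMG03 -MacAddressSpoofing On
Set-VMNetworkAdapter -VMName TMG04 -MacAddressSpoofing On

# Confirm the setting took effect on both virtual machines
Get-VMNetworkAdapter -VMName TMG03, TMG04 |
    Select-Object VMName, MacAddressSpoofing
```

Note that if a virtual machine has multiple virtual NICs, the `-Name` parameter can be used to target only the NLB-enabled adapters.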
If you are using VMware ESX as your virtualisation platform, you will need to apply the relevant VMware guidance as discussed here.
Assuming you have the environment ready for NLB, we can now finally enable integrated NLB for Forefront TMG!
Enable Integrated NLB
With all preparations complete, we can now enable integrated NLB for Forefront TMG Enterprise Edition.
For TMG03 only:
|Important!: A recent TMG update was released to address an NLB issue, as discussed here, summarised as “In an array-based TMG 2010 deployment with Integrated NLB enabled, traffic may not reach its destination”. It is therefore strongly recommended to install this update on all NLB enabled Forefront TMG servers. The update can also be downloaded from here.|
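Once integrated NLB is enabled and TMG has finished creating the cluster, it is worth confirming that all array members have converged. A quick sketch of how this can be checked from any array member, assuming Windows Server 2008 R2 where the NetworkLoadBalancingClusters PowerShell module ships in-box:

```shell
# Import the in-box NLB module (Windows Server 2008 R2 and later)
Import-Module NetworkLoadBalancingClusters

# List the cluster nodes; each node should report a state of "Converged"
Get-NlbClusterNode

# The built-in command line tool reports the same convergence information
nlb.exe query
```

The same status information is also surfaced in the TMG Management Console, so this is purely an additional sanity check.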
Configure TMG Firewall Policy for NLB Manager (Optional)
Many people who have used NLB before will be familiar with the Network Load Balancing Manager tool. However, when running this against TMG clusters you may notice that the tool is unable to connect to remote cluster nodes/hosts. This is caused by TMG blocking NLB Manager communications between cluster hosts. The default system policy included with TMG allows RPC communication between hosts, but the NLB Manager tool also uses DCOM calls, as discussed here, which are blocked by the Default Deny rule included with TMG.
When running TMG, it is not actually necessary to use NLB Manager, as similar functionality is already provided within the TMG Management Console; this is why I have marked this step as optional.
However, for those who do want to use NLB Manager this can be achieved by adding a custom access rule which permits the necessary DCOM communications between NLB cluster nodes (array members). An example is provided below.
For TMG03 Only:
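For reference, the key settings of the custom access rule are sketched below, together with a quick connectivity check. The rule name and computer set name are illustrative, and Test-NetConnection assumes a later Windows version (Windows 8/Server 2012 onwards); on earlier systems a tool such as telnet or portqry can be used instead:

```shell
# Outline of the custom access rule (created in the TMG console); the
# rule name and computer set name below are illustrative:
#   Name:      Allow NLB Manager between array members
#   Action:    Allow
#   Protocols: RPC (all interfaces) - DCOM rides on RPC, using the
#              endpoint mapper on TCP 135 plus dynamic high ports
#   From/To:   A computer set containing the array members (TMG03, TMG04)
# Note: the RPC filter's "Enforce strict RPC compliance" option may also
# need to be disabled on the rule to permit DCOM traffic.
#
# A quick way to confirm the RPC endpoint mapper is reachable from one
# array member to the other:
Test-NetConnection -ComputerName TMG04 -Port 135
```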
So there we have it: you should now have a fault tolerant, two-node standalone array which can be deployed in an environment without Active Directory. As discussed throughout this series, many of the elements covered (NLB in particular) are also relevant to domain-joined TMG deployments, and the individual articles can be used as standalone configuration guides for this purpose.
Finally, for those that have followed the entire series, maybe now you can see why I still choose to implement a dedicated intra-array network: it makes the entire configuration easier to manage, more logical, better optimised for performance and ultimately more secure. Hopefully you agree ;)
Hope this helps…