
Optimizing Internet Information Services and WAN Performance

Configuring Microsoft Internet Information Services (IIS) settings on the server that is running Microsoft Dynamics CRM can benefit both Microsoft Dynamics CRM itself and any custom applications, plug-ins, or add-ins that you may have developed by using the Microsoft CRM 3.0 SDK. This section describes changes that you can make to IIS settings that can improve performance.

For general information about how to configure IIS to improve performance of Web service calls from ASPX pages, see the “At Your Service: Performance Considerations for Making Web Service Calls from ASPX Pages” article on MSDN:

http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnservice/html/service07222003.asp

Working around the HTTP Specification’s Two-Connection Limit

The HTTP specification indicates that an HTTP client should make a maximum of two simultaneous TCP connections to any single server. This keeps a single browser from overloading a server with connection requests when it browses to a page that has many images, such as 120 embedded thumbnail images. Instead of creating 120 TCP connections and sending an HTTP request on each, the browser creates only two connections and then sends the 120 HTTP requests for the thumbnail images over those two connections.

The problem with this approach becomes clear when you consider an example with 50 simultaneous users. If a MapPoint Web Service call had to be made for each of those users, 48 of them would be left waiting for one of those two connections to become available.

You may be able to discover the source of a performance bottleneck by raising the two-connection limit. However, because the two-connection limit is part of the HTTP specification, we do not recommend making this change permanent on a production server.

The default two-connection limit for connecting to a Web resource can be controlled by using a configuration element named connectionManagement. The connectionManagement setting enables you to add the names of sites for which you want a connection limit that differs from the default. The following code can be added to a typical Web.config file to increase the connection limit to 40 for all servers to which you connect:

<configuration>
  <system.net>
    <connectionManagement>
      <add address="*" maxconnection="40" />
    </connectionManagement>
  </system.net>
</configuration>

Note:  There is never a limit to the number of connections that you can make to a local computer. Therefore, if you are connecting to localhost, this setting has no effect.
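
If you want to raise the limit only for the specific server that your application calls, rather than for every address, you can list that server explicitly. The following is a minimal sketch that uses a hypothetical host name; substitute the address of the Web service that your application actually calls:

<configuration>
  <system.net>
    <connectionManagement>
      <!-- Hypothetical server address; replace it with the host that your application calls. -->
      <add address="http://mapservice.example.com" maxconnection="40" />
      <!-- All other addresses keep the HTTP default of two connections. -->
      <add address="*" maxconnection="2" />
    </connectionManagement>
  </system.net>
</configuration>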

 

Configuring Microsoft .NET ThreadPool Settings

For more information about how to configure Microsoft .NET ThreadPool settings, see the Knowledge Base article “Contention, poor performance, and deadlocks when you make Web service requests from ASP.NET applications”:

·         http://support.microsoft.com/kb/821268

You can tune the parameters in your Machine.config file to best fit your situation. However, if you are making one Web service call to a single IP address from each ASPX page, we recommend that you tune the parameters in the Machine.config file so that they use the following settings:

·         Set the values of the maxWorkerThreads parameter and the maxIoThreads parameter to 100.

·         Set the value of the maxconnection parameter to 12*N (where N is the number of CPUs that you have).

·         Set the values of the minFreeThreads parameter to 88*N and the minLocalRequestFreeThreads parameter to 76*N.

·         Set the value of minWorkerThreads to 50.

Important:          By default, minWorkerThreads is not in the configuration file. You must add it.

Several of these recommendations include a formula that calculates the number of CPUs on a server. The variable that represents the number of CPUs in the formulas is N. For these settings, if you have hyperthreading enabled, you must use the number of logical CPUs instead of the number of physical CPUs. For example, if you have a four-processor server for which hyperthreading has been enabled, the value of N in the formulas will be 8 instead of 4.

Note:  When you use this configuration, you can execute a maximum of 12 ASP.NET requests per CPU at the same time because 100-88=12. Therefore, at least 88*N worker threads and 88*N completion port threads are available for other uses (such as Web service callbacks).

For example, suppose you have a server that has four processors and hyperthreading enabled, so that N is 8. Based on these formulas, you would use the following values for the configuration settings that are mentioned in this section.

<processModel maxWorkerThreads="100" maxIoThreads="100" minWorkerThreads="50" />
<httpRuntime minFreeThreads="704" minLocalRequestFreeThreads="608" />
<connectionManagement>
  <add address="[ProvideIPHere]" maxconnection="96" />
</connectionManagement>
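
Note that these elements belong in different sections of the Machine.config file: processModel and httpRuntime are children of the <system.web> section, and connectionManagement is a child of the <system.net> section. The following sketch shows that placement, repeating the same values:

<configuration>
  <system.web>
    <processModel maxWorkerThreads="100" maxIoThreads="100" minWorkerThreads="50" />
    <httpRuntime minFreeThreads="704" minLocalRequestFreeThreads="608" />
  </system.web>
  <system.net>
    <connectionManagement>
      <add address="[ProvideIPHere]" maxconnection="96" />
    </connectionManagement>
  </system.net>
</configuration>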

 

For more information, see “Improving ASP.Net Performance” in the “Improving .NET Application Performance and Scalability” section of the MSDN Library:

·         http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnpag/html/scalenetchapt06.asp

Configuring the Memory Limit

Configuring and tuning the memory limit is very important for the cache to perform optimally. The ASP.NET cache starts trimming entries, based on a least recently used (LRU) algorithm and the CacheItemPriority enumerated value assigned to each item, after memory consumption comes within 20% of the configured memory limit. If the memory limit is set too high, the process can be recycled unexpectedly, and the application might also experience out-of-memory exceptions. If the memory limit is set too low, more time must be spent performing garbage collections, which decreases overall performance.

Empirical testing shows that the possibility of receiving out-of-memory exceptions increases when private bytes exceed 800 MB. A good rule to follow when deciding whether to increase or decrease this number is that the 800 MB figure applies only to the .NET Framework 1.0; with the .NET Framework 1.1 and the /3GB switch, you can increase this number to 1,800 MB.

When you use the ASP.NET process model, you configure the memory limit in the Machine.config file as follows.

Note:  If this limit is set too low, the Microsoft Dynamics CRM application pool (CRMAppPool) will recycle too frequently and will prevent some larger processes from completing correctly. By default, this is set to a value of 60 with the Microsoft .NET Framework 1.1.

<processModel memoryLimit="50" />

 

This value controls the percentage of physical memory that the worker process can consume; the process is recycled if this value is exceeded. In the previous sample, if there are 2 GB of RAM on the server, the worker process is limited to 50% of physical RAM, in this case 1 GB. In other words, the process recycles if the memory used by the worker process goes beyond 1 GB. You can monitor the worker process memory by using the Process performance object and the Private Bytes counter.

More Information

For more information about how to tune the memory limit and about the /3 GB switch, see "Configure the Memory Limit" and "/3GB Switch" in "Tuning .NET Application Performance" in the “Improving .NET Application Performance and Scalability” section of the MSDN Library:

·         http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnpag/html/scalenetchapt17.asp

Configuring Web Gardens

By default, ASP.NET uses all available CPUs. In Web garden mode, ASP.NET creates one worker process for each CPU, and each process is bound to a single CPU. Web gardens offer an additional layer of reliability and robustness: if a process crashes, other processes can still service incoming requests.

Web gardens may perform better under the following scenarios:

·         The application uses STA objects heavily.

·         The application accesses a pool of resources that are bound by the number of processes. For example, a single process is restricted to using a particular number of resources.

To determine the effectiveness of Web gardens for your application, run performance tests, and then compare your results with and without Web gardens. Typically, in the two scenarios that are described in this section, you are likely to notice a larger benefit with servers that contain four or eight CPUs.

Note   Do not use the in-process session state store or any technique that causes process affinity if Web gardens are enabled.

IIS 6.0 vs. the ASP.NET Process Model

By default, the ASP.NET process model is not enabled in IIS 6.0. If you enable Web gardens, you may adversely affect the performance of the garbage collector, which makes sure that unused memory is released, because the server version of the garbage collector is still used even though each process is bound to a single CPU. A further disadvantage is that one worker process is created for each CPU, and each additional worker process consumes additional system resources.

Enabling Web Gardens by Using IIS 6.0

You can enable Web gardens in IIS 6.0 by using the Internet Information Services Manager. This is the recommended method for enabling Web gardens for Microsoft Dynamics CRM.

To enable Web gardens:

1.    Right-click the application pool, CRMAppPool, for which you want to enable Web gardening, and then click Properties.

2.    Click the Performance tab.

3.    In the Web garden section, specify the number of worker processes that you want to use.

Enabling Web Gardens by Using the ASP.NET Process Model

In the <processModel> section of the Machine.config file, set the webGarden attribute to true, and then configure the cpuMask attribute as follows.
<processModel webGarden="true" cpuMask="0xffffffff" />
 

Configuring the cpuMask Attribute

The cpuMask attribute specifies the CPUs on a multiprocessor server that are eligible to run ASP.NET processes. By default, all CPUs are enabled and ASP.NET creates one process for each CPU. If the webGarden attribute is set to false, the cpuMask attribute is ignored, and only one worker process runs. The value of the cpuMask attribute specifies a bit pattern that indicates the CPUs that are eligible to run ASP.NET threads. The following table includes several examples.

 

CPUs | Hexadecimal | Bit pattern | Results
2 | 0x3 | 11 | 2 processes; uses CPUs 0 and 1.
4 | 0xF | 1111 | 4 processes; uses CPUs 0, 1, 2, and 3.
4 | 0xC | 1100 | 2 processes; uses CPUs 2 and 3.
4 | 0xD | 1101 | 3 processes; uses CPUs 0, 2, and 3.
8 | 0xFF | 11111111 | 8 processes; uses CPUs 0, 1, 2, 3, 4, 5, 6, and 7.
8 | 0xF0 | 11110000 | 4 processes; uses CPUs 4, 5, 6, and 7.
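
As an additional illustration that is not taken from the table above, a mask of 0x5 (bit pattern 0101) on a four-processor server would create two worker processes and restrict ASP.NET to CPUs 0 and 2:

<processModel webGarden="true" cpuMask="0x5" />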

More Information

For more information about how to use ASP.NET Web gardens, see the Knowledge Base article "How to restrict ASP.NET to specific processors in a multiprocessor system":

·         http://support.microsoft.com/default.aspx?scid=kb;en-us;815156

Disabling Tracing and Debugging

Tracing and debugging may cause performance issues. We do not recommend using tracing and debugging while the application is running in a production environment. They are not typically enabled in a Microsoft Dynamics CRM Server environment, but you can disable them explicitly as follows.

Disable tracing and debugging in the Machine.config and Web.config files, as shown in the following sample:

<configuration>
  <system.web>
    <trace enabled="false" pageOutput="false" />
    <compilation debug="false" />
  </system.web>
</configuration>

Disabling Microsoft Dynamics CRM Server Platform Tracing

If you are not using Microsoft Dynamics CRM 3.0 Server platform tracing for troubleshooting, it should be disabled. Platform tracing can affect the performance of a production server.

To disable Microsoft Dynamics CRM 3.0 Server platform tracing:

1.    Start REGEDIT.

2.    Open HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSCRM.

3.    If platform tracing is enabled, you will see the following registry values:

·         TraceDirectory

·         TraceRefresh

·         TraceSchedule

·         TraceCategories

·         TraceCallStack

·         TraceEnabled

4.    Double-click the TraceEnabled value and set it to 0.

5.    Double-click the TraceRefresh value.

6.    Increase the TraceRefresh value by 1, and then click OK.
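
If you prefer to script this change rather than use Registry Editor, you can make the same edit from a command prompt. The following is a minimal sketch that assumes the default MSCRM registry key shown above; the TraceRefresh value must still be incremented (as in steps 5 and 6) so that the running Microsoft Dynamics CRM services pick up the change:

rem Turn off Microsoft Dynamics CRM platform tracing (assumes the default MSCRM key).
reg add "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSCRM" /v TraceEnabled /t REG_DWORD /d 0 /f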

Working with Microsoft Windows Server Terminal Services

Users in your organization may benefit from using Microsoft Windows Server Terminal Services in any of the following circumstances:

·         Users are connecting over a WAN or VPN using the Microsoft Dynamics CRM Web application or Microsoft Dynamics CRM client for Outlook.

Note:        The Microsoft Dynamics CRM laptop client is not supported with Terminal Services with Microsoft Dynamics CRM 3.0.

·         Round-trip client/server latency does not meet your expectations.

·         Remote users are connecting with a finite (or budgeted) amount of bandwidth. Terminal Services can enable more users to use that connection with better results (faster, with reduced latency) because more work is performed on the Terminal Services computer instead of the remote clients.

Monitoring and Optimizing Microsoft Dynamics CRM System Performance

Performance monitoring checks systems to ensure that your organization is making optimal use of the hardware and software resources at your disposal and that the resources are meeting your performance goals. Through effective monitoring, you can determine whether you are meeting performance goals. If you are not, you can determine the areas that are causing problems. Over time, you can also use performance monitoring to generate data that can be used in trend analysis. This enables you to predict possible performance and availability issues in the future and helps you solve problems before they occur.

To effectively monitor the performance of the Microsoft Dynamics CRM system, you should examine performance monitor counters on each server that makes up the environment.

This section provides details about the following information:

·         How to analyze some of the specific counters that have been found to be most useful when monitoring a Microsoft Dynamics CRM environment.

·         How to tune the Microsoft .NET Framework and common language runtime (CLR).

·         How to use performance counters to help determine CLR bottlenecks.

Frequently, this analysis will indicate the changes to make to optimize Microsoft Dynamics CRM. You can use the information here to help determine which, if any, hardware upgrades are necessary together with any operational practices to help improve the performance of Microsoft Dynamics CRM.

Warning         Before you perform any of the following optimization procedures, back up your databases and Active Directory. If you do not back up these items, you risk losing the information that is contained in them.

Monitoring Performance on Windows 2000 and Windows Server 2003

The Windows 2000 and Windows Server 2003 operating systems include System Monitor for analyzing the performance of the system. System Monitor consists of Performance Monitor and Network Monitor. You can add objects and counters to these monitors. For example, when you add SQL Server, Exchange 2000 Server, or Exchange Server 2003 to that environment, additional objects and counters are installed. These can prove very useful in determining the overall health of Microsoft Dynamics CRM.

Note:  Remote monitoring is usually better than self-monitoring because the results are not skewed by the load that monitoring itself creates. For more information about remote monitoring, see the following Knowledge Base articles:

Knowledge Base Article Title | Knowledge Base Link
Creating a Log File to Send to Customers for Remote Monitoring | http://support.microsoft.com/kb/243283
Log Is Not Started When You Try to Start a Log with Remote Counters in System Monitor | http://support.microsoft.com/kb/240389

Creating a Baseline with System Monitor

If you use System Monitor to collect many performance counters, together with other tools that collect server-specific information, how can you know what numbers to expect? In some cases, there are specific figures to look for. In many more, the answer depends on several factors, such as the specifics of your hardware, the network environment in which it operates, and the functionality of the application.

To help you understand what figures to expect for your environment, you should use System Monitor to generate a baseline. You do this by measuring counters in a functional environment that works well. You can measure a baseline in your test environment. However, in your test environment, you should make sure that you are effectively simulating potential real-world use in your production environment.

As you collect your baseline figures, be aware that in typical use, the Microsoft Dynamics CRM environment will face different stresses at different times of the day. For example, there may be more stress on the system at the start of the work day, during database backup, or when reports are being run. As you collect your baseline, it can be very useful to combine logs over 24-hour periods with more intensive logging during these stressful periods.

Many organizations are also seasonal in nature. Your organization may, for example, have more CRM activity immediately before a holiday shopping season, or at the end of a financial year. You should continue to update baseline figures to guarantee that they accurately reflect the usage of Microsoft Dynamics CRM in your environment.

Heavily used Windows 2000 and Windows Server 2003 servers may have bottlenecks in several areas. Monitoring only the applications that are running on Windows 2000 and Windows Server 2003 will not give you information about the condition of the server itself. You should also monitor for bottlenecks in the Disk Subsystem, Memory, Processor, and Network Subsystem. Frequently, there will be multiple instances of disks and processors. Therefore, make sure that you monitor all instances (that is, each disk or each processor).

You should measure the counters shown in the following table for all servers in the Microsoft Dynamics CRM environment.

Note   To monitor disk counters, you must run the diskperf -y command so that the counters are enabled when the system starts.

 

Object | Counter | Comments
Logical Disk | % Free Disk Space | Especially important on computers that are running Exchange Server and SQL Server, because databases and transaction logs may fill disk space, resulting in loss of availability.
Physical Disk | Disk Reads/sec | Varies mainly with the usage of your environment. If you are experiencing performance problems and these figures are still low, this counter may help provide evidence of the problem.
Physical Disk | Disk Writes/sec | Varies mainly with the usage of your environment. If you are experiencing performance problems and these figures are still low, this counter may help provide evidence of the problem.
Physical Disk | Current Disk Queue Length | Generally, this should be at or near zero. On computers that are running SQL Server 2000, this counter can spike to high values but should not stay high for more than 30 seconds; anything longer indicates a potential bottleneck.
Physical Disk | Avg. Disk sec/Read | Generally similar to the published disk speed.
Physical Disk | Avg. Disk sec/Write | Generally similar to the published disk speed, or 1-2 milliseconds (ms) if you have write-back caching enabled on your RAID controller.
Memory | Pages/sec | Exchange 2000 and Exchange 2003 servers make heavy use of the pagefile, so on an Exchange server a lot of paging is not in itself an indication of a problem. For computers that are running SQL Server 2000, any paging is a detriment to performance, and this number should stay fairly low. For other servers, measure your paging against your baseline.
Memory | Page Reads/sec | This value should generally be less than 100. If the value is consistently high, you may have to increase system memory.
Memory | Page Writes/sec | This value should generally be less than 100. If the value is consistently high, you may have to increase system memory.
Paging File | % Usage | You may have to increase the size of your pagefile for Exchange Server. Try to keep this counter under 70%.
Process | Page Faults/sec | A page fault can be either a cache fault or a hard disk fault. For the true number of hard disk faults, subtract Cache Faults/sec from Page Faults/sec. For the SQLSERVR instance, this value should be at or near zero; any SQL Server paging beyond this indicates a bottleneck.
Processor | Interrupts/sec | Will vary depending on usage in your environment.
Processor | % Processor Time | Measure for a specific processor instance. When you use the _Total instance, the total percentage can be 100 times the number of processors. When 100% of the available processor time is used for an extended period, more processors are needed; also see the Processor Queue Length counter.
Process | % Processor Time | On Exchange servers and Microsoft Dynamics CRM servers, measure the inetinfo (IIS) instance. On domain controllers, measure lsass (the security subsystem, including Active Directory). On computers that are running SQL Server, measure the SQLSERVR instance.
System | Processor Queue Length | This is a cumulative value for all processors. A sustained value of more than double the number of processors indicates a processor bottleneck.
Network Segment | % Net Utilization | Will vary depending on usage in your environment.
Redirector | Bytes Total/sec | Will vary depending on usage in your environment.
Redirector | Network Errors/sec | A high figure generally indicates that the Redirector and one or more servers are having communication difficulties.
Server | Bytes Total/sec | Will vary depending on usage in your environment.
Server | Pool Paged Peak | Helps indicate whether the page file and physical memory are sized correctly.
Server Work Queues | Queue Length | A sustained queue length of more than four may indicate processor congestion.
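
If you want to capture these counters in a baseline log without using the System Monitor user interface, you can script the collection. The following is a minimal sketch that uses the logman tool included with Windows Server 2003; the counter paths, sample interval, and output location are illustrative choices rather than requirements:

rem Create a counter log named CRMBaseline that samples every 15 seconds, then start it.
logman create counter CRMBaseline -c "\Processor(_Total)\% Processor Time" "\Memory\Pages/sec" "\PhysicalDisk(_Total)\Current Disk Queue Length" -si 00:00:15 -o C:\PerfLogs\CRMBaseline
logman start CRMBaseline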

 

For more information about how to monitor Windows 2000 objects, see the Windows 2000 Server Resource Kit:

·         http://www.microsoft.com/windows2000/techinfo/reskit/

For information about performance parameters and settings for Windows Server 2003, see the “Performance Tuning Guidelines for Windows Server 2003” document:

·         http://www.microsoft.com/windowsserver2003/evaluation/performance/tuning.mspx

Monitoring the Performance of the Server That Runs IIS

Microsoft Dynamics CRM server is essentially an Internet Information Services (IIS) server that runs a Microsoft .NET-connected application. To monitor the overall health of the servers, you should collect information about the Windows 2000 and Windows Server 2003 counters mentioned in the previous section. One of the key counters to measure against a baseline is % Processor Time for the inetinfo (IIS) process instance. Generally, if the Microsoft Dynamics CRM server meets the recommended hardware requirements and does not perform any other tasks, you should find no performance issues on this server.

Monitoring the Performance of Exchange 2000 and Exchange 2003

Because Microsoft Dynamics CRM uses the Exchange implementation of Simple Mail Transfer Protocol (SMTP), you must monitor the SMTP Server object. Specifically, the Microsoft Dynamics CRM-Exchange E-Mail Router (the Router) is implemented as a transport event sink that runs on the pre-categorization event, so you should monitor counters that refer to the message categorizer. This is in addition to the Windows 2000 and Windows Server 2003 counters shown in the previous table.

The following table shows the most important counters to monitor.

 

Object | Counter | Comments
SMTP Server | Bytes Received/sec | The rate at which bytes are received by the SMTP server.
SMTP Server | Cat: Address Lookups/sec | The number of address lookups sent to Active Directory per second.
SMTP Server | Cat: Categorization Completed/sec | The total number of messages submitted to the categorizer that have been categorized.
SMTP Server | Cat: LDAP Searches/sec | The number of LDAP searches successfully dispatched per second.
SMTP Server | Cat: Messages Submitted/sec | The total number of messages submitted to the categorizer.
SMTP Server | Message Bytes Received/sec | The rate at which bytes are received in messages.
SMTP Server | Messages Delivered/sec | The rate at which messages are delivered to local (Exchange) mailboxes.
SMTP Server | Messages Received/sec | The rate at which incoming messages are received.
SMTP Server | DNS Queries/sec | The rate of DNS lookups on the server.

 

All the counters listed in this table will vary depending on how busy the server is. Frequently, this depends on how heavily the Exchange server is being used for Exchange e-mail purposes. However, monitoring these counters will enable you to see which Exchange servers are less heavily used. You may then decide to put the E-mail Router on one of these servers.

On the Exchange server itself, you may want to use the Monitoring and Status tool. This enables you to monitor items such as SMTP queue growth and to issue notifications if queues continue to grow for longer than a specified length of time.

Monitoring the Performance of SQL Server 2000

Microsoft Dynamics CRM depends heavily on Microsoft SQL Server. You should make sure that you measure the Windows 2000 and Windows Server 2003 counters discussed in earlier sections. However, you should also monitor the SQL Server counters on the computer that is running SQL Server.

Use the performance counters listed in the following table to help determine performance problems with Microsoft SQL Server:


 

Object | Counter | Comments
SQLServer:Access Methods | Full Scans/sec | When the number of full scans is significantly higher than a baseline comparison, it may indicate that index statistics are out of date.
SQLServer:Buffer Manager | Buffer Cache Hit Ratio | If this value is less than 80%, the system may need additional memory for SQL Server. Ideally, this value is at or near 100%; when it is, the server is operating at optimal efficiency (as far as disk I/O is concerned).
SQLServer:Databases | Log Growths (run against the application database instance) | Log files that grow during times of heavy system usage will result in poor performance.
SQLServer:Databases | Percent Log Used (run against the application database instance) | If the percentage of log space used approaches 100%, transaction log backups should be performed more frequently, or the transaction log file sizes should be increased.
SQLServer:Databases | Transactions/sec (run against the application database instance) | The number of transactions started for the database.
SQLServer:Locks | Lock Waits/sec | Although blocking locks are unavoidable, a value significantly higher than a baseline comparison that persists for a long time indicates a performance penalty caused by blocking locks. Blocking locks occur when read operations block write operations, writes block reads, or writes block other writes.
SQLServer:Locks | Number of Deadlocks/sec | Although deadlocks are unavoidable, a value significantly higher than a baseline comparison that persists for a long time indicates a performance bottleneck. Deadlocks occur when operations each want a resource that the other has locked. If both operations involve writes, SQL Server must select one of the transactions and roll it back so that the other transaction can continue. The undo and redo operations are the cause of the less-than-optimal performance.
SQLServer:Memory Manager | Memory Grants Pending | The current number of processes waiting for a workspace memory grant. This counter, together with Buffer Cache Hit Ratio, can confirm a memory resource bottleneck.
 

On the computer that is running SQL Server, you should also consider using alerts. This will enable you to send notifications to an administrator if a particular state is reached on that computer.

Optimizing Deletion Service Performance

The Microsoft Dynamics CRM Deletion Service deletes records from the Microsoft Dynamics CRM database. By default, this service runs every 4 hours, which can have an adverse effect on performance. To reduce this risk during the day, you can disable the service and schedule it to run when users are not heavily using the system, as described in the following steps.

To schedule the deletion service as a Windows job:

1.    Click Start, click Run, type Services.msc, and then press Enter.

2.    In the Services console, find the Microsoft Dynamics CRM Deletion Service.

3.    Right-click Microsoft Dynamics CRM Deletion Service and then click Properties.

4.    Change the selection in the Startup type list to “Disabled”, and then click OK.

Note:        Now that the service is disabled, it must be run manually by using the Windows task scheduler.

5.    On the Microsoft Dynamics CRM server, click Start, and then open Control Panel.

6.    In Control Panel, double-click Scheduled Tasks.

7.    Double-click Add Scheduled Task.

8.    In the Scheduled Task Wizard, locate your CRMDeletionService.exe file (usually located in C:\program files\Microsoft Dynamics CRM\Server\Bin).

9.    Select a schedule that fits your organization’s usage patterns. For example, run the task daily at 1:00 A.M.

10. After you configure the schedule for the task, specify in the wizard the user account that will run the task. This should be an administrative user who also has administrative access to Microsoft SQL Server.

11. On the final screen of the Scheduled Task Wizard, select the Open advanced properties for this task when I click Finish check box.

12. In the Run box, add the RunOnce argument to the task.

For example:

·         If your Run command was:

"C:\Program Files\Microsoft Dynamics CRM\Server\bin\CrmDeletionService.exe"

·         It should now read:

"C:\Program Files\Microsoft Dynamics CRM\Server\bin\CrmDeletionService.exe" -runonce

13. Repeat these steps to create another job that creates indexes for new entities. In step 12, use the -runindexonce argument instead, as shown in the following example.
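
For example, the Run command for this second task would look like the following (using the same default installation path shown in the previous example):

"C:\Program Files\Microsoft Dynamics CRM\Server\bin\CrmDeletionService.exe" -runindexonce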

Load Balancing

You can install multiple Microsoft Dynamics CRM 3.0 servers to balance the processing load across several servers. With multiple servers, you can also implement departmental Microsoft Dynamics CRM systems that still have access to the same Microsoft Dynamics CRM database.

Network Load Balancing (NLB) technology is designed to spread the load between the different nodes of a cluster. With NLB, administrators can add another server to the cluster as traffic increases (referred to as “scaling out”).

For more information about how to implement Microsoft Dynamics CRM 3.0 in an NLB environment, refer to the following article on Microsoft.com:

·         http://www.microsoft.com/dynamics/crm/using/deploy/clusteringmscrmservers.mspx.

Optimizing the Microsoft .NET Framework

To tune the .NET Framework, you must tune the common language runtime (CLR). Tuning the CLR affects all managed code, regardless of the implementation technology. Next, you tune the relevant .NET Framework technology, depending on the nature of the application. For example, tuning the relevant technology might include tuning ASP.NET-connected applications or Web services, Enterprise Services, and ADO.NET code. You can also use performance counters to identify CLR bottlenecks. The following sections address CLR tuning and how to use counters to identify bottlenecks.

Tuning the Common Language Runtime

Common language runtime (CLR) tuning is mostly achieved by designing and then optimizing your code to enable the CLR to perform its tasks efficiently. Your design must enable efficient garbage collection (for example, by using the Dispose pattern and considering object lifetime correctly).

The main CLR-related bottlenecks are caused by contention for resources, inefficient resource cleanup, misuse of the thread pool, and resource leaks. For more information about how to optimize your code for efficient CLR processing, see “Improving Managed Code Performance”:

·         http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnpag/html/scalenetchapt05.asp

Use the performance counters shown in the following table to help identify CLR bottlenecks.

 

Area | Counters
Memory | Process\Private Bytes; .NET CLR Memory\% Time in GC; .NET CLR Memory\# Bytes in all Heaps; .NET CLR Memory\# Gen 0 Collections; .NET CLR Memory\# Gen 1 Collections; .NET CLR Memory\# Gen 2 Collections; .NET CLR Memory\# of Pinned Objects; .NET CLR Memory\Large Object Heap size
Working Set | Process\Working Set
Exceptions | .NET CLR Exceptions\# of Exceps Thrown /sec
Contention | .NET CLR LocksAndThreads\Contention Rate / sec; .NET CLR LocksAndThreads\Current Queue Length
Threading | .NET CLR LocksAndThreads\# of current physical threads; Thread\% Processor Time; Thread\Context Switches/sec; Thread\Thread State
Code Access Security | .NET CLR Security\Total Runtime Checks; .NET CLR Security\Stack Walk Depth

For more information about how to measure these counters, their thresholds, and their significance, see Chapter 15, “Measuring .NET Application Performance”:

·         http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnpag/html/scalenetchapt15.asp

Identifying Common Bottlenecks

The following list describes several common bottlenecks that occur in applications written in managed code and explains how to identify them by using performance counters.

·         Excessive memory consumption: Excessive memory consumption can result from poor managed or unmanaged memory management. To identify this symptom, watch the following performance counters:

·         Process\Private Bytes

·         .NET CLR Memory\# Bytes in all Heaps

·         Process\Working Set

·         .NET CLR Memory\Large Object Heap size

An increase in Private Bytes when the # of Bytes in all Heaps counter remains the same indicates unmanaged memory consumption. An increase in both counters indicates managed memory consumption.

·         Large working set size. The working set is the set of memory pages currently loaded in RAM. This is measured by Process\Working Set. A high value might indicate that you have loaded several assemblies. Unlike other counters, Process\Working Set has no specific threshold value to watch, although a high or fluctuating value can indicate a memory shortage. A high or fluctuating value accompanied by a high rate of page faults clearly indicates that the server has insufficient memory.

·         Fragmented large object heap. Objects larger than 83 KB are allocated in the large object heap. This is measured by .NET CLR Memory\Large Object Heap size. Frequently, these objects are buffers (large strings, byte arrays, and so on) used for I/O operations (for example, creating a BinaryReader to read an uploaded image). Such large allocations can fragment the large object heap. You should consider recycling those buffers to avoid fragmentation.

·         High CPU usage. High CPU usage is usually caused by poorly written managed code, such as code that does the following:

·         Causes excessive garbage collection. This is measured by % Time in GC.

·         Throws many exceptions. This is measured by .NET CLR Exceptions\# of Exceps Thrown /sec.

·         Creates many threads. This causes the CPU to spend large amounts of time switching between threads instead of performing real work. This is measured by Thread\Context Switches/sec.

·         Thread contention: Thread contention occurs when multiple threads try to access a shared resource. To identify this symptom, watch the following performance counters:

·         .NET CLR LocksAndThreads\Contention Rate / sec

·         .NET CLR LocksAndThreads\Total # of Contentions

To reduce the contention rate, identify and fix the code that accesses shared resources or uses synchronization mechanisms.

For more information, see the “Improving .NET Application Performance and Scalability” article in the .NET Performance section of the MSDN Library:

·         http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnpag/html/scalenet.asp.

Optimizing Performance in Wide Area Network Environments

Optimizing the performance of Microsoft Dynamics CRM in wide area network (WAN) or low-bandwidth/high-latency environments requires special considerations.

No files, including graphics, icons, and static IIS content, are cached on the client if Microsoft Dynamics CRM is installed on a Web site that uses SSL. If SSL is used, you should consider other ways of optimizing WAN performance, such as having users access Microsoft Dynamics CRM through a Web browser session on a Terminal Server that is located in the same local area network as the Microsoft Dynamics CRM server, to eliminate latency delays between the browser and the server.

Configuring Content Expiration

Microsoft Dynamics CRM uses content expiration to control the Web object cache for the clients that access Microsoft Dynamics CRM. The default content expiration setting is 3 days. Customers who use Microsoft Dynamics CRM over a slower link (50-200 ms latency) may benefit from increasing the content expiration value to 15 days. This enables a client computer that uses the Microsoft Dynamics CRM Web application or the Microsoft Dynamics CRM client for Outlook to download items into its temporary Internet files and not refresh them for 15 days. This configuration change has the most effect when it is combined with the client-side Web browser settings configuration.

To configure content expiration:

1.    Open Internet Information Services (IIS) Manager from the Administrative Tools area on the Microsoft Dynamics CRM Server.

2.    Right-click Microsoft Dynamics CRM v3.0 Web Site, and then click Properties.

3.    Open the HTTP Headers tab.

4.    Change the content expiration “expires after” selection to 15 days, and then click OK.

This change takes effect on client systems after their currently cached content expires (within 72 hours, based on the previous 3-day setting).

Configuring Anonymous Access Settings

In the default configuration of Microsoft Dynamics CRM, all items within the Microsoft Dynamics CRM 3.0 Web site are protected by Windows integrated authentication. For security reasons, all static content, graphics, and dynamic content require authentication.

Authenticated elements require an additional request from the client to the server to access a particular object. An anonymous request is sent first; if the object requires authentication, the server returns an HTTP 401 response to request credentials, and after the client authenticates, the server returns an HTTP 200 success response. Turning off authentication for static elements can improve performance by reducing the number of requests and responses between the client and server, especially in a WAN environment in which there is low network bandwidth or high network latency between the Microsoft Dynamics CRM server and the client workstation.

If you decide that some static content and graphics do not require this level of security, you can set their anonymous access flag, and allow client systems to download the content without any authentication.

Important:    In most cases, turning off authentication for static content does not pose a security risk. However, you should carefully consider the security implications and potential business impact of each object for which you are considering turning off authentication.

To turn off security authentication for an object:

1.    Open IIS Manager from the Administrative Tools area on the Microsoft Dynamics CRM Server.

2.    Expand Microsoft Dynamics CRM Web Site and find the static content file for which you want to disable authentication.

Caution:  To prevent unknown consequences for your business and for your Microsoft Dynamics CRM users, do not disable authentication for any ASPX files or dynamic content files.

3.    Right-click the object and then click Properties.

4.    Click the File Security tab.

5.    In the Authentication and Access Control area, click Edit.

6.    In the Authentication Methods dialog box, select the Enable anonymous access check box, and then click OK.

7.    In the object properties window, click OK to return to IIS Manager.

The object no longer requires authentication when it is downloaded.

Modifying the 401.1 and 401.2 Error Pages

The size of the default 401.1 and 401.2 error pages can have an adverse effect on Microsoft Dynamics CRM performance. You can replace these default pages with much smaller files to improve performance.

To replace the default 401.1 and 401.2 error pages:

1.    On the server that is running Microsoft Dynamics CRM and IIS, start Notepad.

2.    Enter the following text:

<html><body>ERROR: 401.1</body></html>

3.    On the File menu, click Save As.

4.    In the Save As dialog box open the following path:

C:\Windows\Help\iisHelp\common\

5.    From the Save as type list, select “All Files,” and in the File name box, type 401-1_custom.htm.

6.    Click Save.

7.    Start Notepad again.

8.    Enter the following text:

<html><body>ERROR: 401.2</body></html>

9.    On the File menu, click Save As.

10. In the Save As dialog box open the following path:

C:\Windows\Help\iisHelp\common\

11. From the Save as type list, select “All Files,” and in the File name box, type 401-2_custom.htm.

12. Click Save.

13. Open IIS Manager from the Administrative Tools area on the Microsoft Dynamics CRM Server.

14. Right-click Microsoft Dynamics CRM Web Site and then click Properties.

15. Click the Custom Errors tab and change the 401.1 and 401.2 error pages to the custom pages that you created earlier in this procedure.

Software Updates for WAN Environments

This section includes information about how to improve Microsoft Dynamics CRM 3.0 performance in WAN environments by applying hotfixes, performance enhancement updates, and security updates that are currently available.

All available Microsoft Dynamics CRM 3.0 hotfixes are listed in the following Knowledge Base article, “Microsoft Dynamics CRM 3.0 updates and hotfixes”:

·         http://support.microsoft.com/kb/908951

The hotfixes specifically related to WAN environments are listed in the following table.

Hotfix Article Title | Knowledge Base Link
You experience slow performance when you try to load forms in Microsoft Dynamics CRM 3.0 | http://support.microsoft.com/kb/927854
An entities grid is populated slower than you expect when you move to a custom entity in Microsoft Dynamics CRM 3.0 | http://support.microsoft.com/default.aspx?scid=kb;EN-US;913462

 
