Setting up load balancing on a SharePoint farm running on Windows Server 2008

1. Install Network Load Balancing Feature on each Web Front End

On each front end in the farm, within Server Manager, add the NLB feature:

Click Install and wait for the installation to complete.

2. Add a New Cluster

From the Start menu, under Administrative Tools, click Network Load Balancing Manager:

Right-click Network Load Balancing Clusters and choose New Cluster.

Type the IP address of one of the web front ends in the farm to serve as the first host in the cluster.

Click Connect.

Click Next.

Leave the defaults and click Next again:

3. Set Cluster IP Address

This IP address is the dedicated address for the cluster and is what DNS will point to, so that requests are load balanced across the front ends. In the Cluster IP Addresses box, click Add and type an available dedicated IP address and subnet mask:

Optionally, you can set up multiple cluster IPs for fault-tolerance purposes, but in most cases you'll have just one:

Click Next.

4. Specify Cluster Parameters

Select the Multicast operation mode, and click Next:

5. Specify Port Rules

Click Edit on the default port rule:

Deselect the “All” checkbox, and choose Network affinity for the filtering mode:

Click OK.

Click Finish.

After you click Finish, NLB Manager will show that it has begun the configuration changes. If you're connected to the server over Remote Desktop, you'll temporarily lose your connection while the network reconfigures:

6. Add Any Additional Hosts to the Cluster

Now that the cluster is ready, you can add the remaining hosts/web front ends. Right-click the cluster, click Add Host To Cluster, and type the IP address of another web front end in the farm. Repeat until they're all added.


Prepare a Windows Cluster for SharePoint

This part demonstrates how to configure a Windows cluster across two servers, to be used as a SQL Server cluster.

Before you start

· You need two network adapters on each node: one public and one private (for heartbeat communication).

· Shared storage (such as SAN storage) should be present and connected to both cluster nodes, with at least:

  • Quorum disk (5 GB)
  • DTC disk (1 GB)
  • SQL data file and log file disk(s)

· A domain user account (SPSadmin): add the SPSadmin user as a local administrator on both servers.

· Prepare a reserved static IP address and cluster name to be used.

· Prepare a reserved static IP address and DTC name to be used.

Windows Cluster Configuration

1. Install the latest Windows updates on all server nodes.

2. Install the Application Server role and the IIS role on both SQL database server nodes.

3. Install the Failover Clustering feature on both SQL database server nodes.

4. Provide a Cluster Name and Cluster IP for the database nodes:

Note: make sure the public network is used here, not the private (heartbeat) network.

5. Below is the server information:

6. Cluster disks are configured as follows:

7. Configure DTC as a clustered service; this is a prerequisite for the SQL Server cluster installation.

8. DTC cluster configuration

9. Assign the DTC a cluster disk

10. Create a SQL group, which is a logical group to contain all SQL resources:

Parallel Query Processing

  • SQL Server provides parallel queries to optimize query execution and index operations for computers that have more than one microprocessor (CPU). Because SQL Server can perform a query or index operation in parallel by using several operating system threads, the operation can be completed quickly and efficiently.
  • During query optimization, SQL Server looks for queries or index operations that might benefit from parallel execution.

  • For these queries, SQL Server inserts exchange operators into the query execution plan to prepare the query for parallel execution. 

  • An exchange operator is an operator in a query execution plan that provides process management, data redistribution, and flow control. The exchange operator includes the Distribute Streams, Repartition Streams, and Gather Streams logical operators as subtypes, one or more of which can appear in the Showplan output of a query plan for a parallel query.

  • After exchange operators are inserted, the result is a parallel-query execution plan.

  • A parallel-query execution plan can use more than one thread. A serial execution plan, used by a nonparallel query, uses only one thread for its execution. The actual number of threads used by a parallel query is determined at query plan execution initialization and is determined by the complexity of the plan and the degree of parallelism.

  • Degree of parallelism determines the maximum number of CPUs that are being used; it does not mean the number of threads that are being used. The degree of parallelism value is set at the server level and can be modified by using the sp_configure system stored procedure.

  • You can override this value for individual query or index statements by specifying the MAXDOP query hint or MAXDOP index option.
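
As a quick illustration, here is what those overrides look like in T-SQL (the table and index names are hypothetical):

    -- Cap this one query at 4 CPUs, regardless of the server-wide setting
    SELECT ProductID, COUNT(*) AS OrderCount
    FROM Sales.OrderDetails
    GROUP BY ProductID
    OPTION (MAXDOP 4);

    -- Cap a single index rebuild at 2 CPUs
    ALTER INDEX IX_OrderDetails_ProductID ON Sales.OrderDetails
    REBUILD WITH (MAXDOP = 2);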

The SQL Server query optimizer does not use a parallel execution plan for a query if any one of the following conditions is true:

  • The serial execution cost of the query is not high enough to consider an alternative, parallel execution plan.
  • A serial execution plan is considered faster than any possible parallel execution plan for the particular query.
  • The query contains scalar or relational operators that cannot be run in parallel. Certain operators can cause a section of the query plan to run in serial mode, or the whole plan to run in serial mode.

To configure the max degree of parallelism option

  1. In Object Explorer, right-click a server and select Properties.

  2. Click the Advanced node.

  3. In the Max Degree of Parallelism box, select the maximum number of processors to use in parallel plan execution.
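
These GUI steps wrap the sp_configure procedure mentioned earlier; the equivalent T-SQL is roughly:

    -- 'max degree of parallelism' is an advanced option, so expose it first
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- Limit parallel plans to 4 CPUs (0 means use all available CPUs)
    EXEC sp_configure 'max degree of parallelism', 4;
    RECONFIGURE;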

Storage-Related Performance Issues in SharePoint

Here are five storage-related issues in SharePoint that can kill performance, with tips on how to resolve or prevent them.

Problem #1:

Unstructured data takeover. The primary document types stored in SharePoint are PDFs, Microsoft Word and PowerPoint files, and large Excel spreadsheets. These documents are usually well over a megabyte.

SharePoint saves all file contents in SQL Server as unstructured data, otherwise known as Binary Large Objects (BLOBs). Having many BLOBs in SQL Server causes several issues. Not only do they take up lots of storage space, they also use server resources.

Because a BLOB is unstructured data, any time a user accesses a file in SharePoint, the BLOB has to be reassembled before it can be delivered back to the user – taking extra processing power and time.

Solution:

Move BLOBs out of SQL Server and into a secondary storage location – specifically, a higher density storage array that is reasonably fast, like a file share or network attached storage (NAS).

Problem #2:

An avalanche of large media. Organizations today use a variety of large files such as videos, images, and PowerPoint presentations, but storing them in SharePoint can lead to performance issues because SQL Server isn’t optimized to house them.

Media files, especially, cause issues for users because they are so large and need to be retrieved fairly quickly. For example, a video file may have to stream at a certain rate, and applications won’t return control until the file is fully loaded. As more of this type of content is stored in SharePoint, it amplifies the likelihood that users will experience browser timeout, slow Web server performance, and upload and recall failures.

Solution:

For organizations that make SharePoint “the place” for all content large and small, use third-party tools specifically designed to facilitate the externalization of large media storage and organization. This will encourage user adoption and still allow you to maintain the performance that users demand.

Problem #3:

Old and unused files hogging valuable SQL Server storage. As data ages, it usually loses its value and usefulness, so it's not uncommon for the majority of SharePoint content to go completely unused for long periods of time. In fact, 60 to 80 percent of content in SharePoint is either unused or used only sparingly over its lifespan. Many organizations waste space by applying the same storage treatment to this old, unused data as they do to new, active content, quickly degrading both SQL Server and SharePoint performance.

Solution:

Move less active and relevant SharePoint data to less expensive storage, while still keeping it available to end users via SharePoint. In the interface, it helps to move these older files to different parts of the information architecture, to minimize navigational and search clutter. Similarly, we can “unclutter” the storage back end.

A third-party tool that provides tiered storage will enable you to easily move each piece of SharePoint data through its life cycle to various repositories, such as direct attached storage, a file share, or even the cloud. With tiered storage, you can keep your most active and relevant data close at hand, while moving the rest to less expensive and possibly slower storage, based on the particular needs of your data set.

Problem #4:

Lack of scalability. As SharePoint content grows, its supporting hardware can become underpowered if growth rates weren't accurately forecast. Organizations unable to invest in new hardware need alternatives that let them follow best practices and keep SharePoint performance optimal. Microsoft guidance suggests limiting content databases to 200 GB maximum unless disk subsystems are tuned for high input/output performance. In addition, huge content databases are cumbersome for backup and restore operations.

Solution:

Offload BLOBs to the file system – thus reducing the size of the content database. Again, tiered storage will give you maximum flexibility, so as SharePoint data grows, you can direct it to the proper storage location, either for pure long-term storage or zippy immediate use.

It also lets you spread the storage load across a wider pool of storage devices. This approach keeps SharePoint performance high and preserves your investment in existing hardware by prolonging its useful life in lieu of buying expensive hardware. It’s simpler to invest in optimizing a smaller SQL Server storage core than a full multi-terabyte storage footprint, including archives.

Problem #5:

Not leveraging Microsoft’s data externalization features. Microsoft’s recommended externalization options are Remote BLOB Storage (RBS), a SQL Server API that enables SharePoint 2010 to store BLOBs in locations outside the content databases, and External BLOB Storage (EBS), a SharePoint API introduced in SharePoint 2007 SP1 and continued in SharePoint 2010.

Many organizations haven't yet explored these externalization capabilities, however, and are missing out on significant storage and related performance benefits. That said, native EBS and RBS require frequent T-SQL command-line administration and lack flexibility.
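
To give a feel for that T-SQL administration, here is a minimal sketch of provisioning the FILESTREAM-based RBS provider for a content database, assuming a content database named WSS_Content and a local BLOB store path (both placeholders; FILESTREAM must also be enabled on the SQL Server service itself):

    -- Allow FILESTREAM access at the instance level
    EXEC sp_configure 'filestream access level', 2;
    RECONFIGURE;

    -- Prepare the content database for the FILESTREAM RBS provider
    USE [WSS_Content];
    CREATE MASTER KEY ENCRYPTION BY PASSWORD = N'UseAStrongPasswordHere!';
    ALTER DATABASE [WSS_Content]
        ADD FILEGROUP RBSFilestreamProvider CONTAINS FILESTREAM;
    ALTER DATABASE [WSS_Content]
        ADD FILE (NAME = RBSFilestreamFile, FILENAME = 'C:\BlobStore')
        TO FILEGROUP RBSFilestreamProvider;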

Solution:

Use a third-party tool that works with Microsoft's supported APIs (RBS and EBS) and gives administrators an intuitive interface through SharePoint's native Central Administration to set the scope, rules, and location for data externalization.

In each of these five problem areas, you can see that offloading the SharePoint data to more efficient external storage is clearly the answer. Microsoft’s native options, EBS and RBS, only add to the complexity of managing SharePoint storage, however, so the best option to improve SharePoint performance and reduce costs is to select a third-party tool that integrates cleanly into SharePoint’s Central Administration. This would enable administrators to take advantage of EBS and RBS, choosing the data they want to externalize by setting the scope and rules for externalization and selecting where they want the data to be stored.

 

Improving SharePoint performance using SQL Server settings

SharePoint performance is a recurring problem and preoccupation. As Database Administrators, we have to deal with SharePoint when configuring SQL Server databases.

In this article, I propose a list of SQL Server best-practice settings aimed at reducing SharePoint performance issues.

Autogrowth

Do not keep the default value, which is 1 MB. A simple example illustrates why this is a bad idea.

When a 5 MB document is uploaded, five autogrowth events are triggered; that is, five separate space allocations, each of which slows your system.

Moreover, the uploaded document will be fragmented across these small growth increments, which decreases your performance a second time.

To avoid performance issues and reduce data file fragmentation, you should set the autogrowth value to a fixed number of megabytes.

My recommendation is 1024 MB for data files and 256 MB for log files. But keep in mind, this is a global recommendation. In fact, the bigger the database, the bigger the growth increment should be.
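
For example, moving an existing content database off the default is a one-time ALTER DATABASE per file (the logical file names below are placeholders; check sys.master_files for yours):

    -- Grow the data file in fixed 1024 MB increments instead of the 1 MB default
    ALTER DATABASE [WSS_Content]
        MODIFY FILE (NAME = N'WSS_Content', FILEGROWTH = 1024MB);

    -- Grow the log file in fixed 256 MB increments
    ALTER DATABASE [WSS_Content]
        MODIFY FILE (NAME = N'WSS_Content_log', FILEGROWTH = 256MB);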

SQL Server disk cluster size

The default NTFS allocation unit size is 4 KB, but it is nearly the worst value you can choose for a SQL Server data volume!

Globally, 64 KB is a safe value. The disk then reads 64 KB at a time, matching a SQL Server extent (eight 8 KB pages), and can deliver larger chunks of data to the database.

TempDB Optimization

First, the TempDB recovery model should be simple (it is by default). This model automatically reclaims log space to keep space requirements small.

Also, you should put your TempDB on the fastest disks you have, because TempDB is heavily used by SharePoint. Do not let SQL Server use this disk for any other needs, except TempDB utilization!

Furthermore, each TempDB file should be 25% larger than the largest content database. Not many DBAs realize how heavily TempDB is used by SharePoint, and to what extent it can grow!
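
A sketch of these TempDB adjustments (the sizes are placeholders; derive yours from your largest content database):

    -- TempDB already uses the simple recovery model by default; this makes it explicit
    ALTER DATABASE [tempdb] SET RECOVERY SIMPLE;

    -- Pre-size the TempDB files on their dedicated fast disk
    ALTER DATABASE [tempdb]
        MODIFY FILE (NAME = N'tempdev', SIZE = 25600MB, FILEGROWTH = 1024MB);
    ALTER DATABASE [tempdb]
        MODIFY FILE (NAME = N'templog', SIZE = 4096MB, FILEGROWTH = 256MB);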

Index Fragmentation

The WSS_Content database, for example, is used to store site collections as well as lists, and its tables are shared. Therefore, indexes are very important!

So do not forget to manage the fragmentation of your databases.

My recommendation is to perform a reorganize when fragmentation is between 10% and 30%, and an index rebuild when fragmentation is above 30%.

Pay particular attention to indexes with more than 1,000 pages!
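
A minimal sketch of measuring fragmentation and applying those thresholds (the table and index names in the maintenance statements are hypothetical):

    -- List fragmentation for every index in the current database
    SELECT OBJECT_NAME(s.object_id) AS table_name,
           i.name AS index_name,
           s.avg_fragmentation_in_percent,
           s.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS s
    JOIN sys.indexes AS i
        ON i.object_id = s.object_id AND i.index_id = s.index_id
    WHERE s.page_count > 1000;  -- focus on indexes worth maintaining

    -- Fragmentation between 10% and 30%: reorganize
    ALTER INDEX IX_SomeIndex ON dbo.SomeTable REORGANIZE;

    -- Fragmentation above 30%: rebuild
    ALTER INDEX IX_SomeIndex ON dbo.SomeTable REBUILD;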

Statistics

Do not enable Auto-Create Statistics on a SQL Server instance that supports SharePoint Server! Let SharePoint Server configure the required settings on its own.

Auto-Create Statistics can significantly change the execution plan of a query from one instance of SQL Server to another.

Likewise, do not enable Auto-Update Statistics; use SharePoint's Auto-Update capability instead.
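
To verify and correct these settings on a content database (the database name is a placeholder):

    -- Check the current statistics settings
    SELECT name, is_auto_create_stats_on, is_auto_update_stats_on
    FROM sys.databases
    WHERE name = N'WSS_Content';

    -- Turn both off and leave statistics management to SharePoint
    ALTER DATABASE [WSS_Content] SET AUTO_CREATE_STATISTICS OFF;
    ALTER DATABASE [WSS_Content] SET AUTO_UPDATE_STATISTICS OFF;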

SQL Server Memory Allocation

The default values of SQL Server for memory allocation are 0 MB for Minimum server memory and 2147483647 MB for Maximum server memory.

The default value of the Maximum server memory is not optimized at all!

You should set a custom value depending on the total amount of physical memory, the number of processors, and the number of cores.

To calculate your SQL Server max memory, I suggest you read this article.
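
Whatever value your calculation produces, it is applied with sp_configure; a sketch using a placeholder of 28 GB for a 32 GB server:

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- Leave roughly 4 GB to the operating system on a 32 GB server (placeholder value)
    EXEC sp_configure 'max server memory (MB)', 28672;
    RECONFIGURE;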

Recycle Bin

Be aware that items in the Recycle Bin may affect performance.

Moreover, after a certain number of days, or after deletion from the first stage, these items are moved to a second-stage Recycle Bin that may also affect your performance.

As a result, you have to manage your Recycle Bin according to your needs, to ensure its size does not grow out of control.

MAXDOP

The default value of MAXDOP is 0, but for better performance you should make sure that a single SQL Server process serves each request.

Therefore, you must set MAXDOP to 1.
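
Applying the same sp_configure approach shown earlier, the SharePoint-specific setting is:

    -- One SQL Server scheduler per request: required for SharePoint databases
    EXEC sp_configure 'max degree of parallelism', 1;
    RECONFIGURE;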

Fill Factor

The default value is 0, which is equivalent to 100. It means that you do not reserve any space for index expansion.

But when a new row must be added to a full index page, the Database Engine performs a reorganization called a page split.

Page splits take time to perform and can cause fragmentation, increasing I/O operations.

I recommend setting a fill factor value of 80. It means that 20% of each leaf-level page will be left empty.

Therefore, you can support growth and reduce fragmentation.
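
The server-wide default can be set with sp_configure, or per index at rebuild time (the index name is hypothetical):

    -- Server-wide default for newly created or rebuilt indexes
    -- (takes effect after the instance restarts)
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'fill factor (%)', 80;
    RECONFIGURE;

    -- Or per index: leave 20% free space in each leaf-level page
    ALTER INDEX IX_SomeIndex ON dbo.SomeTable
    REBUILD WITH (FILLFACTOR = 80);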

Instant File initialization

This feature, when enabled, allows SQL Server to initialize database files instantly, without physically zeroing out each and every 8 KB page in the file. Note that it applies to data files only; log files are always zero-initialized.

Therefore, depending on the size of your files, you can save a lot of time.
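
Instant file initialization is granted through Windows, by giving the SQL Server service account the "Perform volume maintenance tasks" privilege, not through T-SQL. On recent builds (SQL Server 2012 SP4, 2014 SP2, 2016 SP1 and later), you can at least verify it from T-SQL:

    -- Shows whether the Database Engine can use instant file initialization
    SELECT servicename, instant_file_initialization_enabled
    FROM sys.dm_server_services
    WHERE servicename LIKE N'SQL Server (%';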

Conclusion

The default settings of a content database in SQL Server are pretty bad and far from what we really need. You should always opt for a pre-allocated size strategy rather than relying on autogrowth.

Monitoring your databases for space and growth to avoid bad surprises is very important.

Also, do not forget to adjust your model database's size allocation rules.

And if you do not want to suffer from bad performance, do not use the Auto-Shrink capability.
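
As a closing sketch, pre-sizing the model database (so new databases inherit sensible defaults) and disabling Auto-Shrink might look like this (the sizes and the content database name are placeholders):

    -- New databases inherit model's size and growth settings
    ALTER DATABASE [model]
        MODIFY FILE (NAME = N'modeldev', SIZE = 1024MB, FILEGROWTH = 1024MB);
    ALTER DATABASE [model]
        MODIFY FILE (NAME = N'modellog', SIZE = 256MB, FILEGROWTH = 256MB);

    -- Never let SQL Server shrink a content database automatically
    ALTER DATABASE [WSS_Content] SET AUTO_SHRINK OFF;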