Storage-Related Performance Issues in SharePoint

Here are five storage-related issues in SharePoint that can kill performance, with tips on how to resolve or prevent them.

Problem #1:

Unstructured data takeover. The primary document types stored in SharePoint are PDFs, Microsoft Word and PowerPoint files, and large Excel spreadsheets. These documents are usually well over a megabyte.

SharePoint saves all file contents in SQL Server as unstructured data, otherwise known as Binary Large Objects (BLOBs). Having many BLOBs in SQL Server causes several issues. Not only do they take up lots of storage space, they also use server resources.

Because a BLOB is unstructured data, any time a user accesses a file in SharePoint, the BLOB has to be reassembled before it can be delivered back to the user – taking extra processing power and time.
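To get a feel for how much BLOB weight your farm is carrying, you can list content database sizes from the SharePoint 2013 Management Shell. A minimal sketch (the output shape is my own; Get-SPContentDatabase and its DiskSizeRequired property are standard, but run it on a farm server):

```powershell
# Sketch: list every content database in the farm by size,
# largest first. Run from the SharePoint 2013 Management Shell.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

Get-SPContentDatabase | ForEach-Object {
    [PSCustomObject]@{
        Name   = $_.Name
        SizeGB = [math]::Round($_.DiskSizeRequired / 1GB, 2)  # bytes -> GB
        Sites  = $_.CurrentSiteCount
    }
} | Sort-Object SizeGB -Descending
```

Databases that dominate this list are usually the ones where BLOB externalization pays off first.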


Solution:

Move BLOBs out of SQL Server and into a secondary storage location – specifically, a higher-density storage array that is reasonably fast, like a file share or network-attached storage (NAS).

Problem #2:

An avalanche of large media. Organizations today use a variety of large files such as videos, images, and PowerPoint presentations, but storing them in SharePoint can lead to performance issues because SQL Server isn’t optimized to house them.

Media files, especially, cause issues for users because they are so large and need to be retrieved fairly quickly. For example, a video file may have to stream at a certain rate, and applications won’t return control until the file is fully loaded. As more of this type of content is stored in SharePoint, the likelihood increases that users will experience browser timeouts, slow Web server performance, and upload and recall failures.


Solution:

For organizations that make SharePoint “the place” for all content large and small, use third-party tools specifically designed to facilitate the externalization of large media storage and organization. This will encourage user adoption and still allow you to maintain the performance that users demand.

Problem #3:

Old and unused files hogging valuable SQL Server storage. As data ages, it usually loses its value and usefulness, so it’s not uncommon for the majority of SharePoint content to go completely unused for long periods of time. In fact, 60 to 80 percent of content in SharePoint typically goes unused, or is used only sparingly, over its lifespan. Many organizations waste space by applying the same storage treatment to this old, unused data as they do to new, active content, quickly degrading both SQL Server and SharePoint performance.


Solution:

Move less active and relevant SharePoint data to less expensive storage, while still keeping it available to end users via SharePoint. In the interface, it helps to move these older files to different parts of the information architecture, to minimize navigational and search clutter. Similarly, we can “unclutter” the storage back end.

A third-party tool that provides tiered storage will enable you to easily move each piece of SharePoint data through its life cycle to various repositories, such as direct attached storage, a file share, or even the cloud. With tiered storage, you can keep your most active and relevant data close at hand, while moving the rest to less expensive and possibly slower storage, based on the particular needs of your data set.
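As a rough way to see how much of a site’s content has gone cold before choosing tiers, a sketch like the following walks document libraries and counts items untouched for two years. The URL and cutoff are placeholders; the loop uses the standard server object model, and enumerating items this way is slow on very large libraries:

```powershell
# Sketch: count stale items per document library (placeholder URL and cutoff).
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$cutoff = (Get-Date).AddYears(-2)
$site   = Get-SPSite http://yoursite   # replace with your site collection URL

foreach ($web in $site.AllWebs) {
    foreach ($list in $web.Lists | Where-Object { $_.BaseType -eq 'DocumentLibrary' }) {
        # Note: iterating SPListItems is expensive; fine for a one-off report.
        $stale = @($list.Items | Where-Object { $_['Modified'] -lt $cutoff }).Count
        "{0} / {1}: {2} of {3} items untouched for 2+ years" -f `
            $web.Url, $list.Title, $stale, $list.ItemCount
    }
    $web.Dispose()
}
$site.Dispose()
```

Libraries where most items fall past the cutoff are natural candidates for the cheaper tiers.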

Problem #4:

Lack of scalability. As SharePoint content grows, its supporting hardware can become underpowered if growth rates weren’t accurately forecasted. Organizations unable to invest in new hardware need to find alternatives that enable them to use best practices and keep SharePoint performance optimal. Microsoft guidance suggests limiting content databases to 200GB maximum unless disk subsystems are tuned for high input/output performance. In addition, huge content databases are cumbersome for backup and restore operations.


Solution:

Offload BLOBs to the file system – thus reducing the size of the content database. Again, tiered storage will give you maximum flexibility, so as SharePoint data grows, you can direct it to the proper storage location, either for pure long-term storage or zippy immediate use.

It also lets you spread the storage load across a wider pool of storage devices. This approach keeps SharePoint performance high and preserves your investment in existing hardware by prolonging its useful life instead of forcing you to buy expensive new hardware. It’s simpler to invest in optimizing a smaller SQL Server storage core than a full multi-terabyte storage footprint, including archives.

Problem #5:

Not leveraging Microsoft’s data externalization features. Microsoft’s recommended externalization options are Remote BLOB Storage (RBS), a SQL Server API that enables SharePoint 2010 to store BLOBs in locations outside the content databases, and External BLOB Storage (EBS), a SharePoint API introduced in SharePoint 2007 SP1 and continued in SharePoint 2010.

Many organizations haven’t yet explored these externalization capabilities, however, and are missing out on significant storage and related performance benefits. That said, native EBS and RBS require frequent T-SQL command-line administration and lack flexibility.


Solution:

Use a third-party tool that works with Microsoft’s supported APIs, RBS and EBS, and gives administrators an intuitive interface through SharePoint’s native Central Administration to set the scope, rules, and location for data externalization.

In each of these five problem areas, you can see that offloading the SharePoint data to more efficient external storage is clearly the answer. Microsoft’s native options, EBS and RBS, only add to the complexity of managing SharePoint storage, however, so the best option to improve SharePoint performance and reduce costs is to select a third-party tool that integrates cleanly into SharePoint’s Central Administration. This would enable administrators to take advantage of EBS and RBS, choosing the data they want to externalize by setting the scope and rules for externalization and selecting where they want the data to be stored.



Cache Monitoring in SharePoint 2013

SharePoint 2013 provides three types of caches that help improve the speed at which web pages load in the browser: the BLOB cache, the ASP.NET output cache, and the object cache.

  • The BLOB cache is a disk-based cache that stores binary large object files that are used by web pages to help the pages load quickly in the browser.
  • The ASP.NET output cache stores the rendered output of a page. It also stores different versions of the cached page, based on the permissions of the users who are requesting the page.
  • The object cache reduces the traffic between the web server and the SQL database by storing objects such as lists and libraries, site settings, and page layouts in memory on the front-end web server. As a result, the pages that require these items can be rendered quickly, increasing the speed with which pages are delivered to the client browser.

Monitoring consists of regularly viewing specific performance monitors and making adjustments in the settings to correct any performance issues. The monitors measure cache hits, cache misses, cache compactions, and cache flushes. The following list describes each of these performance monitors.

  • A cache hit occurs when the cache receives a request for an object whose data is already stored in the cache. A high number of cache hits indicates good performance and a good end-user experience.
  • A cache miss occurs when the cache receives a request for an object whose data is not already stored in the cache. A high number of cache misses might indicate poor performance and a slower end-user experience.
  • Cache compaction (also known as trimming) happens when a cache becomes full and additional requests for non-cached content are received. During compaction, the system identifies a subset of the contents in the cache to remove, and removes them. Typically these contents are not requested as frequently.
    Compaction can consume a significant portion of the server’s resources. This can affect both server performance and the end-user experience. Therefore, compaction should be avoided. You can decrease the occurrence of compaction by increasing the size of the cache. Compaction usually happens if the cache size is decreased. Compaction of the object cache does not consume as many resources as the compaction of the BLOB cache.
  • A cache flush is when the cache is completely emptied. After the cache is flushed, the cache hit to cache miss ratio will be almost zero. Then, as users request content and the cache is filled up, that ratio increases and eventually reaches an optimal level. A consistently high number for this counter might indicate a problem with the farm, such as constantly changing library metadata schemas.

You can monitor the effectiveness of the cache settings to make sure that the end-users are getting the best experience possible. Optimum performance occurs when the ratio of cache hits to cache misses is high and when compactions and flushes only rarely occur. If the monitors do not indicate these conditions, you can improve performance by changing the cache settings.
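The counters described above can be sampled from PowerShell with Get-Counter rather than opening Performance Monitor. A sketch; the exact counter paths are assumptions based on the counter names in this article, so verify them on your server with Get-Counter -ListSet 'SharePoint Publishing Cache':

```powershell
# Sketch: sample the publishing cache counters four times, 15 s apart.
# Counter paths assumed from the "SharePoint Publishing Cache" group.
$counters = @(
    '\SharePoint Publishing Cache(*)\Publishing cache hit ratio',
    '\SharePoint Publishing Cache(*)\Publishing cache flushes / second',
    '\SharePoint Publishing Cache(*)\Total number of cache compactions'
)
Get-Counter -Counter $counters -SampleInterval 15 -MaxSamples 4 |
    ForEach-Object { $_.CounterSamples | Select-Object Path, CookedValue }
```

Logging these values on a schedule gives you the hit/miss and compaction trends the following sections interpret.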

The following sections provide specific information for monitoring each kind of cache.

Monitoring BLOB cache performance

You can monitor the effectiveness of the cache settings by using the performance monitors that are listed in the following table.

SharePoint Publishing Cache counter group

  • Total number of cache compactions. Ideal value: 0. If this number is continually or frequently high, the cache size is too small for the data being requested. To improve performance, increase the size of the cache.
  • BLOB Cache % full. Ideal value: below 80% (shows green); 80% or higher shows yellow, and 90% or higher shows red. A high value can show that the cache size is too small. To improve performance, increase the size of the cache.
  • Publishing cache flushes / second. Ideal value: 0. Site owners might be performing actions on the sites that cause the cache to be flushed. To improve performance during peak-use hours, make sure that site owners perform these actions only during off-peak hours.
  • Publishing cache hit ratio. Ideal value: depends on the usage pattern. For read-only sites, the ratio should be 1; for read-write sites, it may be lower. A low ratio can indicate that unpublished items are being requested, and these cannot be cached. If this is a portal site, the site might be set to require check-out, or many users may have items checked out.

Note:

For the BLOB cache, a request is only counted as a cache miss if the user requests a file whose extension is configured to be cached. For example, if the cache is enabled to cache .jpg files only, and the cache gets a request for a .gif file, that request is not counted as a cache miss.
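The extension list referred to above is the path regular expression in the BlobCache element of the web application’s web.config file. A typical entry looks like the following; the location and extension list are example values, and maxSize is expressed in gigabytes:

```xml
<!-- Example only: adjust location, extensions, and size for your farm -->
<BlobCache location="C:\BlobCache\14"
           path="\.(gif|jpg|jpeg|png|css|js)$"
           maxSize="10"
           enabled="true" />
```

Only files whose extensions match the path expression are cached, which is why a request for an unlisted extension is never counted as a miss.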

Monitoring ASP.NET output cache performance

You can monitor the effectiveness of the cache settings by using the performance monitors that are listed in the following table.

ASP.NET Applications counter group

  • Cache API trims. Ideal value: 0. If this number is high, increase the amount of memory that is allocated to the ASP.NET output cache.
  • Cache API hit ratio. Ideal value: depends on the usage pattern. For read-only sites, the ratio should be 1; for read-write sites, it may be lower. Potential causes of a low hit ratio include the following:
      • If you are using anonymous user caching (for example, for an Internet-facing site), users are regularly requesting content that has not yet been cached.
      • If you are using ASP.NET output caching for authenticated users, many users may have edit permissions on the pages that they are viewing.
      • If you have customized any of the VaryBy* parameters on any page (or master page or page layout) or customized a cache profile, you may have configured a parameter that prevents the pages in the site from being cached effectively (for example, you might be varying by user for a site that has many users).

Note:

For the ASP.NET output cache, all pages are cached for a fixed duration that is independent of user actions. Therefore, there are no flush-related monitoring events.
For more information about the ASP.NET output cache, see Output Caching and Cache Profiles, or the cache element for caching (ASP.NET Settings Schema).

Monitoring object cache performance

  • The object cache is used to store metadata about sites, libraries, lists, list items, and documents that are used by features such as site navigation and the Content Query Web Part.
  • This cache helps users when they browse to pages that use these features, because the data that they require is retrieved directly from the object cache instead of from the content database.

  • The object cache is stored in the RAM of each web server in the farm. Each web server maintains its own object cache.

  • You can monitor the effectiveness of the cache settings by using the performance monitors that are listed in the following table.

SharePoint Publishing Cache counter group

  • Total number of cache compactions. Ideal value: 0. If this number is high, the cache size is too small for the data being requested. To improve performance, increase the size of the cache.
  • Publishing cache flushes / second. Ideal value: 0. Site owners might be performing actions on the sites that cause the cache to be flushed. To improve performance during peak-use hours, make sure that site owners perform these actions only during off-peak hours.
  • Publishing cache hit ratio. Ideal value: depends on the usage pattern. For read-only sites, the ratio should be 1; for read-write sites, it may be lower. If the ratio starts to decrease, this might be caused by one or more of the following:
      • The cache was recently flushed or compacted.
      • Users are accessing content that was recently added to the site. This might occur after lots of new content is added to the site.

Implementing Remote BLOB Storage in SharePoint 2013

During my journey of installing, configuring and exploring SharePoint I came across an issue implementing Remote BLOB Storage (RBS).

At first I tried to configure RBS using the explanation on TechNet: Install and configure RBS in a SharePoint farm. Although it outlines the steps to take, it contains a few errors and is missing some bits and pieces.

Because of this I decided to write a blog post about the subject, so that you won’t have to do all the research I did to get it working.

Okay, let’s start:

Prepare SQL Server

First of all you need to prepare your database server to use the FILESTREAM feature that RBS relies on.

1.) Login to your SQL Database Server and open the SQL Server Configuration Manager
2.) Select the SQL Server Services and right-click on the SQL Server instance that hosts SharePoint
3.) In the Properties dialog click on the ‘FILESTREAM’ tab and select all check boxes.


4.) Click on Apply and OK to close the dialog box
5.) Close the SQL Server Configuration Manager

Okay, FILESTREAM is now available for this SQL instance. The next step is to set the FILESTREAM access level by executing a stored procedure.
Sounds scary? It’s not; just follow these steps.

6.) Open your SQL Server Management Studio and login to the SharePoint instance
7.) Now click the New Query button or hit CTRL+N on your keyboard to start the Query Editor.
8.) Enter the following query and click Execute

EXEC sp_configure filestream_access_level, 2
RECONFIGURE



9.) Restart the SQL Server service!
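If you want to confirm that FILESTREAM really is active after the restart, you can query the server properties. A sketch using Invoke-Sqlcmd; it assumes the SQL Server PowerShell tools are installed, and the instance name is a placeholder:

```powershell
# Sketch: verify the FILESTREAM access level after restarting SQL Server.
# 0 = disabled, 1 = T-SQL access only, 2 = T-SQL and Win32 streaming access.
Invoke-Sqlcmd -ServerInstance 'localhost' -Query "
SELECT SERVERPROPERTY('FilestreamConfiguredLevel') AS ConfiguredLevel,
       SERVERPROPERTY('FilestreamEffectiveLevel')  AS EffectiveLevel;"
```

If the effective level still reads 0, the service was not restarted or the instance-level checkboxes from step 3 were not applied.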

Provisioning the BLOB store

Okay, we are almost halfway through the configuration of the BLOB store.

The next thing we have to do is specify a BLOB store. This is nothing more than a folder where the BLOBs are stored.
This can be done by executing a set of queries in sequence. In the queries provided in this post you need to adjust a couple of settings to match your own environment.

In my example below I am creating a BLOB store for the record center where SP2013_Record_Center is the name of my content database.

use SP2013_Record_Center
if not exists
(select * from sys.symmetric_keys
where name = N'##MS_DatabaseMasterKey##')
create master key encryption by password = N'Admin Key Password !2#4'

use SP2013_Record_Center
if not exists
(select groupname from sysfilegroups
where groupname = N'RBSFilestreamProvider')
alter database SP2013_Record_Center
add filegroup RBSFilestreamProvider contains filestream

use SP2013_Record_Center
alter database SP2013_Record_Center
add file (name = RBSFilestreamFile, filename = 'c:\BLOBStore')
to filegroup RBSFilestreamProvider

(The path ‘c:\BLOBStore’ is an example; the original filename value was cut off, so point it at the folder you want to use as the BLOB store.)

After executing these SQL queries, it’s time to check whether the result is as expected.

10.) Open the path provided in your query and verify that there is a folder and file created.

The next thing we have to do is install the RBS provider components.

Installing the RBS Provider

This is where it gets tricky when following the article on TechNet.
The article on TechNet provides the wrong link to the RBS provider that needs to be installed on the SharePoint servers! It redirects you to a page to download and install an x86 RBS.msi instead of the x64 version.
This provider needs to be installed on all front-end and SQL servers.

11.) The correct link to the download is RBS.msi
12.) Open a cmd window as an Administrator and browse to the folder where the downloaded RBS.msi is located.
13.) Copy and paste the command provided below in to the cmd window.

msiexec /qn /lvx* rbs_install_log.txt /i RBS.msi TRUSTSERVERCERTIFICATE=true FILEGROUP=PRIMARY DBNAME="SP2013_Record_Center" DBINSTANCE="MSSQLSERVER" FILESTREAMFILEGROUP=RBSFilestreamProvider FILESTREAMSTORENAME=FilestreamProvider_1

Again, change the name of the content database (SP2013_Record_Center) and the instance name (MSSQLSERVER) to match your own server.

14.) Fine, now that’s done, check the log file that sits in the same location as the initial .msi file. Somewhere near the bottom there should be a “Completed Successfully” message.

We’re almost done now 🙂 The only thing left, besides testing, is enabling RBS for the content databases that you want to use.

Enable Remote BLOB Storage for the content databases.

15.) The easiest way to do this is by using the SharePoint Management Shell. Make sure you run it as Administrator.

First of all we need to get the content database for the web application. After we place the content database in a variable, we can use it to change that database’s settings.

It all sounds very difficult, but it is not. To make it easy I wrote a short PowerShell script that does the job. The only thing you need to do is run it and supply the URL of your web application.

$cdb = Get-SPContentDatabase -WebApplication http://yourwebapp
$rbss = $cdb.RemoteBlobStorageSettings
$rbss.Installed()
$rbss.Enable()
$rbss.SetActiveProviderName($rbss.GetProviderNames()[0])
$rbss

Replace http://yourwebapp with the URL of your web application. The last line echoes the settings so you can verify that RBS is installed and enabled.

Configure the minimum file size (Threshold)

The last thing that needs to be done is configuring the minimum size for files that are stored outside the content database. By default this is set to 60 KB, but I would recommend changing it to 1 MB.

16.) Open up the good old PowerShell again and execute the following code.

$cdb = Get-SPContentDatabase -WebApplication http://yourwebapp
$rbss = $cdb.RemoteBlobStorageSettings
$rbss.MinimumBlobStorageSize = 1048576
$cdb.Update()

Migrate data from or to the RBS

If you activate RBS in an existing SharePoint environment you might want to move the current data out of the database to the BLOB location. Again this can be achieved through PowerShell.

17.) You might have guessed it already: open PowerShell and execute the following code.

$cdb = Get-SPContentDatabase -WebApplication http://yourwebapp
$rbss = $cdb.RemoteBlobStorageSettings
$rbss.Migrate()

Depending on the amount of data in your databases this can take quite a while.

Please let me know if this was helpful.

SharePoint 2010 and Remote Blob Storage

Additional articles on RBS with SharePoint 2010:

  1. Plan for RBS
  2. Manage RBS
  3. Overview of RBS
  4. Install and configure RBS
  5. Install and configure RBS with a 3rd party provider
  6. Set a content database to use RBS
  7. Migrate content into or out of RBS
  8. Maintain RBS
  9. Disable RBS on a content database

It’s also important to note that using an RBS provider, whether Microsoft’s or a third party’s, does not increase the data size scalability of SharePoint. All the documented limits apply whether your data is all in SQL Server or BLOBs are moved out using an RBS provider. Although RBS providers have many benefits, they do not break through the SharePoint supported content database size limits.