Category Archives: Monitoring

Slow SharePoint: improve performance without upgrading hardware

What you can do if your SharePoint is sometimes very slow.

For example: on the first start of a site.
Sometimes during the day a search query will take about a minute until you get results…

Just look at this article: http://support.microsoft.com/kb/2625048

It will massively improve the perceived performance (site response times) if you implement both solutions.

Disabling the CRL check is only necessary if the SharePoint server does not have internet connectivity. Otherwise, proxy settings must be configured for the server itself (see http://technet.microsoft.com/de-de/library/bb430772(v=exchg.141).aspx), and your proxy must of course allow traffic from the server.
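As a rough sketch of that kind of workaround (the hosts-file redirect commonly used when CRL lookups are blocked; this is an assumption about the fix, so verify against the KB article before applying it), you could run the following from an elevated PowerShell prompt on each SharePoint server:

# Sketch only: point crl.microsoft.com at the loopback address so certificate
# revocation lookups fail immediately instead of waiting for a network timeout.
$hostsFile = "$env:windir\System32\drivers\etc\hosts"
Add-Content -Path $hostsFile -Value "127.0.0.1`tcrl.microsoft.com"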


Search diagnostics and reports in SharePoint

We can access and analyze several query and crawl health reports, logs and usage reports from the Search service application in the SharePoint Central Administration to monitor the health of the search system.

The health reports and logs only contain information after a full crawl has completed. To run a full crawl, we have to set up a Search service application, add at least one content source, and then start a full crawl.

To view the health reports and the crawl log, you have to be an administrator of the Search service application. Alternatively, an administrator who is a member of the Farm Administrators group can grant user accounts Read permissions on the Search service application. A user account that has Read permissions can only view the Search service application status page, the health reports and the crawl log.
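As a quick sanity check before opening the reports, a small PowerShell sketch like the following (run in the SharePoint Management Shell; it assumes a Search service application already exists) shows whether the Search service application is online:

# Sketch: list the Search service applications and their current status.
$ssa = Get-SPEnterpriseSearchServiceApplication
$ssa | Select-Object Name, Status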

Query health reports:

  1. Trend
  2. Overall
  3. Main Flow
  4. Federation
  5. SharePoint Search Provider
  6. People Search Provider
  7. Index Engine

To view query health reports:

  1. Verify that the user account that is performing this procedure is an administrator of or has Read permissions to the Search service application.
  2. In Central Administration, under Application Management, click Manage service applications.
  3. On the Service Applications page, click the Search service application.
  4. On the Search Administration page, in the Quick Launch, in the Diagnostics section, click Query Health Reports.
  5. On the Search Service Application: Query Latency Trend page, click the query report that you want to view.

The following table shows which reports are available.

query-health-report

Crawl health reports:

SharePoint 2013 provides the following reports about crawl health:

  1. Crawl Rate
  2. Crawl Latency
  3. Crawl Queue
  4. Crawl Freshness
  5. Content Processing Activity
  6. CPU and Memory Load
  7. Continuous Crawl

To view crawl health reports:

  1. Verify that the user account that is performing this procedure is an administrator of or has Read permissions to the Search service application.
  2. In Central Administration, under Application Management, click Manage service applications.
  3. On the Service Applications page, click the Search service application.
  4. On the Search Administration page, in the Quick Launch, in the Diagnostics section, click Crawl Health Reports.
  5. On the Search Service Application: Crawl Reports page, click the crawl health report that you want to view.

The following table shows which reports are available.

crawl-health-report

Crawl log:

The crawl log tracks information about the status of crawled content. This log lets you determine whether crawled content was successfully added to the index, whether it was excluded because of a crawl rule, or whether indexing failed because of an error. The crawl log also contains information such as the time of the last successful crawl and whether any crawl rules were applied. You can use the crawl log to diagnose problems with the search experience.

To view the crawl log:

  1. Verify that the user account that is performing this procedure is an administrator of the Search service application, or has Read permissions to it.
  2. In Central Administration, under Application Management, click Manage service applications.
  3. On the Service Applications page, click the Search service application.
  4. On the Search Administration page, in the Quick Launch, in the Diagnostics section, click Crawl Log.
  5. On the Crawl Log – Content Source page, click the view that you want.

crawl-log-views

Additional columns in the Content Source, Host Name and Crawl History views:

content-source-host-name-crawl-history-view

Usage reports (search report):

To view usage reports:

  1. Verify that the user account that is performing this procedure is an administrator of or has Read permissions to the Search service application.
  2. In Central Administration, under Application Management, click Manage service applications.
  3. On the Service Applications page, click the Search service application.
  4. On the Search Administration page, in the Quick Launch, in the Diagnostics section, click Usage Reports.
  5. On the View Usage Reports page, click the usage or search report view that you want to view.

usage-report-search-report

 

Configure diagnostic logging in SharePoint 2016

The SharePoint Server 2016 environment might require configuration of the diagnostic logging settings after initial deployment, after upgrade, and if a change is made to the environment, such as adding or removing a server.

The guidelines in the following list can help you form best practices for the specific environment.

* Change the drive to which the server writes logs:

By default, SharePoint Server 2016 writes diagnostic logs to the same drive and partition on which it was installed. Because diagnostic logging can use a large amount of drive space and compromise drive performance, you should configure SharePoint Server 2016 to write to another drive on which SharePoint Server 2016 is not installed.

You should also consider the connection speed to the drive on which SharePoint Server 2016 writes the logs. If verbose-level logging is configured, the server records a large amount of data. Therefore, a slow connection might result in poor log performance.

* Restrict log disk space usage:

By default, the amount of disk space that diagnostic logging can use is unlimited. Therefore, restrict the disk space that logging uses, especially if you configure logging to write verbose-level events. When the disk reaches the restriction, SharePoint Server 2016 removes the oldest logs before it records new logging data.

* Use the Verbose setting sparingly:

You can configure diagnostic logging to record verbose-level events. This means that SharePoint Server 2016 records every action that it takes. Verbose-level logging can quickly use drive space and affect drive and server performance. You can use verbose-level logging to record more detail when you are making critical changes and then reconfigure logging to record only higher-level events after you make the change.
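As a short sketch of that workflow in the SharePoint Management Shell (Verbose is one of the standard trace severities; adjust to taste):

# Record maximum trace detail only while you make the critical change.
Set-SPLogLevel -TraceSeverity Verbose
# ... make the change or reproduce the issue, then return all categories to their defaults:
Clear-SPLogLevel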

* Regularly back up logs:

Diagnostic logs contain important data. Therefore, back up the logs regularly to ensure that this data is preserved. When you restrict log drive space usage, or if you keep logs for only a few days, SharePoint Server 2016 automatically deletes log files, starting with the oldest files first, when the threshold is met.
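A minimal sketch of such a backup is shown below; the destination share is a placeholder, and the log folder is read from the current diagnostic configuration:

# Sketch: copy the current ULS log files to a backup location before retention removes them.
# LogLocation can contain environment variables (for example %CommonProgramFiles%), so expand them first.
$logPath = [Environment]::ExpandEnvironmentVariables((Get-SPDiagnosticConfig).LogLocation)
Copy-Item -Path (Join-Path $logPath "*.log") -Destination "\\backupserver\SPLogs" -Force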

* Enable event log flooding protection:

When you enable this setting, SharePoint Server 2016 detects repeating events in the Windows event log, and suppresses them until conditions return to a typical state.

You can set the level of diagnostic logging for the event log and for the trace log. This limits the types and amount of information that are written to each log.

The following tables define the levels of logging that are available for the event log and trace log.

event-log-levels

trace-log-levels

Configure diagnostic logging by using Central Administration:

  1. In Central Administration, on the home page, click Monitoring.
  2. On the Monitoring page, in the Reporting section, click Configure diagnostic logging.
  3. On the Diagnostic Logging page, in the Event Throttling section, configure event throttling as follows:

    To configure event throttling for all categories:
    1. Select the All Categories check box.
    2. Select the event log level from the Least critical event to report to the event log list.
    3. Select the trace log level from the Least critical event to report to the trace log list.

    To configure event throttling for one or more categories:

    1. Select the check boxes of the categories that you want.
    2. Select the event log level from the Least critical event to report to the event log list.
    3. Select the trace log level from the Least critical event to report to the trace log list.

    To configure event throttling for one or more subcategories (you can expand one or more categories and select any subcategory):

    1. Click the plus (+) next to the category to expand the category.
    2. Select the check box of the subcategory.
    3. Select the event log level from the Least critical event to report to the event log list.
    4. Select the trace log level from the Least critical event to report to the trace log list.

    To return event throttling for all categories to default settings:

    1. Select the All Categories check box.
    2. Select Reset to default from the Least critical event to report to the event log list.
    3. Select Reset to default from the Least critical event to report to the trace log list.
  4. In the Event Log Flood Protection section, select the Enable Event Log Flood Protection check box.
  5. In the Trace Log section, in the Path box, type the path of the folder to which you want logs to be written.
  6. In the Number of days to store log files box, type the number of days (1-366) that you want logs to be kept. After this time, logs will automatically be deleted.
  7. To restrict the disk space that logs can use, select the Restrict Trace Log disk space usage check box, and then type the number of gigabytes (GB) you want to restrict log files to. When logs reach this value, older logs will automatically be deleted.
  8. After you have made the changes that you want on the Diagnostic Logging page, click OK.

Configure diagnostic logging by using Windows PowerShell:

  1. Verify that you have the following memberships:
  • securityadmin fixed server role on the SQL Server instance.
  • db_owner fixed database role on all databases that are to be updated.
  • Administrators group on the server on which you are running the Windows PowerShell cmdlets.

An administrator can use the Add-SPShellAdmin cmdlet to grant permissions to use SharePoint Server 2016 cmdlets.
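For example (the account name below is a placeholder), a farm administrator could run:

# Sketch: grant a user permission to run the SharePoint cmdlets used in this procedure.
Add-SPShellAdmin -UserName "CONTOSO\spadmin"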

  2. On the Start menu, click All Programs.
  3. Click SharePoint 2016.
  4. Click SharePoint 2016 Management Shell.
  5. To change the drive to which the server writes logs, at the Windows PowerShell command prompt, type the following command:

Set-SPDiagnosticConfig -LogLocation D:\DiagnosticLogs

  6. To restrict log disk space usage, at the Windows PowerShell command prompt, type the following command:

Set-SPDiagnosticConfig -LogMaxDiskSpaceUsageEnabled

Or assign the maximum disk space for logs:

Set-SPDiagnosticConfig -LogDiskSpaceUsageGB 500

  7. To view the current logging level, at the Windows PowerShell command prompt, type the following command:

Get-SPLogLevel

  8. To change the logging level, at the Windows PowerShell command prompt, type the following command:

Set-SPLogLevel -TraceSeverity Monitorable

To set all categories back to default levels, at the Windows PowerShell command prompt, type the following command, and then press ENTER:

Clear-SPLogLevel

  9. To enable event log flooding protection, at the Windows PowerShell command prompt, type the following command:

Set-SPDiagnosticConfig -EventLogFloodProtectionEnabled
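The individual settings above can also be combined in a single call, and Get-SPDiagnosticConfig shows the values currently in effect. A rough sketch (the drive, retention and size values are examples only):

# Sketch: apply several diagnostic logging settings at once, then review the result.
Set-SPDiagnosticConfig -LogLocation "D:\DiagnosticLogs" -DaysToKeepLogs 14 -LogDiskSpaceUsageGB 50 -LogMaxDiskSpaceUsageEnabled -EventLogFloodProtectionEnabled
Get-SPDiagnosticConfig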

Monitor cache performance in SharePoint 2016

SharePoint Server 2016 provides three types of caches that help improve the speed at which web pages load in the browser: the BLOB cache, the ASP.NET output cache, and the object cache.

The BLOB cache is a disk-based cache that stores binary large object files that are used by web pages to help the pages load quickly in the browser.

The ASP.NET output cache stores the rendered output of a page. It also stores different versions of the cached page, based on the permissions of the users who are requesting the page.

The object cache reduces the traffic between the web server and the SQL database by storing objects such as lists and libraries, site settings, and page layouts in memory on the front-end web server. As a result, the pages that require these items can be rendered quickly, increasing the speed with which pages are delivered to the client browser.

The monitors measure cache hits, cache misses, cache compactions, and cache flushes. The following list describes each of these performance monitors.

A cache hit occurs when the cache receives a request for an object whose data is already stored in the cache. A high number of cache hits indicates good performance and a good end-user experience.

A cache miss occurs when the cache receives a request for an object whose data is not already stored in the cache. A high number of cache misses might indicate poor performance and a slower end-user experience.

Cache compaction (also known as trimming) happens when a cache becomes full and additional requests for non-cached content are received. During compaction, the system identifies a subset of the contents in the cache to remove, and removes them. Typically these contents are not requested as frequently.

Compaction can consume a significant portion of the server’s resources. This can affect both server performance and the end-user experience. Therefore, compaction should be avoided. You can decrease the occurrence of compaction by increasing the size of the cache. Compaction usually happens if the cache size is decreased. Compaction of the object cache does not consume as many resources as the compaction of the BLOB cache.

A cache flush is when the cache is completely emptied. After the cache is flushed, the cache hit to cache miss ratio will be almost zero. Then, as users request content and the cache is filled up, that ratio increases and eventually reaches an optimal level. A consistently high number for this counter might indicate a problem with the farm, such as constantly changing library metadata schemas.

You can monitor the effectiveness of the cache settings to make sure that the end-users are getting the best experience possible. Optimum performance occurs when the ratio of cache hits to cache misses is high and when compactions and flushes only rarely occur. If the monitors do not indicate these conditions, you can improve performance by changing the cache settings.

The following sections provide specific information for monitoring each kind of cache.

Monitoring BLOB cache performance:

monitor-blob-cache

Note:
For the BLOB cache, a request is only counted as a cache miss if the user requests a file whose extension is configured to be cached. For example, if the cache is enabled to cache .jpg files only, and the cache gets a request for a .gif file, that request is not counted as a cache miss.
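If you prefer to sample these counters from the command line instead of Performance Monitor, a sketch along the following lines can be used (the counters come from the SharePoint Publishing Cache group listed later on this page; the instance wildcard is an assumption, so pick the instance that matches your web application):

# Sketch: sample three publishing cache counters every 5 seconds, 12 times.
$counters = "\SharePoint Publishing Cache(*)\Publishing cache hit ratio",
            "\SharePoint Publishing Cache(*)\Publishing cache flushes / second",
            "\SharePoint Publishing Cache(*)\Total number of cache compactions"
Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 12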

Monitoring ASP.NET output cache performance:

monitoring-asp-net-output-cache-performance

Note:
For the ASP.NET output cache, all pages are cached for a fixed duration that is independent of user actions. Therefore, there are no flush-related monitoring events.

Monitoring object cache performance:

The object cache is used to store metadata about sites, libraries, lists, list items, and documents that are used by features such as site navigation and the Content Query Web Part.

This cache helps users when they browse to pages that use these features because the data that they require is stored or retrieved directly from the object cache instead of from the content database.

The object cache is stored in the RAM of each web server in the farm. Each web server maintains its own object cache.

You can monitor the effectiveness of the cache settings by using the performance monitors that are listed in the following table.

monitoring-object-cache-performance

Cache Monitoring in SharePoint 2013

SharePoint 2013 provides three types of caches that help improve the speed at which web pages load in the browser: the BLOB cache, the ASP.NET output cache, and the object cache.

  • The BLOB cache is a disk-based cache that stores binary large object files that are used by web pages to help the pages load quickly in the browser.
  • The ASP.NET output cache stores the rendered output of a page. It also stores different versions of the cached page, based on the permissions of the users who are requesting the page.
  • The object cache reduces the traffic between the web server and the SQL database by storing objects such as lists and libraries, site settings, and page layouts in memory on the front-end web server. As a result, the pages that require these items can be rendered quickly, increasing the speed with which pages are delivered to the client browser.

Monitoring consists of regularly viewing specific performance monitors and making adjustments in the settings to correct any performance issues. The monitors measure cache hits, cache misses, cache compactions, and cache flushes. The following list describes each of these performance monitors.

  • A cache hit occurs when the cache receives a request for an object whose data is already stored in the cache. A high number of cache hits indicates good performance and a good end-user experience.
  • A cache miss occurs when the cache receives a request for an object whose data is not already stored in the cache. A high number of cache misses might indicate poor performance and a slower end-user experience.
  • Cache compaction (also known as trimming) happens when a cache becomes full and additional requests for non-cached content are received. During compaction, the system identifies a subset of the contents in the cache to remove, and removes them. Typically these contents are not requested as frequently.
    Compaction can consume a significant portion of the server’s resources. This can affect both server performance and the end-user experience. Therefore, compaction should be avoided. You can decrease the occurrence of compaction by increasing the size of the cache. Compaction usually happens if the cache size is decreased. Compaction of the object cache does not consume as many resources as the compaction of the BLOB cache.
  • A cache flush is when the cache is completely emptied. After the cache is flushed, the cache hit to cache miss ratio will be almost zero. Then, as users request content and the cache is filled up, that ratio increases and eventually reaches an optimal level. A consistently high number for this counter might indicate a problem with the farm, such as constantly changing library metadata schemas.

You can monitor the effectiveness of the cache settings to make sure that the end-users are getting the best experience possible. Optimum performance occurs when the ratio of cache hits to cache misses is high and when compactions and flushes only rarely occur. If the monitors do not indicate these conditions, you can improve performance by changing the cache settings.

The following sections provide specific information for monitoring each kind of cache.

Monitoring BLOB cache performance

You can monitor the effectiveness of the cache settings by using the performance monitors that are listed in the following table.

SharePoint Publishing Cache counter group

  • Total Number of Cache Compactions (ideal value or pattern: 0): If this number is continually or frequently high, the cache size is too small for the data being requested. To improve performance, increase the size of the cache.
  • BLOB Cache % full (>= 90% shows red, >= 80% shows yellow, < 80% shows green): This can show that the cache size is too small. To improve performance, increase the size of the cache.
  • Publishing cache flushes / second (ideal value or pattern: 0): Site owners might be performing actions on the sites that are causing the cache to be flushed. To improve performance during peak-use hours, make sure that site owners only perform these actions during off-peak hours.
  • Publishing cache hit ratio (depends on usage pattern; for read-only sites, the ratio should be 1, for read-write sites, the ratio may be lower): A low ratio can indicate that unpublished items are being requested, and these cannot be cached. If this is a portal site, the site might be set to require check-out, or many users have items checked out.

Note:

For the BLOB cache, a request is only counted as a cache miss if the user requests a file whose extension is configured to be cached. For example, if the cache is enabled to cache .jpg files only, and the cache gets a request for a .gif file, that request is not counted as a cache miss.

Monitoring ASP.NET output cache performance

You can monitor the effectiveness of the cache settings by using the performance monitors that are listed in the following table.

ASP.NET Applications counter group

  • Cache API trims (ideal value or pattern: 0): Increase the amount of memory that is allocated to the ASP.NET output cache.
  • Cache API hit ratio (depends on usage pattern; for read-only sites, the ratio should be 1, for read-write sites, the ratio may be lower): Potential causes of a low hit ratio include the following:

  • If you are using anonymous user caching (for example, for an Internet-facing site), users are regularly requesting content that has not yet been cached.
  • If you are using ASP.NET output caching for authenticated users, many users may have edit permissions on the pages that they are viewing.
  • If you have customized any of the VaryBy* parameters on any page (or master page or page layout) or customized a cache profile, you may have configured a parameter that prevents the pages in the site from being cached effectively (For example, you might be varying by user for a site that has many users).

Note:

For the ASP.NET output cache, all pages are cached for a fixed duration that is independent of user actions. Therefore, there are no flush-related monitoring events.
For more information about the ASP.NET output cache, see Output Caching and Cache Profiles (http://go.microsoft.com/fwlink/p/?LinkID=121543) or cache Element for caching (ASP.NET Settings Schema) (http://go.microsoft.com/fwlink/p/?LinkId=195986).
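To sample the two counters in the table above outside of Performance Monitor, a comparable sketch can be used (the counters live in the standard ASP.NET Applications set; the instance wildcard is an assumption):

# Sketch: sample the ASP.NET output cache counters for all application instances.
Get-Counter -Counter "\ASP.NET Applications(*)\Cache API Hit Ratio","\ASP.NET Applications(*)\Cache API Trims" -SampleInterval 5 -MaxSamples 12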

Monitoring object cache performance

  • The object cache is used to store metadata about sites, libraries, lists, list items, and documents that are used by features such as site navigation and the Content Query Web Part.
  • This cache helps users when they browse to pages that use these features because the data that they require is stored or retrieved directly from the object cache instead of from the content database.

  • The object cache is stored in the RAM of each web server in the farm. Each web server maintains its own object cache.

  • You can monitor the effectiveness of the cache settings by using the performance monitors that are listed in the following table.

SharePoint Publishing Cache counter group

  • Total number of cache compactions (ideal value or pattern: 0): If this number is high, the cache size is too small for the data being requested. To improve performance, increase the size of the cache.
  • Publishing cache flushes / second (ideal value or pattern: 0): Site owners might be performing actions on the sites that are causing the cache to be flushed. To improve performance during peak-use hours, make sure that site owners perform these actions only during off-peak hours.
  • Publishing cache hit ratio (depends on usage pattern; for read-only sites, the ratio should be 1, for read-write sites, the ratio may be lower): If the ratio starts to decrease, this might be caused by one or more of the following:

  • The cache was recently flushed or compacted.
  • Users are accessing content that was recently added to the site. This might occur after lots of new content is added to the site.

Site slow, taking a long time querying SharePoint

What you can do if SharePoint is sometimes very slow.

For example: on the first start of a site.

Sometimes during the day a search query will take about a minute until you get results…

  1. Just look at this article: http://support.microsoft.com/kb/2625048

It will massively improve your perceived performance (site response times) if you implement both solutions.

  2. Disabling the CRL check is only necessary if the SP server does not have internet connectivity. Otherwise, proxy settings must be configured for the server itself (see http://technet.microsoft.com/de-de/library/bb430772(v=exchg.141).aspx), and your proxy must of course allow traffic from the server.