Storage Locations for Files Gathered by the Crawl Component in SharePoint 2013

When gathering files from a content source, the SharePoint 2013 Crawl Component can be a very I/O intensive process – it locally writes all of the files it gathers from content repositories to its temporary file paths, and the Content Processing Component reads them from there during document parsing. This post explains where the Crawl Components write temporary files, which can help with planning and performance troubleshooting (e.g. why does disk performance of my C:\ drive get so bad – or worse, why does the drive fill up – when I start a large crawl?)

By default, all Search data files will be written within the Installation Path

  • The Data Directory (by default, a sub-directory of the Installation Path) specifies the path for all Search data files including those used by I/O intensive components (Crawl, Analytics, and Index Components)
    • The Data Directory can only be configured at the time of installation (i.e. it can only be changed by uninstalling/re-installing SharePoint on the given server)
      • From the Installation Wizard, choose the “File Location” tab as seen below
      • IMPORTANT: Before uninstalling SharePoint, first modify your Search topology by removing any Search components from the applicable server. Once SharePoint is re-installed, you can once again deploy the components back to this server.
    • The defined path can be viewed in the registry:

    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Office Server\15.0\Search\Setup\DataDirectory

    • Advanced Note: The Index files (by default, written to the Data Directory) path can be configured separately when provisioning an Index Component via PowerShell using the “RootDirectory” parameter

(As a side note: the graphic is only intended to display the default locations specified at install time. It is recommended to change these to a file path on a drive other than C:\.)
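
As a hedged illustration of the advanced note above, here is a minimal PowerShell sketch of provisioning an Index Component with a custom RootDirectory; the server name, partition number, and index path are placeholders, and the target folder should already exist and be empty:

      # Sketch only: "SearchServer01", partition 0, and E:\SearchIndex are placeholders
      $ssa      = Get-SPEnterpriseSearchServiceApplication
      $active   = Get-SPEnterpriseSearchTopology -SearchApplication $ssa -Active
      $clone    = New-SPEnterpriseSearchTopology -SearchApplication $ssa -Clone -SearchTopology $active
      $instance = Get-SPEnterpriseSearchServiceInstance -Identity "SearchServer01"

      # RootDirectory points the index files away from the default Data Directory
      New-SPEnterpriseSearchIndexComponent -SearchTopology $clone -SearchServiceInstance $instance `
          -IndexPartition 0 -RootDirectory "E:\SearchIndex"

      Set-SPEnterpriseSearchTopology -Identity $clone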

For the Crawl Component:

  • When crawling [gathering] an item, the filter daemon (mssdmn.exe – a child process of the Crawl Component that actually interfaces with an end content repository using a Search Connector/Protocol Handler) will download any applicable file blobs to the SSA’s “TempPath” (e.g. an HTML file, a Word document, a PowerPoint presentation, etc)
    • In the graphic below, this is step 2a
    • The defined path can be viewed either:
      • In the registry (of a Crawl server)

        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Office Server\15.0\Search\Global\Gathering Manager\TempPath

      • Or as a property of the SSA:

        $SSA = Get-SPEnterpriseSearchServiceApplication
        $SSA.TempPath    # TempPath property of the SSA (matches the registry value above)


  • When the filter daemon completes the gathering of an item, it is returned to the Gathering Manager (mssearch.exe – responsible for orchestrating a crawl of a given item) and the applicable blob is moved to the “GathererDataPath”, which is a path relative to the DataDirectory mentioned above.
    • In the graphic below, this occurs in step 2b
    • The defined path can be viewed in the registry (of a Crawl server):

      HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Office Server\15.0\Search\Components\-GUID-of-theSSA-crawl-0\GathererDataPath

  • The GathererDataPath is mapped as a network share (used by the Content Processing Components)
    • The shared path can be viewed in the registry (of a Crawl server):

      HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Office Server\15.0\Search\Components\-GUID-of-theSSA-crawl-0\GathererDataShare
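
If it helps during troubleshooting, these values can also be read with PowerShell on a crawl server; a minimal sketch (the wildcard match on the component key name is an assumption, since the exact key name includes the SSA GUID):

      # Gatherer-wide temp path used by the filter daemon (mssdmn.exe)
      Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Office Server\15.0\Search\Global\Gathering Manager" |
          Select-Object TempPath

      # Per-crawl-component paths (one key per crawl component, named <SSA GUID>-crawl-N)
      Get-ChildItem "HKLM:\SOFTWARE\Microsoft\Office Server\15.0\Search\Components" |
          Where-Object { $_.PSChildName -like "*-crawl-*" } |
          Get-ItemProperty |
          Select-Object PSChildName, GathererDataPath, GathererDataShare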

Usage by the Content Processing Components:

  • When the item is fed from the Crawler to the Content Processing Component (step 3 above), the item is only logically submitted to the CPC as a serialized payload of properties that represent that particular item – any related blob remains on the Crawler and is retrieved by a later stage in the processing flow
    • For SharePoint list items, there would typically not be a blob (unless the list item had an attachment)
    • For a document in a SharePoint library, the blob would represent the item’s associated file (such as a Word document)
  • During the Document Parsing stage in the processing flow (e.g. during step 4 above), the item’s blob will be retrieved from the Crawl Component via the GathererDataShare
  • When the Crawl Component receives a callback (success or failure) from the CPC (e.g. in step 6b above after an item has been processed), the temporary blob is then deleted from the GathererDataPath

An example path to an item with DocID 933112 would look like the following (illustrative, using the placeholder names explained below):

  \\crawlerSrv\gthrsvc_-GUID-of-theSearchAdminWebServiceApp--crawl-0\f8\0xe3cf8.aspx

#0xe3cf8 hex = 933112 decimal


  • crawlerSrv is a server running a crawl component
  • gthrsvc_-GUID-of-theSearchAdminWebServiceApp--crawl-0 is the name of the crawl component
    • This GUID can be identified using the following PowerShell:

      $SSA = Get-SPEnterpriseSearchServiceApplication

      # Assumption: the Search Administration web service application uses the default name "Search Administration Web Service for <SSA name>"
      $searchAdminWeb = Get-SPServiceApplication -Name ("Search Administration Web Service for " + $SSA.Name)
      $searchAdminWeb.Id    # this GUID appears in the gthrsvc_<GUID>-crawl-0 component name

  • And the file name is actually re-named to the hex value of the docID
    • For example: 0xe3cf8 hex = 933112 decimal
    • Which we can see in ULS, such as:
      • From the Crawl Component (in this case, running on server “faceman”):

        mssearch.exe     SharePoint Server Search Crawler:Content Plugin      af7zf VerboseEx

        CTSDocument: FeedingDocument: properties : strDocID = ssic://933112 key = path values =\\FACEMAN\gthrsvc_7ecdbb10-3c86-4298-ab09-04f61aaeb636-crawl-0\\f8\0xe3cf8.aspx 

      • From the Content Processing Component:

        NodeRunnerContent2-834ebb1f-009    Search    Document Parsing      ai3ef VerboseEx

        AttachDocParser – Parsing: ‘file://faceman/gthrsvc_7ecdbb10-3c86-4298-ab09-04f61aaeb636-crawl-0//f8/0xe3cf8.aspx’
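
The hex/decimal relationship between the temporary file name and the DocID is easy to verify in PowerShell:

      [Convert]::ToString(933112, 16)    # returns e3cf8  (DocID -> temp file name)
      [Convert]::ToInt32("e3cf8", 16)    # returns 933112 (temp file name -> DocID)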




Sorry, we can’t open this document because there was a problem talking to the service – Office Web Apps 2013

Farm Information:

  1. Web Application published internally and externally (so make sure both URLs are added to Alternate Access Mappings)
  2. Office Web Apps 2013 (published externally via TMG 2010 and internally)

I have a farm connected to Office Web Apps 2013, and it works fine internally, so we decided to enable opening Office Web Apps externally as well, using the following steps:

1- On the Office Web Apps Server, open PowerShell and run this command to add the external URL to the existing farm:

Set-OfficeWebAppsFarm -ExternalURL "https://<your-external-URL>"    # replace with the published external URL


2- On the SharePoint Server, open the SharePoint 2013 Management Shell and run the following commands:

Remove-SPWOPIBinding -All:$true

New-SPWOPIBinding -ServerName "<WAC-server-name>" -AllowHTTP    # replace with your Office Web Apps server name
Set-SPWOPIZone -Zone "internal-http"
$c = Get-SPSecurityTokenServiceConfig
$c.AllowOAuthOverHttp = $true
$c.Update()    # required for the change to take effect
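
As a quick sanity check (not part of the original steps), the resulting zone and bindings can be inspected with the read-only WOPI cmdlets:

Get-SPWOPIZone      # expected to return internal-http
Get-SPWOPIBinding   # lists the document actions bound to the Office Web Apps farm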

Then, when I tried to access Office documents from the external URL, I got the following error:

“Sorry, we can’t open this document because there was a problem talking to the service”


This issue is related to the TMG publishing rule, so to fix it:

1- Open TMG Management Console

2- Go to Publish rule for Office Web Apps

3- Right click on the rule and choose Configure HTTP

4- Uncheck Verify Normalization

5- Click OK

6- Click Apply

7- Wait for the sync process to complete






This SQL Server instance does not have the required “max degree of parallelism” setting of 1

I got this error while trying to install SharePoint 2013:

max degree of parallelism

What is Max Degree of Parallelism?
When an instance of SQL Server runs on a computer that has more than one microprocessor or CPU, it detects the best degree of parallelism, that is, the number of processors employed to run a single statement, for each parallel plan execution. You can use the max degree of parallelism option to limit the number of processors to use in parallel plan execution. SQL Server considers parallel execution plans for queries, index data definition language (DDL) operations, and static and keyset-driven cursor population.
Read this for more details

How to fix it?

  1. Open Microsoft SQL Server Management Studio
  2. Login with sysadmin user
  3. Right Click on instance name and select properties >> Advanced
  4. Change Max Degree of Parallelism to 1
  5. Restart the SQL Service
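
The same change can also be scripted; a minimal sketch using Invoke-Sqlcmd, assuming the SQL Server PowerShell module is installed and the instance name is replaced with your own:

$instance = "SQLSERVER01\SHAREPOINT"    # placeholder instance name; requires sysadmin rights
$query = "EXEC sp_configure 'show advanced options', 1; RECONFIGURE WITH OVERRIDE; " +
         "EXEC sp_configure 'max degree of parallelism', 1; RECONFIGURE WITH OVERRIDE;"
Invoke-Sqlcmd -ServerInstance $instance -Query $query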

Integrating ServiceNow with SharePoint 2013

It goes without saying that every part of an enterprise relies on technology. With more and more technology providers coming to the fore, it is not uncommon to find a variety of technology platforms across the units of an enterprise. To keep the pieces of the enterprise moving together, there is often a need to integrate platforms built by different providers. This blog provides a brief overview of ServiceNow and what is involved in integrating ServiceNow and SharePoint 2013.

What is ServiceNow?
ServiceNow is a software platform that supports IT Service Management and automates common business processes. It’s a software as a service (SaaS) platform that offers targeted solutions for various units of an enterprise like IT, HR, Operations and IT Business Management. More than two thousand enterprises around the world use the ServiceNow platform extensively for their daily operations. ServiceNow brings the following key benefits for an enterprise:
• Improves service experience for end users
• Maintains service records in a structured model that allows audits
• Automates processes that reduce costs and increase efficiency
• Enables reporting and analytics for company executives

ServiceNow Integration Technologies
ServiceNow integrates with third-party applications and data sources using a variety of techniques, some of which are listed below.

• Single Sign-On
• Web Services—SOAP, REST
• Database Connectivity—ODBC, JDBC
• Import Sets—Excel, CSV, XML
• Email

Integrating with SharePoint 2013
SharePoint 2013 has proved to be a great platform for building enterprise solutions and is widely used by organizations worldwide. Given the huge popularity of both ServiceNow and SharePoint 2013, the need to integrate them is almost inevitable. There are two ServiceNow integration techniques that can be used to integrate with SharePoint 2013.
• Call ServiceNow REST API to read, create, update and delete service objects
• Send emails to ServiceNow-administered email ID to manage service objects

Let’s see how we can implement both the above techniques using SharePoint 2013 Workflows.

Calling ServiceNow REST API using SharePoint 2013 Workflows
1. Open SharePoint Designer 2013.
2. Add a new site workflow.
3. From the Actions dropdown menu on the ribbon, select the “Call HTTP Web Service” action.


  4. Specify the web service URI corresponding to the ServiceNow table that you want to query or update, along with the HTTP method—GET, POST, PUT—depending on your requirement. In the example below, the web service call fetches a list of incidents from the ServiceNow instance.


  5. Specify the request and response parameters so that you can handle them later in the workflow. An example could be calling the web service to create an incident in ServiceNow and capturing the Incident ID in the response.
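
Before wiring the call into the workflow, the same ServiceNow Table API request can be sketched in PowerShell to confirm the URI, method, and credentials; the instance name and account below are placeholders:

$uri  = "https://yourinstance.service-now.com/api/now/table/incident?sysparm_limit=10"
$cred = Get-Credential    # a ServiceNow account with rights to the incident table

$response = Invoke-RestMethod -Uri $uri -Method Get -Credential $cred -Headers @{ "Accept" = "application/json" }
$response.result | Select-Object number, short_description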

Sending email using SharePoint 2013 Workflows
1. In SharePoint Designer 2013, from the Actions dropdown menu select the “Send an Email” action.


  2. Populate the fields of the email template according to the template that is configured on the ServiceNow instance. The placeholders, like the ones shown in the screenshot below, are dynamic.


  3. In the above example, the template could be used to communicate to the ServiceNow instance that a user’s task has been completed. The subject would read something like “Task 1234 has been Completed”. When the email is received by ServiceNow, it updates the task as completed based on the pre-configured inbound email action.
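
For illustration only, the same kind of message could be generated outside the workflow; a rough sketch with placeholder addresses, showing the subject format the inbound email action matches on:

# Placeholder SMTP server and addresses; the subject must match the inbound email action configured in ServiceNow
Send-MailMessage -SmtpServer "smtp.contoso.com" `
                 -From "workflow@contoso.com" `
                 -To "yourinstance@service-now.com" `
                 -Subject "Task 1234 has been Completed" `
                 -Body "Completed by SharePoint workflow."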

Access Denied Error after migrating to SharePoint 2013


We were working for a client that had many groups, and we had to build a collaboration portal for all of them. The key thing was that a few sites belonging to some of the groups already existed in SharePoint 2010 on different standalone servers. Migration was critical here, as the existing sites had a huge amount of data and a huge user base.

The requirement was to build a portal/web application that would contain the migrated sites as well as a new set of sites, following the agreed site structure. According to the agreed architecture and design, we created a new web application and started building the site hierarchy.

As part of this we followed the regular database detach/attach approach and migrated the existing SharePoint 2010 site. The migration was successful and we were able to access the site with the system account. Later we tried a couple of site admin accounts and, to our surprise, we got “ACCESS DENIED” with any other user ID.


By default, when we create a web application in SharePoint 2013, it is created with claims authentication. When we migrate the content DB to 2013, it recognizes the user account only in this format: i:0#.w|domain\username. Although it is an AD account, SharePoint no longer recognizes the Domain\UserName format.

SharePoint assumes all users are claims users and renders them accordingly. Therefore, a normal Windows user – “Domain\UserName” – appears as “i:0#.w|Domain\UserName”. Moreover, it uses the username in this same format to check permissions but does not find a matching entry, because the database still holds Windows-format users – “Domain\UserName”. So the site gives you an access denied error.

Note that the System Account still works, since its “Domain\UserName” is never used; “System Account” is a keyword SharePoint uses for the application pool identity, so it remains unaffected.


In brief, the SharePoint 2010 site that needs to be migrated should be converted to the claims format first and then migrated to 2013. But a word of caution: do not change the SP 2010 site to claims format directly in a production environment, as existing Windows accounts will no longer be able to log in and the existing SharePoint 2010 site will no longer be operational.

The PowerShell script below converts a classic-mode web application to claims mode:
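
A minimal sketch of such a script, assuming the web application URL below is replaced with your own:

$wa = Get-SPWebApplication "http://yourwebapp"    # placeholder URL
$wa.UseClaimsAuthentication = $true
$wa.Update()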

This script converts the existing user accounts to the claims format:
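
And a minimal sketch of the user migration script (same placeholder URL):

$wa = Get-SPWebApplication "http://yourwebapp"    # placeholder URL
$wa.MigrateUsers($true)      # converts existing Windows accounts to the claims format
$wa.ProvisionGlobally()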

Executing the first script (to enable claims authentication) makes the SharePoint content database ready for claims-based authentication, but the already existing site users, which are Windows users, are not yet “migrated” into a form that claims authentication understands.

We use the second script to “migrate” the users. MigrateUsers($true) will convert all user accounts to the claims format. After running this script the user accounts in the database are in claims format, so user names are read correctly by SharePoint, permissions are associated correctly, and the site permissions work as expected.


If by any chance you execute these scripts directly in production, note that executing $webapp.MigrateUsers($false) will not convert user accounts back to Windows mode; rather, it will throw an exception. Make sure you have a temporary environment built where you can execute the above scripts first. Also note that these scripts run against web applications, so they will affect all site collections in that web application.