When gathering files from a content source, the SharePoint 2013 Crawl Component can be a very I/O intensive process: it locally writes all of the files it gathers from content repositories to its temporary file paths, where they are then read by the Content Processing Component during document parsing. This post can help you understand where the Crawl Components write temporary files, which can help in planning and performance troubleshooting (e.g. "Why does disk performance of my C:\ drive get so bad, or worse, why does the drive fill up, when I start a large crawl?").
SharePoint 2013 crawl error: "Processing this item failed because of an unknown error when trying to parse its contents"
There is a crawling error in the Crawl log of a SharePoint 2013 environment: Processing this item failed because of an unknown error when trying to parse its contents. (Error parsing document ‘http://sharepoint.contoso.com/Project/abcd/Q_M/ABX/SitePages/Homepage.aspx’. Sandbox worker pool is closed.; ; SearchID =… Read More ›
Partial index reset of a single content source. SharePoint will rebuild the index for these sources when the next full crawl is started.
SharePoint Crawl Log error: "The SharePoint item being crawled returned an error when attempting to download the item" (for example on .aspx files). Solution: open Regedit on your search server(s) and navigate to this registry key: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Office Server\14.0\Search\Global\Gathering Manager. Change… Read More ›
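Before changing anything under the Gathering Manager key, it can help to inspect its current values. This is a minimal sketch, assuming a Windows search server with the registry hive present; the excerpt does not say which value to change, so this only enumerates what is there. The helper name is hypothetical, not part of SharePoint.

```python
# Hedged sketch: enumerate the values under the Gathering Manager registry
# key referenced above. Uses only the Python standard library (winreg).
# The exact value to modify is truncated in the post, so nothing is written.
KEY_PATH = r"SOFTWARE\Microsoft\Office Server\14.0\Search\Global\Gathering Manager"


def list_gathering_manager_values():
    """Return a dict of value-name -> data for the Gathering Manager key.

    Hypothetical helper for inspection only; raises OSError if the key
    does not exist (e.g. on a server without SharePoint Search installed).
    """
    import winreg  # Windows-only standard library module

    values = {}
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        index = 0
        while True:
            try:
                name, data, _value_type = winreg.EnumValue(key, index)
            except OSError:
                break  # no more values under this key
            values[name] = data
            index += 1
    return values
```

Back up the key (File > Export in Regedit) before editing, since Gathering Manager settings affect all crawls on that server.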