When gathering files from a content source, the SharePoint 2013 Crawl Component can be a very I/O-intensive process: it writes all of the files it gathers from content repositories to its temporary file paths on local disk, and the Content Processing Component then reads them back during document parsing. This post can help you understand where the Crawl Components write temporary files, which can help with capacity planning and performance troubleshooting (e.g. "Why does disk performance of my C:\ drive get so bad – or worse, why does the drive fill up – when I start a large crawl?").
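As a starting point for that kind of troubleshooting, it helps to know which servers host which search components. A minimal sketch (assuming a SharePoint 2013 farm and the SharePoint Management Shell; `$ssa` and `$topo` are just local variable names):

```powershell
# List the components of the active search topology and the
# servers hosting them, so you know which machines (and disks)
# crawling and content processing will hit. Illustrative only;
# requires the SharePoint 2013 Management Shell.
$ssa  = Get-SPEnterpriseSearchServiceApplication
$topo = Get-SPEnterpriseSearchTopology -SearchApplication $ssa -Active
Get-SPEnterpriseSearchComponent -SearchTopology $topo |
    Select-Object Name, ServerName |
    Sort-Object ServerName
```

Crawl components in the output identify the servers whose local temporary paths will see the heaviest write traffic during a large crawl.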
I came across a situation where a user searching for documents with the "search in same site" option selected (instead of "all sites") in the search box got no results, even though documents in another library within the same site could be found.
Why does this happen?
The first explanation that comes to mind for a search failure is that the content has not been crawled or indexed.
Yes, that is true, but we need to ask why.
Processing this item failed because of an unknown error when trying to parse its contents – crawl error, SharePoint 2013
There is a crawl error in the crawl log of a SharePoint 2013 environment: "Processing this item failed because of an unknown error when trying to parse its contents. (Error parsing document ‘http://sharepoint.contoso.com/Project/abcd/Q_M/ABX/SitePages/Homepage.aspx’. Sandbox worker pool is closed.; SearchID =…" Read More ›
Partial index reset of a single content source: this script will remove and re-add your content source’s start addresses. SharePoint will more or less rebuild the index for these sources when the next full crawl is started. $sourceName = “Local… Read More ›
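The excerpt above is truncated, but the idea it describes – snapshot the start addresses, clear them, and add them back – can be sketched as follows. This is an assumption-laden illustration, not the original script; the content source name in `$sourceName` is hypothetical, and it should be tested in a non-production farm first:

```powershell
# Remove and re-add the start addresses of one content source so
# that the next full crawl rebuilds its portion of the index.
# Hypothetical content source name; replace with your own.
$sourceName = "Local SharePoint sites"

$ssa    = Get-SPEnterpriseSearchServiceApplication
$source = Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $ssa -Identity $sourceName

# Snapshot the current start addresses before clearing them.
$addresses = @($source.StartAddresses | ForEach-Object { $_ })

$source.StartAddresses.Clear()
$addresses | ForEach-Object { $source.StartAddresses.Add($_) }
$source.Update()
```

After the update, start a full crawl of the content source to rebuild its index entries.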
SharePoint crawl log error: "The SharePoint item being crawled returned an error when attempting to download the item" (for example, on .aspx files). Solution: open Regedit on your search server(s) and navigate to this registry key: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Office Server\14.0\Search\Global\Gathering Manager. Change… Read More ›