When gathering files from a content source, the SharePoint 2013 Crawl Component can be a very I/O-intensive process: it writes every file it gathers from the content repositories to its local temporary file paths, where the Content Processing Component then reads them during document parsing. This post can help you understand where the Crawl Components write their temporary files, which helps with planning and performance troubleshooting (e.g. "Why does disk performance on my C:\ drive get so bad – or worse, why does the drive fill up – when I start a large crawl?").
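As a starting point for this kind of troubleshooting, you can list the crawl components and the servers they run on with the standard search topology cmdlets. A minimal sketch, assuming a single Search Service Application and the SharePoint 2013 Management Shell:

```powershell
# Get the (first) Search Service Application in the farm
$ssa = Get-SPEnterpriseSearchServiceApplication

# List all components in the active topology, keeping only the crawl components
Get-SPEnterpriseSearchTopology -SearchApplication $ssa -Active |
    Get-SPEnterpriseSearchComponent |
    Where-Object { $_.Name -like "Crawl*" } |
    Select-Object Name, ServerName
```

Knowing which servers host crawl components tells you which machines' local disks to watch while a large crawl is running.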
SharePoint Health Analyzer rules reference (SharePoint 2013)
I came across a situation where a user searching for documents with the "search in same site" option (instead of "all sites") in the search box got no results, even though the same documents could be found in another library within the same site.
Why does this happen?
The first explanation that comes to mind for a search problem like this is that the content has not been crawled, so it has not been indexed.
Yes, that's true – but we need to ask why.
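If the root cause really is missing crawl coverage, one way to get the content (re)indexed is to start a full crawl of the relevant content source. A hedged sketch, assuming the default content source name "Local SharePoint sites":

```powershell
# Get the Search Service Application and the content source covering the site
$ssa = Get-SPEnterpriseSearchServiceApplication
$cs  = Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $ssa `
           -Identity "Local SharePoint sites"

# Kick off a full crawl of that content source
$cs.StartFullCrawl()

# Check progress; the crawl is finished when this returns Idle
$cs.CrawlState
```

After the crawl completes, re-test the "search in same site" query and check the crawl log for errors on the affected library before assuming the index is complete.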
During some search troubleshooting I came across the following crawl error in the crawl log of a SharePoint 2013 environment: Processing this item failed because of an unknown error when […]
If your Usage and Health Data Collection Proxy is in a stopped state, here is a quick bit of PowerShell to get it started: $sap = Get-SPServiceApplicationProxy | where-object […]
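The snippet above is truncated before the filter and the call that starts the proxy. A commonly used variant looks like the following – a sketch, where the TypeName filter is an assumption about what the elided filter contained:

```powershell
# Find the Usage and Health Data Collection proxy
# (the TypeName wildcard match is an assumption)
$sap = Get-SPServiceApplicationProxy |
    Where-Object { $_.TypeName -like "Usage and Health*" }

# Provision() brings a stopped service application proxy back online
$sap.Provision()

# Confirm the proxy is now started
$sap.Status
```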
What you can do if your SharePoint is sometimes very slow – e.g. on the first start of a site. Sometimes during the day a search query will take about a […]
Error: SharePoint crawl log error: "The SharePoint item being crawled returned an error when attempting to download the item" – for example on .aspx files. Solution: 1. Open Regedit on your search server(s) […]
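The registry steps in this post are truncated, so the exact key it sets is unknown. One registry change commonly documented for this crawl error (the crawler being blocked by the loopback check when it downloads items from a site that resolves back to the same server) is DisableLoopbackCheck; treating that as the key the post refers to is an assumption, and the more targeted BackConnectionHostNames list is usually preferred in production:

```powershell
# Disable the Windows loopback check on the search server.
# Assumption: this is the key the truncated post sets.
# Requires an elevated shell; a reboot or IISRESET may be needed to take effect.
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Lsa" `
    -Name "DisableLoopbackCheck" -Value 1 -PropertyType DWord -Force
```

After the change, run a fresh crawl of the affected content source and re-check the crawl log for the same items.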