Working with the Crawl Log

The crawl log tracks information about the status of crawled content. This log allows you to verify whether crawled content was added to the index successfully, whether it was excluded because of a crawl rule, or whether indexing failed because of an error. Additional information about the crawled content is also logged, including the time of the last successful crawl, the content source (there could be more than one), the content access account used, and whether any crawl rules were applied.

You can also apply filters to the crawl log to control which data is displayed. Filtering makes working with the crawl log more manageable, because you can display only the data that you are interested in instead of scanning the entire log.

Crawl Log Object Model

You can find the crawl log classes in the Microsoft.Office.Server.Search.Administration namespace, located in Microsoft.Office.Server.Search.dll.

You use the LogViewer object to retrieve the crawl log data. The MaxDaysCrawlLogged property of the LogViewer object allows you to set the maximum number of days that the crawl log keeps data.
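
For example, the following console application sketch creates a LogViewer and reads the MaxDaysCrawlLogged value. The site URL is a placeholder, and the sketch assumes a SharePoint Server 2007 farm where SearchContext.GetContext can resolve the search context from a site.

  using System;
  using Microsoft.SharePoint;
  using Microsoft.Office.Server.Search.Administration;

  class CrawlLogRetentionExample
  {
      static void Main()
      {
          // Placeholder URL; use a site in your own farm.
          using (SPSite site = new SPSite("http://MyServer"))
          {
              // Get the search context for the Shared Services Provider
              // associated with the site.
              SearchContext context = SearchContext.GetContext(site);

              // LogViewer provides access to the crawl log data.
              LogViewer logViewer = new LogViewer(context);

              // Report how many days of crawl log data are retained.
              Console.WriteLine("Crawl log data is kept for {0} days.",
                  logViewer.MaxDaysCrawlLogged);
          }
      }
  }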

Crawl Log Data

To filter the data that is retrieved from the crawl log, you use the CrawlLogFilters object, which contains all of the filters used for this purpose. This object provides an AddFilter method with four overloads, allowing you to add filters for the following (a usage sketch follows the list):

  • All integer properties (such as StartAt, TotalEntries, and MessageId)

  • Log time

  • Message type

  • URL
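
The following sketch builds on the previous example. It combines the URL overload and an integer property overload, and then retrieves the filtered data by calling the LogViewer object's GetCurrentCrawlLogData method, which returns a DataTable. The URL and the 50-entry limit are placeholder values; confirm the exact enumeration member names (CrawlLogFilterProperty and StringFilterOption are shown here) against the reference topics listed under See Also.

  using System;
  using System.Data;
  using Microsoft.SharePoint;
  using Microsoft.Office.Server.Search.Administration;

  class CrawlLogFilterExample
  {
      static void Main()
      {
          using (SPSite site = new SPSite("http://MyServer"))
          {
              SearchContext context = SearchContext.GetContext(site);
              LogViewer logViewer = new LogViewer(context);

              // Collect the filters to apply to the crawl log query.
              CrawlLogFilters filters = new CrawlLogFilters();

              // URL overload: match entries for a specific address.
              filters.AddFilter(new Uri("http://MyServer/Docs"),
                  StringFilterOption.ExactMatch);

              // Integer property overload: return at most 50 entries.
              filters.AddFilter(CrawlLogFilterProperty.TotalEntries, 50);

              // Retrieve the filtered crawl log entries as a DataTable.
              // The out parameter receives the index at which the next
              // page of results starts.
              int nextStart;
              DataTable crawlLog =
                  logViewer.GetCurrentCrawlLogData(filters, out nextStart);

              Console.WriteLine("Entries returned: {0}",
                  crawlLog.Rows.Count);
          }
      }
  }

The out parameter is intended to support paging: passing its value back as the StartAt filter on a subsequent call retrieves the next page of crawl log entries.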

See Also

Reference

Microsoft.Office.Server.Search.Administration.LogViewer
Microsoft.Office.Server.Search.Administration.CrawlLogFilters

Concepts

Managing Content
Getting Started with the Enterprise Search Administration Object Model