Crawl Depth: Crawl depth refers to how many links deep into a website a search engine travels from the starting page (usually the homepage) when crawling and indexing the site. Crawling is the process by which the bots or other software that search engines use discover pages and add them to search results.
Because an individual website may have thousands of different pages, search engines may or may not reach all of those pages, depending on the site's robots.txt file and sitemap. To be fully indexed, a website should make all of its pages available for crawling. This is especially crucial for pages with more obscure information, since those pages depend on traffic from long-tail keyword searches.
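As a rough illustration of how robots.txt gates what crawlers can reach, the short Python sketch below uses the standard library's urllib.robotparser to check whether a crawler is allowed to fetch a deep page. The domain, path, and user agent here are made-up examples, not any real site's configuration.

```python
from urllib.robotparser import RobotFileParser

# Point the parser at the (hypothetical) site's robots.txt file.
robots = RobotFileParser()
robots.set_url("https://www.example.com/robots.txt")
robots.read()  # fetch and parse the rules

# Check whether a typical search engine crawler may fetch a deep page.
user_agent = "Googlebot"
deep_page = "https://www.example.com/articles/obscure-topic.html"

if robots.can_fetch(user_agent, deep_page):
    print("This page is open to crawlers and can be indexed.")
else:
    print("robots.txt blocks this page, so crawlers will never index it.")
```

If can_fetch returns False for pages the site wants in search results, those pages are effectively cut off from crawlers no matter how deep or shallow they sit in the link structure.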
To get deeply indexed, a site needs quality links and an accurate sitemap. Webmasters also need to be wary of duplicate content on their websites; it can push rankings down or get a page eliminated from the index altogether.
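One simple way a webmaster might spot exact duplicates across their own pages is to hash each page's normalized text and compare fingerprints. The sketch below is only illustrative; the URLs and page bodies are invented, and it catches identical text only, not near-duplicates.

```python
import hashlib

def content_fingerprint(page_text: str) -> str:
    """Return a hash of the page's text with whitespace and case normalized."""
    normalized = " ".join(page_text.split()).lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Hypothetical page bodies pulled from the same site.
pages = {
    "/printer-friendly/widget": "The Acme Widget is our best seller.",
    "/products/widget":         "The Acme Widget is our best seller.",
    "/products/gadget":         "The Acme Gadget pairs well with the Widget.",
}

seen = {}
for url, text in pages.items():
    fingerprint = content_fingerprint(text)
    if fingerprint in seen:
        print(f"Duplicate content: {url} matches {seen[fingerprint]}")
    else:
        seen[fingerprint] = url
```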
A site with considerable crawl depth is something like a wiki (e.g. Wikipedia). Because Wikipedia has so many pages, each page must be adequately linked to the site's other pages in order to get picked up by search engines. Pages that are "orphaned," so to speak, may not be indexed if a search crawler can't reach them from other parts of the website. This is one reason an accurate sitemap is crucial for indexing purposes; a sketch of how depth and orphaned pages can be measured follows below.
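To make the idea concrete, the Python sketch below walks a toy, in-memory link graph breadth-first from the homepage, recording each page's link depth and flagging pages no internal link reaches. The URLs and link structure are assumptions for the example, not a real crawler.

```python
from collections import deque

# Toy link graph: each page maps to the pages it links to (hypothetical URLs).
site_links = {
    "/": ["/about", "/articles"],
    "/about": ["/"],
    "/articles": ["/articles/seo-basics", "/articles/crawl-depth"],
    "/articles/seo-basics": ["/articles"],
    "/articles/crawl-depth": ["/"],
    "/articles/forgotten-draft": [],  # no other page links here
}

def crawl_depths(links, start="/"):
    """Breadth-first walk from the homepage, recording each page's link depth."""
    depths = {start: 0}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for target in links.get(page, []):
            if target not in depths:
                depths[target] = depths[page] + 1
                queue.append(target)
    return depths

depths = crawl_depths(site_links)
orphans = set(site_links) - set(depths)

for page, depth in sorted(depths.items(), key=lambda item: item[1]):
    print(f"depth {depth}: {page}")
print("Orphaned (unreachable from the homepage):", orphans or "none")
```

In this toy graph, "/articles/forgotten-draft" is orphaned: a crawler following links from the homepage never finds it, which is exactly the gap a sitemap entry would close.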
This differs from crawl frequency, which refers to how often a website is crawled by search engines.