With the Knowledge Graph, we’re continuing to go beyond keyword matching to better understand the people, places and things you care about. To do this, we organize not only information about webpages but other types of information too. Today, Google Search can help you search text from millions of books from major libraries, find travel times from your local public transit agency, or navigate data from public sources like the World Bank.
If your site simply has too many URLs, Google might crawl a lot of them, but it will never cover them all. This can happen because of faceted search navigation, for instance, or another system on your site that simply generates too many URLs. To figure out whether this is the case for you, it’s wise to crawl your own site regularly. You can do that manually with Screaming Frog’s SEO Spider, or with a tool like Ryte.
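To make the idea concrete, here is a minimal sketch of the kind of same-site crawl such tools automate: follow internal links breadth-first and count the unique URLs you discover. All function names are illustrative, not part of any real tool’s API, and a real audit crawler would also respect robots.txt and handle redirects.

```python
# Minimal same-site crawler sketch for spotting URL bloat.
# Illustrative only; real tools (Screaming Frog, Ryte) do far more.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urldefrag, urlparse
from urllib.request import urlopen


class LinkParser(HTMLParser):
    """Collects href values from <a> tags."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def extract_links(html, base_url):
    """Return absolute, fragment-free URLs found in the page."""
    parser = LinkParser()
    parser.feed(html)
    return {urldefrag(urljoin(base_url, href))[0] for href in parser.links}


def crawl(start_url, max_pages=100):
    """Breadth-first crawl limited to the start URL's host.

    Returns the set of unique same-host URLs discovered; a very
    large result relative to your real page count suggests a URL
    generator such as faceted navigation.
    """
    host = urlparse(start_url).netloc
    seen, queue = {start_url}, deque([start_url])
    while queue and len(seen) < max_pages:
        url = queue.popleft()
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
        except OSError:
            continue  # skip unreachable pages
        for link in extract_links(html, url):
            if urlparse(link).netloc == host and link not in seen:
                seen.add(link)
                queue.append(link)
    return seen
```

Comparing `len(crawl("https://example.com"))` against the number of pages you actually intend to publish is one quick way to see whether faceted navigation is multiplying your URL space.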
Even if you submit a URL to Google, their AI not only needs to classify and register the data on your website, it also has to evaluate whether your content is good and relevant for competitive keywords, or just an unintelligible spam fest pumped full of backlinks. This means it’s possible your website was crawled, but the indexing process is still underway.
Web indexing (or Internet indexing) refers to methods for indexing the contents of a website or of the Internet as a whole. Individual websites or intranets may use a back-of-the-book index, while search engines usually use keywords and metadata to provide a more useful vocabulary for Internet or onsite searching. With the increase in the number of periodicals that have articles online, web indexing is also becoming important for periodical websites.[1]
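The keyword-based indexing described above is usually implemented as an inverted index: a mapping from each term to the set of documents that contain it. The following toy sketch (my own illustration, not any engine’s actual implementation) shows the core idea with naive whitespace tokenization.

```python
# Toy inverted index: term -> set of document IDs containing it.
# Real search engines add stemming, metadata fields, and ranking.
from collections import defaultdict


def build_index(docs):
    """docs: {doc_id: text}. Returns the inverted index."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index


def search(index, query):
    """Return doc_ids containing every query term (AND semantics)."""
    postings = [index.get(term, set()) for term in query.lower().split()]
    return set.intersection(*postings) if postings else set()
```

A back-of-the-book index for a single website works the same way in miniature, except that a human editor chooses the terms instead of tokenizing every word.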