I wasn’t sure how to ask the question, but basically, it’s a textbook scenario. I’m working on an article-based site where the article information is stored in a database. The page is then rendered with the information from the database based on the requested article id:
For example: http://www.mysite.com/articles/9851
I’m new to SEO, so I’m wondering how engines are able to crawl the contents of pages like this and/or what I need to do in order to ensure that it will be crawled.
So for instance, this site. All of the articles/posts on this site appear to live in a database somewhere. The URL has an ID which looks like it is used to tell the server which data to use to generate the page — so the page doesn’t actually exist somewhere, but its template does. When I search Google, I might find one of these posts based on the content of the post.
I understand that crawlers normally just find a page, follow its links, then follow its links’ links and so on, but how does that work when the site is database-driven like this? Do I have to create a page that randomly picks articles out of the database so that the crawler can see them, or something?
At its very simplest, search engines are just reading HTML. The web page at a certain URL is just an HTML file to a search engine, so it doesn’t know a database query is involved. It only knows that the HTML file contains text and links — which is an overly simplified explanation, but accurate enough for this scenario.
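To make that concrete, here is a minimal sketch (the article data, template, and function names are all illustrative, not your actual code) of what a server does for a URL like `/articles/9851`: it looks up the row by id and fills in an HTML template. The crawler only ever receives the final HTML string; the database lookup is invisible to it.

```python
# Stand-in for the real database table: maps article id -> row.
ARTICLES = {
    9851: {"title": "Example Article", "body": "Article text here."},
}

# The page "template" -- the only part of the page that exists on disk.
TEMPLATE = """<html>
<head><title>{title}</title></head>
<body><h1>{title}</h1><p>{body}</p></body>
</html>"""

def render_article(article_id: int) -> str:
    """Return the HTML a crawler would receive for /articles/<id>."""
    row = ARTICLES[article_id]
    return TEMPLATE.format(**row)

print(render_article(9851))
```

As far as the search engine is concerned, this output is indistinguishable from a static `.html` file sitting on the server.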
@RomanMik has the right idea. You need to review Google’s documentation, but regardless of the search engine, the process is the same:
- Submit your site to the search engine
- Your site needs links — lots of links, and links with descriptive text — to the rest of the pages on your site
- A robots.txt file in the root of your domain can help guide search indexers, usually by telling them to skip certain directories or file types
- Create fresh content and update the home page and section pages of your site to link to the new content
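For the robots.txt point above, a minimal example looks like this (the paths are illustrative; adjust them to whatever you actually want excluded):

```
User-agent: *
Disallow: /admin/
Disallow: /search/

Sitemap: http://www.mysite.com/sitemap.xml
```

The `Sitemap:` line is optional but useful: it points crawlers at a list of every URL you want indexed, which matters for database-driven pages that may not all be reachable by following links.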
A search indexer will start at your home page, index it, then it will move on to child pages in your site based on the links it finds in the document it is currently indexing.
Forget about thinking of search engines and your site in terms of your database. It’s all HTML to a search engine, and they need fresh content, descriptive content, and links to other pages. Do that and you’ve cleared the first major hurdle.
Since Google is the largest search engine, and all sites want to be crawled by Google, I suggest you start with Google’s Webmaster Tools page (https://www.google.com/webmasters/). It is full of helpful resources.
To answer your question: you submit a sitemap listing all the URLs that you want search engines to crawl and index (you can also reference it from your robots.txt file). Here’s a detailed answer about how to submit a sitemap to Google: https://support.google.com/webmasters/answer/183669?hl=en
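Since your article URLs come from the database anyway, the natural approach is to generate the sitemap from the same data. A sketch (the base URL and id list are illustrative stand-ins for your real table):

```python
# Build a sitemap.xml from the article ids in the database, so search
# engines can discover every article URL even if no page links to it.
BASE_URL = "http://www.mysite.com/articles/{}"

def build_sitemap(article_ids):
    """Return sitemap XML with one <url> entry per article id."""
    entries = "\n".join(
        "  <url><loc>{}</loc></url>".format(BASE_URL.format(i))
        for i in article_ids
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        + entries +
        "\n</urlset>"
    )

# In practice the ids would come from a SELECT on the articles table.
print(build_sitemap([9851, 9852]))
```

Regenerate and resubmit the file whenever new articles are published, and every article becomes discoverable regardless of your internal linking.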