As you can see, there are lots of content, links and headings there. Now let's check its source code.
Apart from scripts, there is hardly anything in it.
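To make this concrete, here is roughly what "view source" of a typical JS SPA shell looks like (a hypothetical example; file names and IDs are illustrative):

```javascript
// What "view source" of a typical JS SPA shell often looks like
// (hypothetical markup; file names are illustrative):
const spaShellHtml = `
<!doctype html>
<html>
  <head><title>Example SPA</title></head>
  <body>
    <div id="root"></div> <!-- all content is injected here by JS -->
    <script src="/static/app.js"></script>
  </body>
</html>`;

// There are no links, headings or text for Googlebot to parse here.
console.log(spaShellHtml.includes('<a href')); // false — no crawlable links
```

Everything the user eventually sees is injected into that empty `div` by the script, which is exactly why the raw source looks so bare.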
So, when Googlebot crawls this website, it can only parse the HTML DOM. How does Google crawl and index websites where the content and links are missing from the HTML DOM but visible to the user in the browser? This is how Google does it.
After rendering the page, Google puts it in its index for rankings.
So, the first step is crawling and the next is rendering. But why put parsed pages into a render queue instead of rendering them immediately after they are parsed? Because rendering JavaScript is resource-intensive, so Google defers it until rendering resources become available.
Core Content Rendered Late
Besides this, Google is able to rank a web page only after it comes out of the render queue. This adds delay to indexing and ranking, which you definitely don't want in SEO.
Internal Links Not Crawled
Another issue with JS-powered websites is that their internal links are not crawlable. We already know that internal link discovery is important for Google to crawl and index web pages from your website.
As links are injected dynamically into the HTML via JS code, none of the internal links are available to search bots initially, which means no further crawling. Google recommends using anchor tags with href attributes for links, but JS websites often render links without them.
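A sketch of the difference, using hypothetical helper functions: the first markup gives Googlebot an href to follow, the second leaves nothing to crawl.

```javascript
// Hypothetical helpers contrasting crawlable and non-crawlable links.

// Crawlable: a real anchor with an href — Googlebot can discover the URL.
const crawlableLink = (url, text) => `<a href="${url}">${text}</a>`;

// Not crawlable: a click handler only — there is no href to follow.
const jsOnlyLink = (url, text) =>
  `<span onclick="navigate('${url}')">${text}</span>`;

console.log(crawlableLink('/pricing', 'Pricing'));
// → <a href="/pricing">Pricing</a>
```

Even if the second link works perfectly for users, a bot parsing the HTML sees no URL to queue for crawling.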
Poor Page Speed
JS websites are quite heavy because they ship a lot of JavaScript, which impacts page speed scores negatively. We have already talked about how to optimise JS for good page speed. Here are some things to note:
- Defer non-critical JS code
- Defer third-party scripts
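As a small illustration of the first point, a helper like the following (hypothetical, not from any framework) could emit the `defer` attribute on non-critical bundles so they do not block HTML parsing:

```javascript
// Hypothetical helper: emit a <script> tag, adding `defer` for
// non-critical bundles so they do not block HTML parsing.
function scriptTag(src, { critical = false } = {}) {
  return critical
    ? `<script src="${src}"></script>`
    : `<script src="${src}" defer></script>`;
}

console.log(scriptTag('/static/analytics.js'));
// → <script src="/static/analytics.js" defer></script>
```

Deferred scripts download in parallel and execute only after the document has been parsed, so first render is not held up by them.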
No Unique Metadata
To understand this problem, let's first see how a SPA works:
- When you visit a JS SPA website, you send a request to the server. The server returns the HTML files to your browser.
- The browser renders the web page and you are able to interact with it.
- When you click on a new URL, the request to the server is intercepted by the JS framework. It reorganises components like the header and footer, makes some API calls, and eventually renders the new content in your browser. In the process, the framework also changes the URL, ONLY IN THE BROWSER!
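The steps above can be sketched as a toy client-side router (all names are illustrative; real frameworks do much more). In a browser you would pass `window.history`; here the history object is injected so the sketch stays self-contained:

```javascript
// Toy sketch of SPA client-side routing (all names illustrative).
// A click is intercepted, the matching view renders locally, and only
// the browser URL changes via pushState — no request for a new HTML page.
function createRouter(routes, history) {
  return {
    navigate(path) {
      history.pushState({}, '', path); // URL changes ONLY in the browser
      const view = routes[path];       // look up the view for this path
      return view ? view() : 'Not found';
    },
  };
}

// In a browser: const router = createRouter(routes, window.history);
```

Nothing in `navigate` contacts the server for a new HTML document, which is precisely why the bot never receives fresh pages for these URLs.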
As you can see, when you visit different URLs you are not sending requests to the server and receiving new files; instead, all the content is updated locally in your browser. That's why they are called Single Page Applications (SPAs).
So, when you visit different views in a SPA, each view might not have unique metadata like a title and description, since you are not actually changing URLs and fetching new files. You remain on the same page while components are reorganised to show new content and the URL is changed client-side. This is another problem with JS websites.
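One common mitigation is to maintain per-view metadata and apply it on every navigation. A minimal sketch, assuming made-up routes and titles:

```javascript
// Sketch: per-view metadata a SPA must set on every navigation
// (routes and titles here are made up). Without this, every view
// shares the title and description of the initial HTML shell.
const meta = {
  '/':        { title: 'Home | Example',    description: 'Welcome page' },
  '/pricing': { title: 'Pricing | Example', description: 'Plans and prices' },
};

function metaFor(path) {
  // Fall back to the shell's defaults when a route has no entry.
  return meta[path] || { title: 'Example', description: 'Example site' };
}

// In the browser, a router would then do:
//   document.title = metaFor(path).title;
```

Most SPA frameworks offer plugins for exactly this job, but the underlying idea is the same: metadata must be updated explicitly on each client-side navigation.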
As you have seen, there are a number of problems we need to address in JS-powered websites. How do we deal with them? Let's learn that in the next guide, where we will address each of these problems and also see what Google has to say about them.
The Search Engine Code Team comprises SEO experts and strategists with more than 20 years of combined experience. We keep testing and sharing SEO knowledge with the SEO community.