In this guide, we will look at what JavaScript-powered websites are and the various SEO issues associated with them. In the next guide, we will learn how to deal with such websites for optimal SEO performance.
What Are JavaScript-Powered Websites
JavaScript websites are websites whose core, primary content is injected into the DOM via JavaScript. On traditional websites, all the primary content is present in the HTML response, which Googlebot parses without any problem.
On JavaScript websites, however, the main content is not present in the HTML when Googlebot crawls it. Let's look at an example. Angular.io is a JavaScript-powered website built on the popular JS framework Angular. Below is what you see when you view the page in the browser.
As you can see, there is plenty of content, links and headings there. Let's check its source code.
As you can see, there is not much there apart from scripts.
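To make this concrete, here is a minimal, hypothetical sketch of how such a page works. The element id and the injected markup below are made up for illustration; the point is that the server's HTML ships little more than an empty container and a script, and the visible content only exists after the browser executes the JavaScript.

```js
// Hypothetical client-side rendering: the HTML response contains only an
// empty <div id="app"></div> and a <script> tag. Everything the user sees
// is created here, after the JavaScript runs in the browser.
document.addEventListener('DOMContentLoaded', () => {
  const app = document.getElementById('app');
  app.innerHTML = `
    <h1>The modern web developer's platform</h1>
    <p>Build fast, reliable applications that scale.</p>
    <a href="/docs">Get started</a>
  `;
});
```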
So, when Googlebot crawls this website, it can only parse that near-empty HTML. How does Google crawl and index websites where there is no content and no links in the initial HTML, even though everything is visible to the user in the browser? Here is how Google does it.
Indexing & Ranking of JavaScript Websites
Google divides the indexing process into two steps. First, it crawls the HTML page as it normally would. Next, it places the crawled page in its render queue. Later, Google picks the page up from the render queue and renders its core content by executing the JavaScript against the DOM.
After rendering the page, Google puts it in its index for ranking.
So, the first step is crawling and the next is rendering.
But why keep crawled pages in a render queue instead of rendering them immediately after they are parsed?
The reason is that JavaScript execution takes a lot of time, and with limited resources Google cannot assign rendering resources to every page immediately. Instead, it assigns resources as they become available; until then, pages have to wait in the render queue.
How Does JavaScript Affect SEO?
Core Content Only Available After Rendering
As we have seen, the initial HTML of JavaScript-powered websites is devoid of meaningful content; the content is only injected once the JS is executed. This means the content is not available to search bots when they first crawl the page, and in SEO that is a serious problem.
Besides this, Google can rank a web page only after it has come out of the render queue and been rendered. This adds a delay to indexing and ranking, which you definitely don't want in SEO.
Internal Links Not Crawled
Another issue with JS-powered websites is that internal links may not be crawlable. We already know that internal link discovery is important for Google to crawl and index pages across your website.
Because links are injected into the HTML dynamically via JS, the internal links are not available to search bots initially, which means no further crawling. Google recommends using <a> elements with href attributes for links, but JS websites often trigger navigation through click handlers instead.
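To illustrate the difference, here is a small sketch. The route name and the renderPricingView helper are hypothetical; the first link is a real anchor Googlebot can follow, while the second triggers navigation purely from a click handler, leaving nothing for the crawler to discover.

```js
// Crawlable: a real <a> element with an href attribute that Googlebot can follow.
const crawlable = document.createElement('a');
crawlable.href = '/pricing';
crawlable.textContent = 'Pricing';
document.body.appendChild(crawlable);

// Hard to crawl: navigation driven only by a click handler and client-side
// routing, with no href for the crawler to discover.
const renderPricingView = () => { /* hypothetical: swap in the pricing view */ };
const notCrawlable = document.createElement('span');
notCrawlable.textContent = 'Pricing';
notCrawlable.addEventListener('click', () => {
  window.history.pushState({}, '', '/pricing'); // URL changes only in the browser
  renderPricingView();
});
document.body.appendChild(notCrawlable);
```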
Page Speed
JS websites tend to be heavy because they ship a lot of JavaScript, which impacts page speed scores negatively. We have already talked about how we can optimise JS for good page speed. Here are some things to note (a small sketch of deferring a third-party script follows the list):
- Minify JavaScript
- Compress JavaScript
- Defer Non-Critical JS Code
- Defer Third-Party Scripts
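As one example of the last point, here is a minimal sketch of loading a non-critical third-party script only after the page has finished loading. The URL is a placeholder, not a real endpoint.

```js
// Minimal sketch: inject a non-critical third-party script after the 'load'
// event, so it doesn't compete with the page's critical rendering work.
window.addEventListener('load', () => {
  const script = document.createElement('script');
  script.src = 'https://third-party.example/analytics.js'; // placeholder URL
  document.body.appendChild(script); // dynamically injected scripts execute async by default
});
```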
Metadata Problems
JavaScript has given rise to something we call Single Page Applications (SPAs). These are web apps that essentially contain only a single page or view, yet to the user it seems there are many. Here is how that works.
- When you visit a JS SPA website, your browser sends a request to the server, and the server returns the HTML and JS files to your browser.
- The browser renders the web page and you are able to interact with it.
- When you click a link to a new URL, the request to the server is intercepted by the JS framework. It reorganises components such as the header and footer, makes some API calls, and eventually renders the new content in your browser. In the process, the framework also changes the URL ONLY IN THE BROWSER!

As you can see, when you visit different URLs you are not sending requests to the server and receiving new files; all the content is updated locally in your browser. That's why they are called SPAs.
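Here is a stripped-down, hypothetical sketch of that interception step. The routes, views and the `#app` container are made up for illustration; the framework listens for link clicks, swaps in new content locally, and rewrites the URL with the History API, without ever requesting a new HTML document.

```js
// Hypothetical mini-router: intercept clicks on internal links, render the new
// view locally, and rewrite the URL. No new HTML document is requested.
const views = {
  '/': '<h1>Home</h1>',
  '/about': '<h1>About us</h1>',
};

document.addEventListener('click', (event) => {
  const link = event.target.closest('a');
  if (!link) return;
  const path = link.getAttribute('href');
  if (!(path in views)) return; // let the browser handle other links normally
  event.preventDefault(); // stop the browser's own request to the server
  document.getElementById('app').innerHTML = views[path]; // content updated locally
  window.history.pushState({}, '', path); // URL changes only in the browser
});
```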
So, when you visit different views in a SPA, each page might not have unique metadata such as a title and description, since you are not actually changing URLs and fetching new files. You are on the same page the whole time; only the components are reorganised to give you new content, and the URL is changed client-side. This is another problem with JS websites.
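Nothing in a view swap touches the document's metadata by itself. A hypothetical illustration (the function, container id and product object are made up):

```js
// Swapping a view changes the visible content, but <title> and
// <meta name="description"> keep whatever the original HTML shipped with
// unless the app updates them explicitly for every route.
function showProduct(product) {
  document.getElementById('app').innerHTML = `<h1>${product.name}</h1>`;
  // The app would have to do something like this per view:
  // document.title = `${product.name} | Example Store`;
  // document.querySelector('meta[name="description"]').content = product.summary;
}
```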
As you have seen, there are a number of problems we need to address on JS-powered websites. How do we deal with them? Let's learn that in the next guide, where we will address each of these problems and also see what Google has to say about them.