We occasionally see questions about what JS-based sites can do and still be visible in search, so here’s a brief summary of where things stand today:
#1 Don’t cloak to Googlebot
- Use “feature detection” & “progressive enhancement” techniques to make your content available to all users.
- Avoid redirecting to an “unsupported browser” page. Consider using a polyfill or other safe fallback where needed (a sketch follows this list).
- The features Googlebot currently doesn’t support include Service Workers, the Fetch API, Promises, and requestAnimationFrame.
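Here’s a minimal sketch of that pattern in TypeScript, assuming a hypothetical renderApp() and an illustrative polyfill bundle path; the idea is to detect missing features and fill them in, rather than turning clients away:

```ts
// A minimal sketch of feature detection with a safe fallback.
declare function renderApp(): void; // assumed to be defined elsewhere in the app

function hasRequiredFeatures(): boolean {
  // Detect the features the page actually relies on; don't sniff user agents.
  return (
    typeof Promise !== "undefined" &&
    typeof window.fetch === "function" &&
    typeof window.requestAnimationFrame === "function"
  );
}

function loadPolyfillsThen(callback: () => void): void {
  // Load a polyfill bundle only when something is missing (path is illustrative).
  const script = document.createElement("script");
  script.src = "/static/polyfills.js";
  script.onload = callback;
  document.head.appendChild(script);
}

if (hasRequiredFeatures()) {
  renderApp();
} else {
  // Fill the gaps instead of redirecting to an "unsupported browser" page.
  loadPolyfillsThen(renderApp);
}
```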
#2 Use rel=canonical when you need to serve the same content from multiple URLs
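For a purely client-rendered page, one option is to inject or update the canonical link element at render time. A small sketch with an illustrative preferred URL (server-rendered markup is still the more robust option where possible):

```ts
// Set or update the canonical URL from a client-rendered app.
function setCanonical(preferredUrl: string): void {
  // Reuse an existing canonical link element if the template already has one.
  let link = document.querySelector<HTMLLinkElement>('link[rel="canonical"]');
  if (!link) {
    link = document.createElement("link");
    link.rel = "canonical";
    document.head.appendChild(link);
  }
  link.href = preferredUrl;
}

// e.g. point session- or filter-specific URLs at the clean product URL
setCanonical("https://www.example.com/products/widget");
```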
#3 Avoid the AJAX-Crawling scheme on new sites
Consider migrating old sites that use this scheme soon. Remember to remove “meta fragment” tags when migrating. Don’t use a “meta fragment” tag if the “escaped fragment” URL doesn’t serve fully rendered content.
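As a quick sanity check during such a migration, here’s a small sketch that flags pages still carrying the legacy tag (the tag itself is the standard `<meta name="fragment" content="!">` form):

```ts
// Warn if the legacy "meta fragment" tag is still present after migration.
const fragmentMeta = document.querySelector<HTMLMetaElement>('meta[name="fragment"]');
if (fragmentMeta && fragmentMeta.content === "!") {
  console.warn('Legacy <meta name="fragment" content="!"> tag found; remove it once migration is complete.');
}
```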
#4 Avoid using “#” in URLs (outside of “#!”)
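Since crawlers generally ignore everything after the “#”, client-side routing is safer on real paths via the History API. A minimal sketch, assuming a hypothetical renderRoute() function:

```ts
// Route on real paths with the History API instead of "#" fragments.
declare function renderRoute(path: string): void; // assumed to exist elsewhere in the app

function navigate(path: string): void {
  history.pushState({}, "", path); // update the address bar with a crawlable path
  renderRoute(path);               // draw the view for that path
}

// Re-render on back/forward navigation.
window.addEventListener("popstate", () => renderRoute(location.pathname));

// e.g. navigate("/products/widget") instead of location.hash = "#/products/widget"
```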
#5 Check your robots.txt file
Make sure the JavaScript, CSS, and other resources needed to render your pages aren’t blocked from crawling.
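A rough, illustrative way to spot-check this (not a full robots.txt parser; the path prefixes are assumptions about a typical setup):

```ts
// Flag Disallow rules that might block script/style resources needed for rendering.
async function checkRobots(origin: string): Promise<void> {
  const response = await fetch(`${origin}/robots.txt`);
  const lines = (await response.text()).split("\n");
  const riskyPrefixes = ["/js", "/css", "/static", "/assets", ".js", ".css"]; // illustrative
  for (const line of lines) {
    const rule = line.trim().toLowerCase();
    if (rule.startsWith("disallow:") && riskyPrefixes.some((p) => rule.includes(p))) {
      console.warn(`Possible rendering blocker: ${line.trim()}`);
    }
  }
}

checkRobots("https://www.example.com").catch(console.error);
```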
#6 Limit the number of embedded resources
In particular, limit the number of JavaScript files and server responses required to render your page; too many can lead to timeouts and incomplete rendering.
When using AMP, the AMP HTML page must be static as required by the spec, but the associated web page can be built using JS/PWA techniques. Remember to use a sitemap file with correct “lastmod” dates to signal changes on your website.
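A small sketch of generating such a sitemap, with illustrative URLs and dates:

```ts
// Emit a sitemap with per-URL "lastmod" dates (entries and output path are illustrative).
import { writeFileSync } from "node:fs";

interface SitemapEntry {
  loc: string;
  lastmod: string; // W3C date format, e.g. "2016-03-18"
}

function buildSitemap(entries: SitemapEntry[]): string {
  const urls = entries
    .map((e) => `  <url>\n    <loc>${e.loc}</loc>\n    <lastmod>${e.lastmod}</lastmod>\n  </url>`)
    .join("\n");
  return `<?xml version="1.0" encoding="UTF-8"?>\n` +
    `<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n${urls}\n</urlset>\n`;
}

writeFileSync("sitemap.xml", buildSitemap([
  { loc: "https://www.example.com/", lastmod: "2016-03-18" },
  { loc: "https://www.example.com/products/widget", lastmod: "2016-03-10" },
]));
```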
Some useful links:
- Progressive Web Apps: https://developers.google.com/web/progressive-web-apps
- Progressive enhancement: https://en.wikipedia.org/wiki/Progressive_enhancement
- rel=canonical: https://support.google.com/webmasters/answer/139066
- AJAX-Crawling scheme: https://developers.google.com/webmasters/ajax-crawling/docs/specification