SEMrush Technical SEO Exam Answers 2019 PDF
To check and document your level of technical SEO knowledge.
You will be asked 34 questions.
You are allowed 40 minutes for the exam.
To pass the exam, you need to score at least 70%.
Using the 503 status code with the Retry-After header
Using the HTTP status code 200
Using the noindex directive in your robots.txt file
Using the 500 status code with the Retry-After header
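For reference, the 503-with-Retry-After option above would look like this as a raw HTTP response during planned maintenance (a sketch; the one-hour retry window is an illustrative value):

HTTP/1.1 503 Service Unavailable
Retry-After: 3600
Content-Type: text/html; charset=UTF-8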
The method of the request (usually GET/POST)
The request URL
The server IP/hostname
The time spent on a URL
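All of these fields can be read straight off a standard access-log entry. A sample line in the common Apache/NGINX combined format (all values invented for illustration) shows the client IP, the timestamp, the method, the request URL, the status code, and the user agent:

66.249.66.1 - - [12/Mar/2019:06:25:24 +0000] "GET /category/page.html HTTP/1.1" 200 5124 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"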
True or false? It is recommended to work with log files constantly, making it a part of the SEO routine rather than doing one-off audits.
It is not a good idea to combine different data sources for deep analysis. It’s much better to concentrate on just one data source, e.g. log files
Combining data from log files and web crawls helps compare simulated and real crawler behavior
If you overlay your sitemap with your log files, you may see a lack of internal links that shows that the site architecture is not working properly
They have strong default geo-targeting features, e.g. .fr for France
They may be unavailable in different regions/markets
They need to be registered within the local market, which can make it expensive
Choose two HTTP response status codes that will work where there is any kind of geographical, automated redirect. We are talking about international requests from different geographical regions.
301 and 303
302 and 301
302 and 303
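Both codes in question are temporary redirects; on the wire, a geo-based 302 would look like this sketch (target URL hypothetical):

HTTP/1.1 302 Found
Location: http://example.com/fr/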
You have site versions for France and Italy and you set up two hreflangs for them. For the rest of your end-users you plan to use the English version of the site. Which directive will you use?
<link rel="alternate" href="http://example.com/" hreflang="x-default"/>
<link rel="alternate" href="http://example.com/en" hreflang="uk"/>
<link rel="alternate" href="http://example.com/en" hreflang="en-au"/>
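For the scenario in the question (French and Italian versions plus an English fallback for everyone else), the full annotation set might look like this sketch (URL paths hypothetical):

<link rel="alternate" href="http://example.com/fr/" hreflang="fr"/>
<link rel="alternate" href="http://example.com/it/" hreflang="it"/>
<link rel="alternate" href="http://example.com/en/" hreflang="en"/>
<link rel="alternate" href="http://example.com/en/" hreflang="x-default"/>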
True or false? The SEMrush Site Audit tool allows you only to define issues that slow down your website and does not give any recommendations on how to fix them.
Avoid using new modern formats like WebP
Increase the number of CSS files per URL
Proper compression & metadata removal for images
True or false? Pre-fetch and pre-render are especially useful when you do not depend on 3rd party requests or contents from a CDN or a subdomain.
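For context, these hints are plain link elements in the head; dns-prefetch is the variant aimed at third-party hosts such as a CDN or subdomain. A sketch (hostnames and URLs hypothetical):

<link rel="dns-prefetch" href="//cdn.example.com">
<link rel="prefetch" href="http://example.com/next-page.html">
<link rel="prerender" href="http://example.com/next-page.html">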
Fill in the blank. According to the latest statistics, 60% or more of all results for high volume keyword queries in the TOP-3 have already been moved over to run on ______
The non-critical CSS is required when the site starts to render
There is an initial view (which is critical) and below-the-fold content
CRP on mobile is bigger than on a desktop
The “Critical” tool on GitHub helps to build CSS for CRP optimisation
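One common pattern for CRP optimisation is to inline the critical above-the-fold rules and load the rest of the CSS in a non-blocking way; a minimal sketch (file name hypothetical):

<head>
  <style>
    /* critical, above-the-fold rules inlined here */
  </style>
  <link rel="stylesheet" href="/css/site.css" media="print" onload="this.media='all'">
</head>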
Invalid mark-up still works, so there’s no need to control it
Even if GSC says that your mark-up is not valid, Google will still consider it
Changes in HTML can break the mark-up, so monitoring is needed
Using AMP is the only way to get into the Google News carousel/box
AMP implementation is easy; there is no need to rewrite HTML or build new CSS
CSS files do not need to be inlined as non-blocking, unlike on a regular version
A regular website can never be as fast as an AMP version
rel=amphtml HTML tags
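In practice, the regular and AMP versions reference each other with a pair of link elements; a sketch (URLs hypothetical):

<!-- on the regular page -->
<link rel="amphtml" href="http://example.com/page/amp/">
<!-- on the AMP page -->
<link rel="canonical" href="http://example.com/page/">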
Which type of mobile website version should you use to check if the “User-Agent” HTTP header variable is included to identify and provide the relevant web version to the right user agent?
Responsive web design
Independent/standalone mobile site
Anchor text, a-tag with href-attribute
Nofollow attribute, anchor text
a-tag with href-attribute, noindex attribute
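The first option above describes the standard crawlable link: an a-tag with an href attribute wrapping descriptive anchor text, e.g. (URL hypothetical):

<a href="http://example.com/services/">descriptive anchor text</a>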
The number of links pointing at a certain page
The value a hyperlink passes to a particular webpage
Optimized website link hierarchy
Multiple links to a single URL
Meta robots nofollow
Interlink relevant content with each other
Internal, link-level rel-nofollow
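For reference, the two nofollow mechanisms above differ in scope: the meta tag applies to every link on the page, while the rel attribute applies to a single link. A sketch (URL hypothetical):

<!-- page-level: all links on this page -->
<meta name="robots" content="nofollow">
<!-- link-level: this link only -->
<a href="http://example.com/login/" rel="nofollow">Log in</a>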
XML sitemaps must only contain URLs that give an HTTP 200 response
It is recommended to use gzip compression and UTF-8 encoding
There can be only one XML sitemap per website
XML sitemaps should usually be used when a website is very extensive
It is recommended to have URLs that return non-200 status codes within XML sitemaps
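A minimal valid XML sitemap reflecting the points above (UTF-8 encoding, only 200-status URLs; the URL is hypothetical):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://example.com/</loc>
  </url>
</urlset>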
A well-defined hierarchy of the pages
It can be downloaded to your local computer
It can’t audit desktop and mobile versions of a website separately
It provides you with a list of issues along with ways of fixing them
It allows you to include or exclude certain parts of a website from audit
Reverse DNS lookup
User Agent Overrider
User Agent Switcher
How often does combining a robots.txt disallow with a robots.txt noindex statement make folders or URLs appear in SERPs?
Less often than ones without noindex
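The combination the question refers to looked like this in robots.txt (Noindex was never an official directive, and Google announced in 2019 that it would stop honouring it; paths hypothetical):

User-agent: *
Disallow: /private/
Noindex: /private/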
It should point to URLs that serve HTTP 200 status codes
It is useful to create canonical tag chaining
Each URL can have several rel-canonical directives
Pages linked by a canonical tag should have identical or at least very similar content
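A canonical tag is a single link element in the head of the duplicate page pointing at the preferred URL, which per the first point above should return HTTP 200; a sketch (URL hypothetical):

<link rel="canonical" href="http://example.com/preferred-page/"/>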
Google prefers them over other pages because they are dynamically generated and thus very fresh.
They do not pass any link juice to other pages
Those pages are dynamic and thus can create bad UX for the searcher
PRG (Post-Redirect-Get pattern) is a great way to make Google crawl all the multiple URLs created on pages with many categories and subcategories.
It is important to have all sub-pages of a category being indexed
Proper pagination is required for the overall good performance of a domain in search results
rel=next and rel=prev attributes explain to Google which page in the chain comes next or came before it
Pagination is extremely important in e-commerce and editorial websites
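As described above, the attributes are declared in the head of each page in the series; a sketch for page 2 of a paginated category (URLs hypothetical):

<link rel="prev" href="http://example.com/category?page=1">
<link rel="next" href="http://example.com/category?page=3">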
You have two versions of the same content in HTML (on the website and in PDF). What is the best solution for bringing a user to the site with the full navigation instead of just downloading a PDF file?
Using the X-Robots-Tag and the noindex attribute
Introducing hreflang using X-Robots-Tag headers
Using the X-Robots-Tag rel=canonical header
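A PDF has no HTML head, so such directives travel as HTTP response headers. A sketch of the rel=canonical variant, sent via the Link header on the PDF response so the HTML page is treated as the primary version (URLs hypothetical):

HTTP/1.1 200 OK
Content-Type: application/pdf
Link: <http://example.com/page.html>; rel="canonical"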
The rankings will be fully transferred to the new URL
Link equity will be passed to the new URL
To avoid losing important positions without any replacement
The new URL won’t have any redirect chains
When there is another page to replace the deleted URL
If the page can be restored in the near future
When the page existed and then was intentionally removed, and will never be back
When you want to delete the page from the index as quickly as possible and are sure it won’t ever be back
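The last two options describe the case for a 410 rather than a 404; assuming that reading, the server's answer for a permanently removed page is simply (a sketch):

HTTP/1.1 410 Gone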
A good approach is to create internal competition: the more links to different URLs have the same anchor text, the easier it is for Google to differentiate which of them is the one URL on your domain to be ranked for the given keyword.