Complete SEO guide for beginners

SEO. Three letters that can change your life and the life of your company. Why? SEO opens your business to the millions of people who use search engines every day, helping it progress and grow.

If you are asking what SEO is, it stands for "Search Engine Optimization".

Search engine optimisation is the key to online success. With SEO, you can optimise your website, which means a higher ranking in search results, more traffic to your site, and so potentially more customers.

Whether you want to change your career and start making a living online, sell your products or services online, or attract more clients to your brick-and-mortar store, SEO is one of the most valuable and effective digital marketing tools.

You may also come across incorrect usage of this abbreviation, such as "SEO optimisation", which does not really make sense ("search engine optimisation optimisation"). Website optimisation or search optimisation are the correct terms for this area of marketing.

The most common and most crucial search engine today is Google, which is why it is essential to optimise your website for Google. However, you can also find guides and tools for optimising for local search engines commonly used in some countries.

Put briefly, SEO is the process of improving your website's visibility when people look for products or services associated with your business via search engines.

And the more visible your site is in the search results, the more likely you are to attract the attention of potential customers.

Search engine optimisation is a long-term process; it requires careful preparation, because the results of your work and effort appear slowly and gradually. For example, if you define the wrong keywords in a PPC campaign (Google AdWords, Etarget, etc.), you will discover the error early, so you have a chance to fix it.

However, if something similar happens in SEO, you will only find out after a few weeks for new pages, or even after a few months.

Important note: to optimise your site, arm yourself with patience. Even if you hire a professional SEO company, you can’t expect results in a few days.

One of the reasons SEO frustrates people is that it changes continuously. SEO is a never-ending battle to attract as many eyes as possible to your website and to convince Google that your site is worth the attention of search engines.

Who is this SEO guide for?

This complete SEO guide is designed especially for newcomers to the SEO area.

We assume that you have no previous experience with search engine optimisation or marketing in this area, so here you can learn everything you need in a simple way, without technical terms or professional jargon, which can be confusing for beginners.

What can you expect from this SEO guide?

This guide is an introduction to SEO and only touches on its foundations. It is a guide for beginners that will teach you the basic SEO concepts and lay the groundwork for moving on to more advanced SEO techniques.

How does the search work?

Have you ever wondered how many times per day you use Google or other search tools to find something?

Is it five times, ten times or sometimes even more? Did you know that Google alone handles more than 2 trillion searches annually?

These numbers are enormous. Search tools have become our “best friends”. We use them as an educational tool, a shopping tool, in entertainment and leisure time, and for business and growth.

It is no exaggeration to claim that we have reached a point where we depend on search engines for (almost) everything we do. And the reason this is happening is straightforward: we know that search tools, especially Google, have answers to all our questions.

But what happens when you write a question and click “find”? How do the search tools actually work, and how do they make decisions on what to show in the search results and in what order?

How do search tools work?

A search engine is a complex piece of computer software.

Before allowing you to enter a query and find something on the web, the software does a full range of preparatory work, so that when you click "search", you receive a set of accurate, high-quality results that answer your question or query.

So what does this "preparatory work" include? Three main phases: the first is the process of discovering information, the second is the organisation of that information, and the third is evaluation.

They are known worldwide as:

  1. Crawling
  2. Indexing 
  3. Ranking

Step 1: Crawling

Search engines contain many computer programs called “web crawlers” responsible for searching for information that is publicly available on the Internet.

To simplify this complex process, you only need to know that the task of these crawlers is to browse the Internet and find the web servers where websites are hosted.

They create lists of all the web servers to crawl and the number of websites hosted by each server, and then start working.

The software visits each site and tries to find out how many pages it has and whether the content is text, pictures, videos or other formats (CSS, HTML, JavaScript, etc.).

When visiting a site, in addition to counting the pages hosted on a server, the crawler also follows any links (whether leading to other pages on the site or to external websites), and so discovers more and more pages.

Crawlers do this continuously and also track the changes made to websites, so they know when new pages are added or removed, when links are updated, and so on.

Consider that there are more than 130 trillion individual pages on the Internet today, and thousands of new pages are created daily; you can imagine how much work this is.

Why should you even be interested in the search process?

Your first concern when optimising your website for search engines is to ensure crawlers have proper access to it. If they can't "read" your website, you can't expect great results in terms of rankings or traffic.

As explained above, crawlers have a lot of work, and you should try to make their job easier.

Step 2: Indexing

Of course, browsing and crawling a page is not enough to produce a search result.

The information identified by crawlers has to be organised, sorted and stored so that search engine algorithms can process it before delivering it to the end user. This process is called indexing.

Search engines do not store all the information found on a page in their index. Instead, they keep things like: when the page was created/updated, its name and description, the content type, related keywords, internal and external links, and many other parameters. Google likes to describe its index as the index at the back of a book (a reeeeally fat book).

Simply put, indexing is downloading the content of your website to the search engine database.

Indexing works constantly. Indexing itself ensures that the search engine knows about you and finds out what is on your website. However, this alone will not win you first place in the search results.

Why be interested in the indexing process?

It is simple: if your site is not in the index, it will not appear in any search results.

This also means that the more pages you have in the search engine's index, the greater the chance they will appear in the search results when someone enters a query.

Note that we used the term "appear in the search results", and we mean at any position, not necessarily in the top positions or on the first pages.

Step 3: Ranking

The third and last step in this process is for search engines to decide which pages to show in the SERP, and in what order, when someone enters a query. The whole process is based on the search engines' ranking algorithms.

Simply put, these algorithms are pieces of software with a number of rules that analyse what the user is looking for and decide what information to return. These rules and results are based on the information available in the index.

How do search engine algorithms work?

Over the years, ranking algorithms have evolved and become genuinely complex. In the beginning (around 2001), it was as simple as matching a user's query to the page's headline/title/name, but this is no longer the case. Google's ranking algorithm takes into account more than 255 rules, and no one knows exactly what all of them are.

That includes Google's founders Larry Page and Sergey Brin, who created the original algorithm.

Things have changed dramatically over the past few years. Nowadays, thanks to the constant development of machine learning, computer programs have become the most essential part of the decision-making process, which is based on a large number of parameters that go beyond the content of the website.

To make this easier to understand, here is a simplified overview of how search engine ranking factors work:

Analysing the user’s query

The first step for the search engine is to analyse what kind of information the user is looking for.

Search engines therefore analyse the user's query (search terms) by breaking it down into several meaningful keywords. A keyword is a word that has a specific meaning and purpose.

For example, when you enter “how to make pizza”, search engines, thanks to the words “how to”, will know that you are looking for instructions on how to prepare a pizza, and therefore the results you will receive will include cooking sites with recipes.

If you search for "buy original", search engines will know from the words "buy" and "original" that you want to buy something, and the returned results will include e-shop websites and online stores. Machine learning, which drives the development of this mechanism, helps them combine related keywords.

The search mechanism is also clever enough to recognise spelling errors, understand plurals and generally extract the meaning of a query from natural language (whether written or spoken, in the case of voice search).

Finding relevant sites

The second step is to look into the search engine index and decide which pages will give the best answers for the query. This is a crucial phase throughout the process for both search engines and web owners.

Search engines have to return the best possible results as quickly as possible, while web owners want their websites to get as much traffic as possible. This is also the phase in which good SEO techniques can influence the decisions of the algorithms.

To give you a better idea of how relevance ranking and selection work, here are the most important factors:

  • Title and content relevance – how relevant the title and content are to the user’s query.
  • Content type – if the user is looking for images, the returned results will include images, not text.
  • Content quality – content should be useful, informative and unbiased.
  • Web page quality – the website’s overall quality is also very important. Google will not display pages that do not meet its quality standards.
  • Date of publication – for news-related queries, Google wants to display the latest results, so the publication date is also considered.
  • Popularity of the site – this does not depend on the website’s traffic but on how other websites perceive the page. A site with many references (backlinks) from other websites is considered more popular than pages without links, so it has a better chance of being chosen by the algorithms. This area is also known as off-page SEO.
  • Page language – websites are displayed primarily in the user’s language.
  • Website speed – websites that load quickly (2-3 seconds) have a slight advantage over websites that load more slowly.
  • Type of device – users searching on mobile devices are shown pages suitable for mobile devices.
  • Location – users searching for results in their area will mainly receive results related to their location.

And that’s just a tiny portion of everything – the tip of the iceberg. As already mentioned, Google uses more than 255 assessment factors in its algorithms to ensure that its users are satisfied with the results they receive.

Why be interested in how search engine ratings work?

To obtain traffic from search engines, your website must appear in the top positions on the first page of results.

It is statistically proven that most users click on one of the top 5 results (on both desktop and mobile).

A position on the second or third page of the results will bring you hardly any traffic.

Key knowledge

One of the goals of search engine optimisation is to give Google the right signals so that your page is "picked up" during the search algorithm process and displayed high in the search results.

A closer look at SEO

Search engine optimisation, or SEO, is a process with specific rules for optimising your website, and generally for making it easier for search engines to index your website and understand your content better.

Remember that crawlers and algorithms are not people but computer programs and cannot “read or see” a website as real users.

Crawlers read the website's HTML code and extract the information they need, and subsequently they add it to the index.

Then it is the task of the algorithms to decide which website to place above when searching for a given query.

Together, these SEO components make up the more than 255 ranking factors that Google's evaluation algorithm takes into account when assessing the quality of a website.

For easier understanding and optimisation in general, various SEO factors can be grouped into three main processes.

Each of these sub-processes deals with a number of rules that ultimately create a fully optimised website. So when applied as a whole, they can secure your website a high Google rating.

Here is a quick summary of what each process involves. Do not worry if something is confusing at the moment; we will get to the detailed explanations later.

Technical SEO

Technical SEO applies to the process of optimising your website for the crawling and indexing phase. With technical SEO, you make it easy for search engines to access, crawl, interpret and index your webpage.

We call it “technical” because it has nothing to do with the actual content of the website or with the promotion of websites. Its main goal is to optimise the structure of the website.

On-Page SEO

With on-page SEO, your goal is to understand and speak the "search engine language" and help crawlers understand the meaning and context of your site. This is the phase in which you start to deal with keywords and content, which means working on optimising your site for specific keywords.

Off-Page SEO

Off-page SEO concerns the methods and techniques of promoting a website that go beyond its design and content.

This is an essential, often decisive, part of the SEO process, but you should only deal with it after completing the previous two processes (technical and on-page SEO).

Note: When you approach the SEO field, you can come across other terms such as local SEO, mobile SEO, e-commerce SEO and content SEO. These are subgroups of search engine optimisation that are specific to certain types of websites.

For example, local SEO is more suitable for businesses that exist physically at a particular place and want to bring more customers to their shop/office, while e-commerce SEO is mainly for online stores.

Key knowledge

SEO will help you get more visibility in search engines. To simplify SEO management, the whole process is divided into three main sub-processes, which should be tackled in the order above: technical SEO, on-page SEO and off-page SEO.

The basic SEO principles are the same for all kinds of websites. Then depending on their type (blog, company website, e-commerce website, etc.), you can use specific rules to improve the evaluation in the search engine further.

Why SEO

In this section, you will learn about the benefits of SEO and why it is so vital to the success of an online business.

Thanks to SEO, your site can reach higher positions and a better rating in Google, giving you more website traffic. Traffic is the most essential element in determining the success of your website.

Without traffic, you will not get conversions, sales, subscribers, or the attention your website deserves. 

Is it possible to drive targeted traffic to your website with SEO? Yes. You probably know there are other sources that can bring you traffic, such as social media (Facebook or Instagram), but few sources are as effective as organic search.

Users who type a query into Google search have a specific intention, while users on Facebook usually view entertainment channels, look for distractions or catch up with their friends.

There is, therefore, a massive difference in conversion. When you sell something online, it is more probable that sales will occur with traffic from Google than Facebook. A Google visitor has some specific need and is looking for a solution, while a Facebook visitor can visit your site because of advertising or curiosity.

Note: The conversion is when users make the action you want. For example, it can be a subscription to the newsletter, buying a product from your store, visiting your e-commerce store, submitting the contact form, etc.

“Free” traffic 24/7

If your SEO is not working, you will probably have to pay for traffic to your site. This is not necessarily a bad solution; you can run a profitable paid campaign on Google or Facebook.

The problem is that as soon as you stop paying for these ads, your traffic will drop, and you will be back to zero.

With SEO, however, it is an entirely different story. Once you achieve a high ranking in Google and take all the necessary steps to maintain it, the traffic will come 24/7.

Imagine the benefits that this kind of strategy can bring to your business. Users visit your website, and you sell or acquire potential customers while sleeping.

Well, to avoid confusion and to be completely accurate, the word "free" is not absolutely precise. Before you reach high positions in Google and successfully rank for your selected keywords, you will have to do a lot of work, which, of course, has its price.

However, this does not change the fact that once you reach this phase, everything else becomes much easier, and your costs will gradually decline.

SEO allows you to develop your business

As for SEO, there are no guarantees. Google is constantly making changes to its evaluation algorithm, and every time this happens, some websites may gain or lose rankings.

However, if you do not break any rules with your SEO strategy, work consistently and have a clean history, you can predict your expected traffic levels. In practice, this means you can plan the growth of your business and turn that plan into reality.

For example, if you sell online services and know that you have had an average of 10 customers per month (from organic traffic) over the past year, you can assume that this number will be similar next year. Based on this, you can take the following steps accordingly.

Note: Organic traffic is a term that describes non-paid traffic from Google (and other search tools). It is, therefore, different from paid traffic, where you pay for a click when someone clicks on Google or Facebook ad and visits your site.

How does optimisation work?

Search tools like Google or Bing use robots to browse a list of pages on the web. Robots collect information about the site and put it in the index.

Imagine an index as a vast library in which a librarian can pick up a book that will help you find exactly what you are searching for at that time. Then the algorithms analyse the page in the index, taking into account hundreds of factors to determine the order in which the site appears in the search results. In our analogy, the librarian has read each book in the library and is thus able to say which book exactly (website) will contain answers to your questions.

The search algorithms are programmed to show relevant, authoritative sites and provide users with useful results. Optimising your site and content with these factors in mind can help your pages appear higher in the search results.

If we had to choose the most important things to explain how SEO works and what you should understand, it would be four things:

SEO keyword research – understand what people are searching for

Content relevance – give the search engine the possibility to recognise what is on your site

Increasing the authority of the site – use different methods to ensure that pages are genuinely reliable and useful

Technical Optimization – improve the quality and speed of the page

“Successful SEO does not mean deceiving Google. It is about a partnership with Google to provide the best search results for Google’s users.” – Phil Frost

When to do SEO yourself and when to prefer a professional SEO company

Simply put: it depends on the industry you are in, your knowledge of the Internet and marketing, and partly also on your technical skills. If your business area is competitive, meaning there are many sites that are visibly optimised (properly structured, with SEO keywords appearing in the texts and subheadings of pages, and usually with "nice" URLs), your job will be a little more complicated.

If you already know something about marketing, we recommend diving in and studying the most up-to-date available sources on SEO. At some point, you may find that you simply can't go any further alone, and that is when you can start looking at professional SEO services.

By contrast, if your company is not in an area with great competition, the information from literature or the Internet will usually be enough.

What is SERP

SERP is an abbreviation you will sometimes encounter when browsing and optimising for search engines. The acronym stands for "Search Engine Results Page". The task of SEO is to get the best positions in the SERP.

The aim is to get the most relevant visitors, visitors who are interested in information/products/services offered on your site.

What is the most essential SEO factor?

Do the backlinks matter? Yes.

Do you need to have links for better ranking? Probably, but not necessarily.

Is the length of the content significant? Yes, but an annoyingly long post does not overtake a short, fantastic post.

How long does it take to show the effects of SEO?

The answer to this question is not clear. Only Google knows precisely how its algorithm works. SEO can manifest the effects after weeks or months, depending on your SEO strategy. If one of your competitors does something just a little better, it can easily knock you off the top of the SERP.

What a search result consists of

The search result usually consists of the following parts:

  • Title
  • Page preview
  • Excerpt from the text (Snippet)
  • URL address
  • Additional information – region, more, archive pages, version of HTML etc.

Meta tags

Let's focus on meta tags, an essential part of SEO and an introduction to basic HTML: what meta tags are for, and why it is important to write them correctly so that search engines can read them.

Meta tags are portions of HTML that provide information about a website. Why are we dealing with them? Knowing what they are and, above all, how they should be written is not a topic that concerns only a web admin. It is crucial that anyone who owns a site knows how it 'speaks' to search engines and, in particular, to Google.

Meta tag: definition and function

The strictest definition of a meta tag is this: "Meta tags are HTML tags that describe the contents of a page to search engines and website visitors". In practice, they are information, called metadata, that is not displayed on the front end of the site but is found in its structure and allows search engines to classify its content. In other words, meta tags are essential elements in the HTML code of a website that indicate to the search engine what that site is about.

How does this mechanism work in practice? Take Google, for example, a giant in the online search sector and, in practice, almost a monopoly operator. The name Googlebot is used generically for Google's web crawler or spider: the bot that periodically analyses the World Wide Web and individual sites in order to build the index.

Meta title 

The meta title of the page is written in the source code between <title> and </title>. It is also sometimes referred to as the "title tag", "page title" or simply "title". It is the first element that can impress a visitor and influence them to click on your search listing. Google's search robots pay the most attention to this tag when ranking pages. It can even be the same as your H1 (headline) tag.

As for titles, you must keep in mind a few things:

  • Limit of the number of characters/pixels – some time ago, the rule was to have titles up to 65 characters long. This limit is now defined in pixels – 600px. You just don’t want to fill your headlines and titles with keywords, but you want them to be attractive and interesting to encourage users to click on them.
  • Uniqueness – your titles should be unique for each website page.

Make sure your headlines are not identical across several pages. This is often a problem with some CMS systems (e.g. WordPress) that duplicate many of your headlines on many different pages.

The best page title format is one that best describes what visitors will find on the site. It makes no sense to use a keyword in the page's title, however interesting and attractive, if your site does not contain anything related to it.

Do not try to stuff in all the SEO keywords you can remember at any cost; it is not beneficial. Ideally, the title of the site should have only as many words and characters as can be displayed in the Google search results (meaning the sentence is not cut off at any point).
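Putting these rules together, a title in a page's HTML source might look like the following sketch (the product and shop names are made up purely for illustration):

```html
<head>
  <!-- Unique, descriptive title that fits within the ~600px display limit,
       so it is shown in full and not cut off in the search results -->
  <title>Handmade leather wallets | Example Shop</title>
</head>
```

Each page of the site would get its own variation of this title, rather than repeating the same one everywhere.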

Meta Description

The challenge with the meta description is to summarise the page briefly but effectively: what it contains, what it is for and why it is worth reading. Your listing in the SERP has to attract the user's attention and transform them, with a click, into a visitor to your site. Search engines generally show the meta description in the search results under the title tag.

How do you write a good meta description? Here are some suggestions:

Keep the visible length of the description to a maximum of 155 characters / 920 pixels, which is reduced to about 130 characters in mobile search results.

Add a call to action (CTA) if the page's purpose calls for it, such as: Find out more, Contact us, Subscribe now, etc.

Add the targeted keywords.
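Following these suggestions, a meta description placed in the page's head might look like this sketch (the wording, keyword and shop name are only illustrative):

```html
<head>
  <title>Handmade leather wallets | Example Shop</title>
  <!-- Under 155 characters, contains the targeted keyword
       ("handmade leather wallets") and ends with a call to action -->
  <meta name="description"
        content="Browse our handmade leather wallets, crafted from full-grain leather and built to last. Free shipping on every order – shop now.">
</head>
```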

What are the other primary measures?

Without going into the technicalities of the HTML language, we want to dwell on three other meta tags that must be considered with particular regard for the functions they perform.

  • Alternative text tag (Alt text): It is used to make the image or images loaded on a site readable by the search engine. Google has no eyes (no physical ones, at least!), and it can understand what a photograph or a drawing represents thanks to this tag. That is why it is so important to know how to use the alt text: a page can even be indexed through an image. What should you do to optimise an image for SEO? Remember to name the image; the alt text must be clear and explanatory; use about 55 characters at most for the description in the tag field.
  • Meta tag robots: The robots tag is a category of HTML tag placed in the <head> section of your web page that tells search engines whether and how they should crawl the pages of a website. In particular, this meta tag gives search engine crawlers instructions about indexing, or not indexing, a web page. Inserting incorrect attributes into the robots meta tag can have a negative effect on your SEO and, therefore, on your site's presence in the search results. There are four values that can be assigned to the tag. They are:

1) “Index” tells robots to index the page;

2) “Noindex” tells robots not to index the page;

3) “Follow” tells robots to follow the links on the page;

4) “Nofollow” tells robots not to follow the links on the page and that no endorsement is implied.

  • Meta tag keywords: We have left this meta tag for last. It might seem that a tag full of keywords would be good 'makeup' for attracting the attention of search engines. In reality, it is not: Google does not use the keywords meta tag either for indexing or for ranking. Why? Because it became a potentially deceptive tool in the hands of expert webmasters: so-called keyword stuffing, the abuse of often irrelevant keywords.
    For this reason, Google has ignored this meta tag since 2009, while Bing even considers it a potential 'spam' signal. Although its relevance for SEO is therefore zero, knowing how to use the keywords meta tag can still be helpful, because it offers site managers the opportunity to define a set of keywords for potential research. To do this effectively, the choice of keywords must be thoughtful: yes to relevant keywords, no to excessive and redundant use of them.

What does the ideal SEO title format look like?

Page headlines should not be repeated on the website. For example, if you write a news site, the ideal title will be the article’s name or a description of what the text is about. On the e-shop site, it will be the name of the product or type of goods.

The page headline is also significant when it comes to arousing interest in your search page. It is displayed as the headline in the search engine results.

Although the SEO title is connected to the page's headline, it should not be identical. The exception is a page whose name or brand is so well known that people search for the name rather than the page content. An example is Coca-Cola: here it is logical that the drink's name will appear in both the page headline and the title, because customers are looking for the brand.

For a site where most visitors are looking for specific products, services or information (e-shops, news sites, travel agencies, etc.), it is more important to have words defining these things in the page title. Long-term statistics show, for example, that the e-shop name in the title matters. A suitable title that contains the brand is, for example, the following:

<title>Canon 400D digital camera cheap from amazon.com</title>

As you already know, the page title should not be too long; for this reason, a shorter shop or page name is better. If you have not yet selected a domain and name for your project, keep this in mind. Search engines also understand that if two words are close to each other, ideally right next to each other, they can be treated as a phrase.

Keywords

What is a keyword, and what are keywords for in SEO

Back to SEO basics: now we take a small step back and focus on a primary theme of online activity and SEO work. To be precise, the topic we will tackle is what a keyword is, trying not only to understand the meaning of the term but also the importance of keywords within an SEO strategy. We will also discuss the types and varieties of keywords that can be targeted and used to obtain positioning on Google, conversions and concrete results.

What is a keyword

A keyword is simply a word or, more specifically, one or more terms associated with a particular concept or need, which answer (not necessarily word for word) a question that users ask through a search engine.

Keyword meaning for search engines

More specifically, a keyword is any term the user inserts in the Google box or another search engine (where it is precisely called a query). It becomes the interpretative key for performing a search that produces a page of results in which websites are listed (among the indexed ones, i.e. those already present in the system’s memory) in an order that reflects their relevance to the query, as perceived by the criteria of the specific search algorithm.

We can consider every query searched in a search engine a keyword, whether it is made up of a single word or a complex phrase.

What are the SEO keywords for

Keywords are therefore the first signals search engines read when scanning a site and its pages, as well as the tool with which Google indexes and positions content based on its SEO ranking factors. Keywords therefore play a fundamental role in the SEO of a site because they help intercept traffic from organic search and clicks from users interested in information relating to their query.

The relationship between keywords and queries in search engines

In the SEO field, there are various theories about keywords and the importance of finding the exact match – the perfect correspondence between the user’s query and the keyword contained in the text. In the past, this often led to forced creation of SEO content, inserting exact strings of terms into an article in an ungrammatical way in order to intercept the keyword.

Today this aspect is still strongly relevant, but thanks to the evolution of Google’s algorithm, it is possible to obtain good positions with correlated keywords, synonyms and terms semantically related to the search, as long as they respond to the search intent of the person who launches the query.

In other words, until a few years ago, SEO keyword research – the process of finding the best keywords to use within content and sites to obtain visibility on Google – was based on the analysis of quantitative parameters such as search volume, CPC, difficulty, etc. Today we prefer a qualitative approach based on identifying the best context and on understanding what people really want and expect to find within a web page.

All keywords, a guide to discovering the types of keywords

Coming to the practical aspects, there are several kinds of keywords that search engines can intercept in content.

The best known is the specific one, called the focus keyword, consisting of a single term with a high search volume and an equally high level of competition. It is a very specific keyword which, on its own, is not enough to produce the desired effects on the performance of the site.

Know the SEO keyword: the vanity keyword

The so-called vanity keyword is also part of this first group: it is an SEO keyword, generally composed of one or at most two terms, which has a very high average monthly search volume but is attractive only on the surface.

These keywords generally describe the site topic or the products sold in an e-commerce store, attracting poorly qualified traffic and thus generating a low conversion rate. Positioning the site for a vanity keyword does not offer concrete benefits in conversions, sales or growth in readership; it only serves to satisfy the vanity of the site with large numbers that do not translate into great economic results. Indeed, there is a risk that ranking for a vanity keyword will increase the bounce rate, because users may immediately abandon the page without interacting if they consider it uninteresting and poorly centred on their search intent.

Medium-tail keywords and long-tail keywords

In an SEO strategy for keyword optimisation, therefore, it is advisable to focus on longer strings of keywords. This way, the competition is lower and the page topics are better defined. Long-tail keywords intercept the search intent of users and their purchase intentions in a more specific way.

They usually consist of three or four terms.

Keywords on a page: main, secondary and related

When optimising the performance of a page and its SEO content, it is important to know and recognise the role of each keyword. The main keyword is the heart of the whole strategy, the primary intent of the article. It must be relevant to the website’s contents and to the page itself.

Secondary keywords derive directly from the main one, which they often accompany with the addition of a single term (before or after). They represent a semantic narrowing of the main keyword, defining a specific aspect or sub-topic of it: secondary keywords are also relevant, but only for a particular detail of the content.

Related keywords are instead extensions of the page contents and do not always contain the primary keyword: they can be synonyms, grammatical variations or other expressions that expand the semantic field of the content and help intercept users with similar needs. In particular, relevant related keywords help go deeper into the contents of the page (vertical expansion), responding to an information need of the user. Less relevant related keywords, in contrast, serve above all to expand the topic into other semantic fields – a valuable horizontal expansion for organic SEO positioning in the SERP.

Which keywords to use for SEO strategies

Going even more specifically, there are at least six large categories of keywords, with various shades of meaning. The first classification system is based on the user’s intent: informational keywords, navigational keywords, commercial keywords (commercial investigation), transactional keywords, local keywords (which define actions to be carried out in a specific geographical area) and branded keywords, which refer precisely to the name of a brand or person.

The classification of keywords based on user intent

Informational keywords are the most generic: they refer to queries that serve to inquire about a specific topic and are generally performed when the user is at the beginning of his search journey. They are keywords with informational value, perfect for use in guides or pillar articles to acquire relevance on a specific topic in the eyes of the search engine.

Informational, navigational and branded keywords

Navigational keywords are generally composed of the main term and an additional word linked to a name or brand: the navigational intent responds to the user’s need to obtain specific information about a product line offered exclusively by that company, or to reach that particular official website.

Branded keywords can be considered a nuance of navigational keywords. They are short keywords that refer directly to the brand (a commercial or personal brand) and often also appear in the name of the official domain, which in fact obtains (almost) always the first positions in the SERP for these specific searches.

The last category we cover is local keywords – the local SEO keywords that define and delimit the geographical scope of content. These are all the geolocated keywords that circumscribe a local need or direct the search toward activities present in a specific territory.

We will talk more about keywords and keyword analysis in our next SEO article.

Search engines in more detail

How long does it take for my site to appear in search results?

I’m pretty sure if I googled the same query today, there would be pages and links to new examples and articles.

Search engines prefer fresh information and frequently prioritise new pages in search results over pages from older sites with a higher rating – usually only for a short time. If the new page does not establish itself as an authority in the eyes of the search engine in the near future, it falls back to lower positions in the results. The same applies to new sites. Google typically indexes the home page of a new website first, provided there are links to it from external sites. It usually ranks high for a few days after the initial indexing, but if the site does not receive quality backlinks within those few days, it is pushed down again, and at first the other pages of the website are rarely indexed at all.

How do I alert search engines to my site?

You are not required to register with any (full-text) search engines. If someone approaches you and promises to “register your site in the search engines,” he either doesn’t know what he’s talking about or is a trickster attempting to con you. If there are links to your website from pages that the search engine already knows about, it will eventually find you. However, if you believe indexing is taking too long, you can send information about the website to search engines using special forms.

How to automatically notify search engines?

The Google search engine supports automatic notification of new content. Google Search Console allows web admins to notify the search engine about the existence of the sitemap file, sitemap.xml. It is a file in which the search engine finds links to all of the website’s pages in a clearly defined, structured format. We’ll go over its structure and individual parameters shortly.

What is a sitemap, and why is it important to me?

When we refer to the sitemap file, we mean the sitemap.xml file stored in the website’s root directory (root). Search engines will most likely look for it there, and if they find it, it will only benefit you. Sitemap.xml typically contains a list of links to all of your pages. The search engine appreciates that it doesn’t have to work through your site’s navigation and links but can simply crawl and index all the pages whose addresses it finds in the sitemap.xml file.

How do I notify search engines of the existence of a sitemap?

Assume you have a publishing program that produces a sitemap.xml file whenever a new page is added. In this case, you can notify the search engine about the file by sending a simple HTTP request.
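As a sketch, such a notification could be a request to Google’s sitemap “ping” endpoint. Note that Google has since deprecated sitemap pings in favour of submitting sitemaps through Search Console, so treat the URL below as illustrative rather than current practice:

```python
from urllib.parse import urlencode

def build_ping_url(sitemap_url: str) -> str:
    """Build the (historical) Google sitemap ping URL for a given sitemap address."""
    # urlencode takes care of escaping the sitemap URL into the query string
    return "https://www.google.com/ping?" + urlencode({"sitemap": sitemap_url})

ping = build_ping_url("http://www.yoursite.com/sitemap.xml")
print(ping)
```

Fetching the resulting URL (e.g. with `urllib.request.urlopen`) would perform the actual notification.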

How to Submit New Pages Automatically in WordPress?

You have an advantage if your SEO blog is built on the WordPress platform. This blogging program includes support for the Ping-o-Matic service, an excellent service that lets search engines learn about your new article in a matter of seconds. Simply log in to your WordPress administration and open the Settings – Publishing tab.

What does the sitemap.xml file’s sample code look like?

The sitemap.xml file’s source code, which search engines understand, looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.yoursite.com/</loc>
    <lastmod>2009-01-01</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.8</priority>
  </url>
  <url>
    <loc>http://www.yoursite.com/page.html</loc>
    <lastmod>2009-02-03</lastmod>
    <changefreq>weekly</changefreq>
    <priority>0.4</priority>
  </url>
</urlset>

What do the individual parameters in the sitemap.xml file mean?

Parameter – Description

<urlset> (mandatory) – The top-level element that encloses the whole file.
<url> (mandatory) – The parent element for each address; it encloses the loc, lastmod, changefreq and priority elements. When linking to a folder, the address must end with a slash.
<loc> (mandatory) – Contains the URL of the document. Must be shorter than 2,048 characters.
<lastmod> (optional) – The date the linked page was last modified, written as a text value formatted in accordance with the W3C Date and Time Formats.
<changefreq> (optional) – The frequency with which the page changes. The value does not have to correspond with the robot’s actual visits; it is only a suggestion that search engines may or may not follow.
<priority> (optional) – Indicates the relative priority of the page within your site for indexing. The value ranges from 0 to 1.

How do you make a sitemap file?

As previously stated, an XML sitemap is a simple way to notify search engines about your website. It makes no difference how the file is generated, as long as the result is an XML file served with an acceptable content type (application/xml). It can be written in any scripting language. Include the URLs of the pages you want search engines to know about. Search engines state that the pages listed in the XML sitemap will be prioritised during indexing, but other pages will not be ignored.
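As an illustration, a minimal sitemap.xml generator can be sketched with the Python standard library (the URLs, dates and values below are placeholders matching the earlier example):

```python
import xml.etree.ElementTree as ET

def build_sitemap(pages):
    """Build sitemap.xml content from (url, lastmod, changefreq, priority) tuples."""
    urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for url, lastmod, changefreq, priority in pages:
        u = ET.SubElement(urlset, "url")
        ET.SubElement(u, "loc").text = url
        ET.SubElement(u, "lastmod").text = lastmod
        ET.SubElement(u, "changefreq").text = changefreq
        ET.SubElement(u, "priority").text = priority
    # xml_declaration adds the <?xml ...?> header search engines expect
    return ET.tostring(urlset, encoding="unicode", xml_declaration=True)

sitemap = build_sitemap([
    ("http://www.yoursite.com/", "2009-01-01", "monthly", "0.8"),
    ("http://www.yoursite.com/page.html", "2009-02-03", "weekly", "0.4"),
])
print(sitemap)
```

In a real deployment, the resulting string would be written to sitemap.xml in the site’s root directory.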

If you have a static website, use one of the offline or online XML sitemap generators. If your server supports PHP, you can find a script from the workshop of the company RJ Softwares (http://wwwrjsojhvarcs.com), which has published two files for free use – sitemap.php and sitemap init.php.

Simply copy these files to your server and then navigate to http://www.yourpages.cz/sitemap.php, where the script will generate an XML map of your pages. In the source code, you can set directories that should not be indexed or written to the XML map, or set other variables.

What should you do if the pages vanish from the search results?

First, let’s define what it means to vanish from search results. It is not the case that you appear on the first page one day and drop to the second or third the next. That is just the natural movement of results, usually caused by a change in the ranking algorithm or better optimisation by the competition. In the first case, i.e. when results are sorted in a different order, you must first determine which pages and parameters the search engine most likely prefers at the moment. In such cases, incorporating the new information into your optimisation strategy is usually sufficient.

If the latter is the case, all you have to do is sit down at the computer and start researching why the search engines consider the competition more interesting. The following might be among the reasons:

  • The higher number of backlinks
  • Links from higher-ranking websites
  • Better-designed SEO content, more information, more pages, more current data
  • Better site availability and accessibility
  • Age of the domain

What should you do if the site indeed vanishes from the search results?

If your site does not appear in search results at all, for example, by searching for it using the site parameter (site:yoursite.com), or if it performs significantly worse than in the past, it may be a type of penalty from search engines. This could have occurred for a number of reasons. For example, the search engine may have discovered:

  • There is far too much duplicate content on your site when compared to sites of higher authority (they are older, have more backlinks, etc.).
  • Your pages are frequently down, load slowly, or do not load at all.
  • You engage in practices that are contrary to the ethical concept of optimisation (for example, you hide text or offer different information to search engines than to visitors).
  • You include links to pages with illegal content, warez, porn, gambling, etc.
  • Your website contains viruses or other potentially harmful elements, which could have occurred without your knowledge, for example, by carelessly saving the password from your FTP connection in one of the programs, allowing it to be intercepted by an outside attacker.

But don’t panic. Nothing is lost; once the causes have been eliminated, you can request that the search engine re-evaluate the site. To make a request, open the following form: https://www.google.com/webmasters/tools/reconsideration

How can I tell if a search engine has already visited my website?

There are several ways. You can view the statistics on your server, which are generated from its logs. A program such as AWStats can be useful in this regard.

Statistics like this should be available from any reputable web host. From such statistics, you can deduce which robots visited the site and when, as well as how many pages they visited based on the amount of information they took away.

If you have a new site or do not have access to the server’s logs, try adding the PHP script to the source code.

The script is designed to track Google, but it is simple to adapt to other search engines. It detects the arrival of the robot and sends you e-mail updates about the visit. Because the information is sent by e-mail, if the pages are visited several times per day and the robot goes through hundreds of pages, it may flood your mailbox.
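The PHP script itself is not reproduced here, but the underlying idea – recognise a search robot by its User-Agent string and report the visit – can be sketched in Python. The User-Agent substrings are common real-world values, and `notify` is a hypothetical stand-in for sending the e-mail:

```python
def is_search_bot(user_agent: str) -> bool:
    """Very rough robot detection based on User-Agent substrings."""
    bots = ("Googlebot", "bingbot", "SeznamBot")
    return any(bot.lower() in user_agent.lower() for bot in bots)

def handle_visit(user_agent: str, path: str, notify=print):
    # In a real deployment, notify() would send an e-mail or append to a log file
    if is_search_bot(user_agent):
        notify(f"Robot visit: {user_agent} requested {path}")

handle_visit(
    "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
    "/index.html",
)
```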

How can I find out how many of my pages are known by the search engine?

Google Search Console allows you to check the indexing status of your pages. If you don’t have a Google account yet, now is the time to get one.

How to Create a Google Site Admin Account?

Google Webmaster Tools – now known as Google Search Console – has already been mentioned several times in the previous lines.

If you already use GSC, you can skip this tip. If not, then let’s take it one step at a time; I’ll assist you in creating an account, registering the first pages, and verifying them. Then you can begin fully utilising Google Search Console for free.                                                                                

Step-by-step instructions SEO guide:

Sign in to your Google account first. If you already use a Google service such as Gmail, Ads, AdSense, Analytics or another Google tool, you already have one. If not, simply go through the simple registration process at https://www.google.com/accounts/NewAccount.

Go to https://search.google.com/search-console if you are already logged in or if you have just created a new Google account.

After logging in, the application’s home page will appear.

Before Google displays detailed statistics and information about the pages, you must demonstrate that you are the true owner of the pages. You have two options here. The first is to insert a verification string, a meta tag, into the page header. It looks as follows:

<meta name="google-site-verification" content="DvciXaEvT2TmRzkoe6Qh00r?a-F8ZBM3ehkW4LhdV87o" />

Alternatively, you can download an empty HTML verification file named, for example, googlefc8688c95dfl06cc.html, and upload it to your site’s root directory. Then simply press the Verify button.

You can now begin working fully with individual functions. For example, notify Google about your sitemap.xml file, diagnose pages from a search engine standpoint, and so on.

Domain names in SEO

Domains with multiple words, with or without a hyphen

In terms of SEO, separating the individual words of a multi-word domain with a hyphen is preferable. On the other hand, the absence of hyphens makes a domain more memorable for users.

Why does the hyphenated version perform better in search engines? The search engine regards the hyphen as a word separator. It sees the website address www.newtaste.com and interprets it as one word. But what if we separate the words new and taste with a hyphen? The search engine then recognizes two distinct words. This example shows that the hyphenated variant is better suited for the search engine: you emphasize that these are two distinct words. But what if the hyphenated domain is no longer available? Don’t give up – go with the variant without the hyphen. It is very likely that the search engine will learn to deal with it eventually.

During a multi-word query, the search engine combines the words by removing spaces and determines whether the result exactly corresponds to a domain. If the query matches a domain, search engines may favour the home page of that domain in the results ranking. It is essentially an algorithm designed to improve the resolution of navigational queries.
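The idea can be illustrated with a toy function – a deliberate simplification of what real ranking algorithms do, not Google’s actual code:

```python
def query_matches_domain(query: str, domain: str) -> bool:
    """Check whether a multi-word query, with spaces removed,
    equals the domain's second-level name."""
    name = domain.lower().removeprefix("www.").split(".")[0]
    return query.lower().replace(" ", "") == name

print(query_matches_domain("new taste", "www.newtaste.com"))   # True
print(query_matches_domain("new taste", "www.new-taste.com"))  # False
```

Note that the hyphenated domain does not match the space-collapsed query, which is one reason search engines learned to treat hyphens as word separators.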

Can multi-word domains be detrimental?

Keywords in a domain are generally considered a benefit, though their influence is often overstated and will be discussed in one of the following SEO tips. But now I’d like to discuss a situation in which multi-word domains can actually be detrimental to your business.

This is especially true for domains in which keywords are combined but do not form a general phrase.

  1. It can look spammy

Nowadays, people tend to perceive domain names that combine multiple words with hyphens as spammy. Unfortunately, many people will avoid clicking links to these types of domain names because they are afraid of opening some kind of scam website. This matters greatly for first impressions and for interaction with potential visitors to your website.

  2. While search engines try to keep their exact algorithms hidden, it has generally been suggested that domains that are too long do not rank as well. There may be some correlation there, but there is no indication of a causal link.

Should I use keywords in my domain?

In the past, keywords in the domain carried more meaning and impact. This parameter is currently weaker in the algorithm that controls the ordering of search results. There is a historical justification for it. Search engines once reasonably assumed that if somebody registered the domain www.picasso.com first, the audience would find authoritative information about Pablo Picasso there.

In the case of a search for this artist, search engines would prioritize the page located on such a domain over others. However, the evaluation algorithms had to rethink this over time: such a domain does not automatically mean high-quality content. Today, the first result you get is the biography on Wikipedia. Strong, well-linked servers with high-quality content are prioritized.

Domain speculation is also a factor. Some businesses hold many interesting and intuitive domains on which they run no content, simply waiting for someone to find a domain valuable enough to pay the asking price. Pushkin, a classic of Russian poetry, turned out to be a sad story in terms of domain names: a domain investor registered his name, and the current asking price is €1,999.

Personally, I have no objections to domain speculation, because it is essentially the same as purchasing vacant land or real estate; it is entirely up to the domainer’s ability to earn money from domains. However, ethics come into play here as well. A number of speculators, for example, have purchased the domain names of well-known brands and companies, knowing that buying the domain will usually be cheaper for the company than potential legal disputes. Similar “squatters” can also be found in other areas of business.

The second reason search engines stopped placing so much emphasis on keywords in the domain is that they began to be used indiscriminately. Rather than owning and operating a website under the domain www.looking-for-a-job-in-chicago.com, it is better to come up with a short, memorable domain and promote the brand.

Finally, do you know what the world’s most valuable domains are? Google, YouTube, MySpace, Facebook, and even eBay and Twitter are examples. Despite the fact that none of these words is an intuitive domain, these servers have millions of fans.

What is the relationship between search engines and subdomains?

Subdomains, or third-level domains (subdomain.domain.cz), are one method for inserting the required keyword into the URL and thus giving it more meaning. It can be, for example, a website with a blog section: blog.yoursite.com.

It typically works by displaying content from the directory www.domain.com/keyword on keyword.domain.com, either through PHP redirection or, more elegantly, through commands in the .htaccess file. Make sure robots ignore the directory itself in this case, or there will be unnecessary duplication. That irritates search engines, and you can lose rankings because of it. Suppose you build backlinks for content on your subdomain, but your visitors are more interested in the directory page. The search engine then selects one of the variants as preferred and discards the other, including any existing backlinks.

Subdomain as a linkbuilder assistant

However, there is one instance where a subdomain makes sense from an SEO standpoint. This is the stage at which you register your website in various catalogues. Many of these sites will not accept URLs like www.domain.com/database or www.your-eshop.com/category. However, you will discover that it is more appropriate to link not only to the main page but also to sub-pages – which, in the case of e-commerce, can be the aforementioned categories. So how do you get around it? A subdomain could be the answer.

How to create nice addresses using Htaccess

The .htaccess file has already been mentioned. What is it exactly? It is a file that allows you to change some settings of your hosting server without direct access to it or administrative rights. You can use .htaccess to manage things like redirecting pages and rewriting URL addresses. Now let’s talk about how to create beautiful URLs.

Assume your online store generates an address for you:

http://www.yourshop.com/index.php?p=1234567

And we’d like the addresses to look something like this:

http://www.yourshop.com/product-1234567.html

This is how the entry will look:

RewriteEngine On
RewriteCond %{HTTP_HOST} ^www\.yourshop\.com
RewriteRule ^product-(.*)\.html$ /index.php?p=$1 [L,QSA]

How to create nice addresses in PHP

You may have pages hosted on a server where the .htaccess file is not permitted.

Then you should look into PHP.

The creation of “nice” URLs in PHP depends on the capabilities of the editorial program or e-shop. Many open-source scripts have either free or paid add-ons. Some systems, such as the WordPress blogging program or the WebsiteBaker editorial program, have nice addresses built in; you just need to enable them in the program settings (we’ll get to WordPress in a minute).

If you only have a few pages on your website, you can use the functions include or require, which allow dynamically generated content to be inserted into a static page that you can name however you want.

For larger websites, it is more appropriate to take a more systematic approach, which requires a script that communicates with the database. Typically, such scripts generate the page URL from the title of the article or the name of the product. However, I recommend that you take extra care when creating page titles and headlines. It is common for a product or article name to be unnecessarily long, so it is best to keep open the option of influencing the file name.
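A typical building block of such scripts is a “slugify” function that turns an article or product name into a URL-friendly string. A minimal sketch follows; real systems also handle accented characters (e.g. via transliteration), which is omitted here:

```python
import re

def slugify(title: str, max_len: int = 60) -> str:
    """Turn a title into a lowercase, hyphen-separated URL slug."""
    slug = title.lower()
    # Replace every run of non-alphanumeric characters with a single hyphen
    slug = re.sub(r"[^a-z0-9]+", "-", slug).strip("-")
    # Keep the slug reasonably short, as recommended above
    return slug[:max_len].rstrip("-")

print(slugify("Canon 400D digital camera, cheap!"))  # canon-400d-digital-camera-cheap
```

The length limit reflects the advice above: do not let an unnecessarily long product name dictate the whole URL.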

How to Create Good WordPress Addresses

WordPress is a well-known blogging platform. Thanks to its extensibility and a large community of developers who provide users with a variety of more or less useful SEO plugins (additional functions and extensions) for free, it can be used not only for blogging but also as a solid foundation for low-cost corporate websites, micro-sites and even smaller news sites. Some extensions can transform WordPress into a link catalogue or even a simple e-commerce site.

A WordPress SEO extension is also available for download. The All in One SEO Pack is a well-known SEO plugin that you can get at http://wordpress.org/extend/plugins/all-in-one-seo-pack/. Its popularity is demonstrated by the fact that over 3.5 million users of the blogging system have downloaded it.

After the initial installation of WordPress, the addresses are displayed as follows:

http://www.yourblog.com/?p=22

At the moment, you cannot read anything from the article’s address, except perhaps that it was created as the 22nd in the sequence. However, WordPress allows you to include more useful information in the address, such as the title of the article, the category, the time it was written, or even the author’s name. Most importantly, WordPress can make the address more appealing and understandable to visitors.

Pretty URL activation in stages:

  1. Begin by logging into the administration interface. This is usually found at www.yourblog.com/wp-admin. The name is the same as what you entered during installation, and the password can be found in your activation e-mail. 
  2. Go to the Settings – Permalinks tab; its exact name depends on which language version you have installed, whether the default English or another localization. This tab is at the top of the menu.
  3. You should now see a page with several nice address formats pre-set.

If none of the options appeals to you, you can now design your own. This is done in the last field. To make the URL unique, always include at least one unique string in the address – either the title of the article or a numeric ID.

For example, your entry could look like this: %day%-%monthnum%-%year%-%postname% (www.yourblog.com/01-12-2009-name-of-the-article).

If you don’t have permission to write to the .htaccess file when you compose your own URL structure, the system will generate its contents for you. Copy the code and paste it into the aforementioned .htaccess file.

Officially, the creators of WordPress recommend that the address begin with a numerical value, such as the day or month the article was added, rather than a text value. The URL should also not begin with a tag (label) or the title of the article – this avoids possible performance degradation and page-loading delays.

What should I do if I change the URL of my website?

Above all, avoid changing site addresses at all if possible. If the key product you sell has been found for two years at www.yourcompany.com/selling.html or www.yourcompany.com/catalog/str4.htm, the page can be found in search engines, it has already earned some rank, and statistics show that people frequently go to this page directly or because it was recommended to them by Google.

Even if an SEO expert tells you that the page should be called www.yourcompany.com/name-of-the-product, forget it. Clean URLs are nice, but they are not worth losing visitors over. Every URL change and redirection of existing pages may be detrimental to the website.

If the URL of the site must be changed for any reason, try to at least ensure a functional redirection of the old addresses to the new ones. The best way to accomplish this is to modify the .htaccess file. It will be easier if the old web addresses were created in a consistent manner.
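A minimal sketch of such a redirection in .htaccess, assuming an Apache server with mod_rewrite enabled (the old and new paths are illustrative, reusing the example addresses from this guide):

```apache
RewriteEngine On

# Old static address -> new address; R=301 tells search engines the move is permanent
RewriteRule ^catalog/str4\.htm$ /name-of-the-product [R=301,L]

# Old dynamic address (index.php?p=1234567) -> new address;
# the trailing "?" in the target drops the old query string
RewriteCond %{QUERY_STRING} ^p=1234567$
RewriteRule ^index\.php$ /product-1234567.html? [R=301,L]
```

If the old addresses follow a consistent pattern, a single rule with a capture group can cover all of them at once.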

How search engines are programmed to redirect

If you need to redirect, whether through a 301 or a 302 status, it’s a good idea to understand how search engines handle such redirects.

Assume you have a page A that sends visitors to page B.

Assume page A has an S-rank of 7/10. Page B is new and thus starts at zero. A link on page C leads to page A. When the robot visits page A and discovers the redirect, it removes page A from the search results but keeps it in its database. Page A maintains its position in the search engine’s rankings for a while, gradually decreasing. The link from page C to the original page A is counted in favour of page B. Although page B collects the links that originally boosted page A’s ranking, it does not simply replace page A: search engines see it as a new page and treat it as such.

How to make robots work

The essence of SEO optimization is that we try to make the job of search engines easier in exchange for a higher ranking in the search results. Giving search robots information about whether something has changed on the site since their last visit is one of the friendly steps towards them. When a robot visits your page – whether you asked it to or it simply came across a link to your website somewhere on the Internet – it downloads all of the text (or image) content into the search engine’s database, where the page is processed. If your page hasn’t changed since the bot’s last visit, it is pointless for the bot to download it again.

That’s why an HTTP status code can be set on the hosting server, providing the search robot with timely information that the content of the page has not changed in any way, eliminating the need to go through the entire download procedure. This status code, denoted by the number 304, is returned in response to a request carrying the If-Modified-Since HTTP header and indicates that the requested page has not changed since the previous request. When the server returns this response, it does not return the content of the corresponding page. This saves search engines time by not downloading pages that haven’t changed, allowing them to focus on those that have.
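The server-side decision can be sketched in a few lines of Python. This is a simplified, hypothetical helper, not a real framework API: it compares the page’s last modification time with the If-Modified-Since value and returns 304 with an empty body when nothing has changed.

```python
from datetime import datetime, timezone

# HTTP date format, e.g. "Mon, 05 Jun 2023 10:00:00 GMT"
HTTP_DATE = "%a, %d %b %Y %H:%M:%S GMT"

def respond(page_last_modified, if_modified_since):
    """Return (status_code, body) for a possibly conditional GET request."""
    if if_modified_since is not None:
        since = datetime.strptime(if_modified_since, HTTP_DATE).replace(tzinfo=timezone.utc)
        if page_last_modified <= since:
            return 304, b""  # unchanged: no body, the robot skips the download
    return 200, b"<html>...full page content...</html>"

last_modified = datetime(2023, 6, 1, tzinfo=timezone.utc)

# Robot sends If-Modified-Since newer than the page: nothing to download.
print(respond(last_modified, "Mon, 05 Jun 2023 10:00:00 GMT")[0])  # 304
# First visit, no conditional header: full download.
print(respond(last_modified, None)[0])  # 200
```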

What are the most frequently used status codes?

In the previous tip of our SEO training, I mentioned HTTP status codes. What exactly are status codes, and why are they important for search engine optimization? When a request for a page from your website is sent to your server (for example, when a page is crawled by a GoogleBot), your server responds by issuing an HTTP status code. This status code indicates the current status of the request. The robot obtains preliminary information about the website and the requested page by using the status code.

What does status code 200 mean?

Status code 200 denotes a successful request. The page was successfully displayed by the server, so the robot accesses it and downloads its content.

What does the status code 301 mean?

301 – permanently relocated. The requested page has been permanently relocated. When the server displays this message (in response to a GET or HEAD type of request), the requester is automatically redirected to a new location. If you redirect an existing website to another or use multiple domains for one presentation (for example, a hyphenated and non-hyphenated variant or an address with and without www), you should notify search engines with a 301 that the pages will be permanently relocated elsewhere.

What does the status code 302 indicate?

302 – temporarily relocated. This works like 301, except the search engine knows that the original URL should still be used for future requests. Use it only for temporary redirection, such as pointing the start page to another within the framework of a specific event and returning to the original address when the event ends. If you no longer intend to use the original address, do not use the 302 status to notify search engines that you have moved the page or pages, as search engines will continue to crawl and index the original location.

What does the status code 403 mean? 

403 – forbidden. The server rejects the robot’s or user’s request completely. For example, if you discover in your Google Search Console account or in server statistics that the robot received this code for its request, but your pages are still displayed normally, it is possible that your server or virtual space is restricting access to search engines. In this case, you will not be found in the search results.

What does 404 mean? 

404 means that the requested page does not exist. When a page cannot be found, the robot receives a 404 status code. Many websites have a special 404 page to which the server redirects you when the page you requested is renamed or removed. Later, we’ll talk about what an ideal 404 page should look like.

If you do not have a robots.txt file on the server, the server will sometimes return a 404 to robots. This is because every robot automatically requests this file from the domain root, even when no such file exists. If you have robots.txt and the server still returns this message (again, check the GSC or server log files), the file is most likely misnamed or not located in the domain’s root.
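Even an empty, permissive robots.txt placed in the domain root avoids these 404 responses. A minimal example; the sitemap URL is a placeholder:

```
User-agent: *
Disallow:

Sitemap: http://www.yourcompany.com/sitemap.xml
```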

What does the status code 500 indicate? 

500 – internal server error. This error usually occurs when the commands in the .htaccess file are incorrectly set. As a result, the search engine recognizes that an error has occurred on the server and that the request cannot be fulfilled.

What does the status code 503 indicate?

503 – service unavailable. Similar to 500, the server is currently unavailable, for example because it is overloaded or undergoing maintenance. This is usually only a temporary condition.
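To sum up the codes above, a crawler’s reaction can be sketched as a simple lookup. This is an illustrative simplification, not the actual logic of any search engine:

```python
def crawler_action(status):
    """Map an HTTP status code to a simplified crawler reaction."""
    actions = {
        200: "download and index the content",
        301: "follow the redirect; rank passes to the new URL over time",
        302: "follow the redirect but keep the original URL for future requests",
        304: "skip the download, the content has not changed",
        403: "access forbidden, do not index",
        404: "page not found, drop it from the index over time",
        500: "server error, retry later",
        503: "server temporarily unavailable, retry later",
    }
    return actions.get(status, "unknown status, handle conservatively")

print(crawler_action(304))  # skip the download, the content has not changed
```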

How to make use of page 404 (page not found)

It’s always inconvenient when a page that other pages link to is deleted for whatever reason. When a search engine arrives at such a page and discovers that it no longer exists, it will eventually remove it from its index. In better cases, such a situation is handled with a 404 page. As previously stated, 404 is the status code that a robot or browser receives when a page cannot be found. The page does not even have to be deleted; simply renaming the file is enough to cause this.

A standard 404 page informs the visitor that the requested page cannot be found on the server. However, in terms of SEO and, especially, user-friendliness, it is best to provide visitors with content that is as close to what they were looking for as possible. It could be a related article, a similar product from the same category, a list of similar products, a sitemap, or a search function that offers pages with similar content. For e-shops, a page that recognizes the original product the visitor or robot was looking for, explains that it is no longer on sale, and offers a replacement for it is ideal.
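A helpful 404 page along these lines might look as follows. This is a minimal HTML sketch; the category link, sitemap path, and search form action are placeholders:

```html
<!-- served with HTTP status 404, not redirected to a 200 page -->
<html>
<head><title>Page not found</title></head>
<body>
  <h1>Sorry, this page could not be found</h1>
  <p>The product may have been renamed or is no longer on sale. You might try:</p>
  <ul>
    <li><a href="/category/motorcycles/">Similar products in the same category</a></li>
    <li><a href="/sitemap.html">Sitemap</a></li>
  </ul>
  <form action="/search"><input name="q"> <input type="submit" value="Search"></form>
</body>
</html>
```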

How to avoid inappropriate duplicate content

We have already discussed in this SEO guide the use of commands in the robots.txt file. Another possibility was that the search engine detected duplicate or similar content and simply excluded one of the pages from indexing, which was unfortunately beyond our control. You can also use the redirection mentioned in the previous tip with the status 301. However, none of these alternatives is perfect.

As a result, search engines like Google, Yahoo, and Bing support the canonical link tag.

What exactly does that mean? If we have two identical or similar pages (for example, sorting the search results in an e-shop by price and then alphabetically, ascending or descending – the content is the same, only sorted differently), we can use this tag to tell the search engine which of the pages is more important to you.

From the perspective of the search engine, this is the same content; the page contains the same words, just in a different order. You can also use it in an e-shop that sells the same product in different variants, such as different sizes or colours, where the pages differ only in the photo.

In practice, it works as follows: on a secondary page, i.e. one that is not considered the main one, you insert a canonical link tag somewhere between <head> and </head>:

<link rel="canonical" href="http://www.mydomain.com/name-of-the-most-important-page.html" />

The search engine will then interpret this information in such a way that it will keep only this “more important” page in the database, resulting in a combination of possible ranks for different forms of URLs.

Remember that even if the content is the same from your perspective, the search engine treats it as a new, unique page the moment anything changes in the page address, such as a variable that determines the ordering of results or sessions in the URL. This results in unnecessary duplication.

Google supports this tag across multiple websites and domains; it is only a matter of time before other search engines enable it too.

Canonical URL rules

  • URLs can be absolute or relative, but absolute URLs are preferred.
  • The link must be in the same second-level domain.
  • It can point to a different subdomain (forum.domain.com/discussion.html > domain.com/forum/discussion.html).
  • It can link within the domain across protocols, such as from HTTP to HTTPS and vice versa.
  • Canonical links can be chained (A->B, B->C), but it is preferable to point to the most important – target page directly.
  • The URL must contain the same or very similar content, or it must be otherwise arranged. If the content is not identical or similar, the canonical link will be ignored.
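For the e-shop sorting example above, each sorted variant would point back to the main listing page. A sketch with placeholder URLs:

```html
<!-- placed on http://www.mydomain.com/motorcycles/?sort=price
     and on http://www.mydomain.com/motorcycles/?sort=name -->
<head>
  <link rel="canonical" href="http://www.mydomain.com/motorcycles/" />
</head>
```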

Where to hide data from robots

We may not want certain parts of our website to be indexed by search engines. This occurs, for example, when Google returns parts of pages that you do not believe are appropriate for a search engine query, or simply information that you do not want to be searchable. I purposefully do not use the word discoverable here, because the general rule is that if you do not want any information to be discoverable, the only truly reliable defence is simply not to put it on the Internet.

However, under normal circumstances, search engines will not see the content and text:

  • Drawn (written) in pictures. Surprisingly, this is a common mistake. For example, because the company’s name is only in the logo, you will not find it on the website. Search engines can find and index images, but they cannot yet read their content. There are technologies that can read scanned text, for example, but they are not currently used in practice by search engines.
  • Where the AJAX script was used to create the page. AJAX technology typically requires a user’s action, such as entering a few letters into a form field; AJAX will then automatically provide assistance without opening a new page. The bot will not see such text because it is not a regular user.
  • Located on a page with no links pointing to it, or on a page with a link hidden within a JavaScript element. However, be cautious, because a search engine may still discover the link: for example, if a visitor follows a link from such a page to another, the original page is recorded in the log files as the so-called referrer. If such statistics are freely available to search engines, they will include a link to your page. Even if there is no link to the page, it can be revealed if someone visits it with a toolbar installed, such as one that measures the Alexa rank.
  • Embedded into iframe elements. This is only true if there is no text link to the page embedded in an iframe.
  • Created in Flash. This applies only to some search engines, because Google, for example, can already read some Flash text. However, if you insert text in the form of an image into the animation, the situation is identical to the first one.
  • Information produced by Silverlight technology. While Google may already crawl Flash, it cannot read the content of other multimedia formats such as Silverlight. Texts in these formats are mostly visual, and search engines can only extract plain text and links. Of course, text embedded in a graphic, video, or audio file is invisible to the robot.
  • Placed on password-protected pages, because the search engine cannot access them if it does not know the password.

Google can specifically read content “hidden” behind the form, content in discussion forums where you must register, and texts in PDF if they have not been previously converted to a bitmap or so-called curves. There is no issue with text created in text or spreadsheet editors such as Word or Excel, presentations created in PowerPoint, and so on. But there’s more in the next tip…

What types of files can the robot index and crawl?

A typical search engine can crawl standard HTML files generated by PHP, ASP, or other languages, as well as pdf, doc, rtf, and ppt files.

Google is also able to crawl postscript ps formats, RSS and Atom feeds, DXF graphic formats, Google Earth files (kml, kmz), Lotus 1-2-3 files (wk1, wk2, wk3, wk4, wk5, wki, wks, wku), Lotus WordPro (lwp), plain text (.ans, .txt), MacWrite (mw), Microsoft Works (wks, wps, wdb), Microsoft Write (wri), Open Document Format (odt), flash animations (swf), Microsoft Excel tables (xls), and Wireless Markup Language (.wml, .wap) pages, which are typically optimized for mobile devices.

How to allow bots to read JavaScript-hidden content

In some cases, certain content needs to be displayed in response to a visitor’s action. In that case, you might consider using JavaScript. However, we know that search engines have issues with this. So what should we do? Don’t worry. The creators of scripting languages considered this possibility. For this, the special <noscript> tag is used.

Simply place the same content as in the JavaScript section between the paired <noscript> tags. The bot will skip the JavaScript but will still reach the content. However, make sure that both elements have the same content, in this case the same text in the JavaScript as between the <noscript> tags. If the alternative element contained significantly different content, search engines might conclude that you are attempting to serve different content to the crawler than to the visitors, which is considered an unfair practice and may subject your site to search engine penalties.
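A minimal sketch of the technique: the same text is written by the script and repeated inside the paired <noscript> tags, so a robot that ignores JavaScript still reads it. The sample sentence is illustrative:

```html
<script type="text/javascript">
  document.write("Reliable motorcycle Jawa 350, spare parts and accessories.");
</script>
<noscript>
  Reliable motorcycle Jawa 350, spare parts and accessories.
</noscript>
```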

Image optimization

Photos and images not only improve the appearance of your website, but they can also bring in new visitors. Search engines may display relevant images before text links in some cases. If such images appear on your website, you should be proud of yourself. But how should you do it? 

You will use the following SEO tools during image optimization:

  • Image names
  • Alternate image tags
  • Image titles
  • Page title (containing images)
  • The text surrounding the image

It is best to include the most relevant description and the required keywords everywhere possible. Forget about image names like DC012287.jpg, Pl050436.jpg, or scan065.jpg, which are typically generated by your phone, digital camera or scanner.

Do not forget about alternate image tags. You will not only comply with HTML standards, but you will also assist disabled users, and most importantly, you will inform search engines about what is in the image. The alternate text is added as an alt attribute to the img tag. Its original purpose is to be displayed when the image is unavailable for some reason; some older browsers also showed it when the cursor was moved over the image.

We write the alt attribute as follows:

<img src="jawa.jpg" alt="Jawa 350 Motorcycle">

Use relevant text around the image as well. For example:

<img src="jawa.jpg" alt="Jawa 350 motorcycle"> Reliable motorcycle Jawa 350

In addition to the alt attributes, remember to include other HTML standards-compliant attributes such as width (image width in pixels) and height (height of the image in pixels).

It is also appropriate to include a title that describes the image. Use this option and be creative instead of copying the alt description whenever possible.
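Putting the pieces together, a fully described image tag might look like this; the file name, title text, and dimensions are illustrative:

```html
<img src="jawa-350-motorcycle.jpg" alt="Jawa 350 motorcycle"
     title="Restored red Jawa 350" width="640" height="480">
```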

Remember the importance of descriptive link text when linking to images, just as with links to the HTML pages themselves. Instead of referencing the image with "find here" or "open in new window", refer to it as follows:

A photo can be found here: <a href="http://www.jawaweb.com/photos/jawa.jpg">motorcycle Jawa 350</a>

For image links, please do not use Flash buttons or JavaScript links. If you have a separate WWW page dedicated to displaying such an image, you have even more options. Then you have the option to optimize using the page title, headings, tags and so on.

How to optimize the PDF and other non-HTML files

The majority of the documents found in search engines are HTML files. However, there are times when we have pages where there is enough information in other types of files, such as price lists created in Excel or user manuals created in PDF or Word. Information hidden in such files can frequently bring in new and interesting visitors, so why throw it away when these files can also be optimized?

When you or your graphic designer create a PDF file, you probably don’t consider that it might one day be placed on the company’s website and that it should be easily searchable and indexable by search engines. For example, search for information about BMW cars in PDF files (filetype:pdf bmw). The first result is from a book about cars, the second is a catalogue of accessories for the new BMW from the car company’s website, and only in third place is the catalogue of the car I was looking for. That catalogue can be downloaded from a dealer’s website rather than from the car manufacturer’s website. None of the files is optimized for search engines.

At the same time, only a few minor interventions would be required. The browser displays a PDF file much like an HTML document, but unlike an HTML file, a PDF lacks a title tag.

So we can work with the following:

  • With a file name that will be included in the URL. It is preferable if keywords appear in the file name, such as superb-catalogue-accessories.pdf. The URL would then be something like: http://www.domain.com/for-download/superb-catalogue-accessories.pdf
  • With highlighted headings within the file. It is important to mention that instead of a page title, the search engine will usually use the first (approx.) 60 characters of the file or the first highlighted text in the PDF document (for example, text written in a larger font). In an advertisement or leaflet, this is usually the title of a book, an announcement, or a slogan.
  • With metadata, the name and description of the document in the PDF file’s properties. Depending on the program you use to create it, you can change the name, description, and keywords for each PDF file. In Adobe Acrobat, for example, you can find this option under File — Document Properties — Description. Metadata can be added to Microsoft Office files in the same way. Metadata is the first thing search engines look for in non-HTML files. However, metadata should reflect information from the document itself. Do not try to deceive search engines by including something in the metadata that users won’t find in the file.
  • With link text that directs the reader to the document itself. As with backlinks, the search engine guesses what the linked document is about based on the text of the link.
  • With the textual content of the document. In this case too, it should obviously include the necessary keywords, and the text in the PDF file should be in text format: a scanned image of a flyer converted to PDF is unreadable. Simply hover your mouse over a word to see whether it is in text format. The pointer should change to a text cursor, allowing you to select the given word, mark it, and possibly copy it. If the cursor remains passive over the word, it is most likely an image, and the text is therefore invisible to robots. If you would like to see how the search engine really sees your document, click the "View as HTML" link located on the search results page next to the file type description.

What is the best size for a WWW page?

It is ideal if the page is no larger than 100 KB, not counting images, linked cascading styles (CSS), or other attachments. Previously, search engines crawled only a certain number of characters on a page; fortunately, this limitation no longer exists, and the most commonly used robots can download the entire page. However, they dislike overly large pages, and such pages are not ideal for visitors either. For example, if you need to put long text on your WWW pages, such as a diploma thesis, it is definitely better to publish each chapter separately and divide the long ones into separate pages.

How to speed up page loading

Many factors can cause a page to load slowly. You must first determine why. The following are the most common causes of slow page loading:

  • Excessive page size 
  • Redirection via meta refresh tag
  • Multiple redirects
  • Page compilation from external sources 
  • Low-quality web hosting

If the HTML page is too large, it can be compressed. GZIP and DEFLATE are the two methods supported by browsers. The page is compressed on the server, and the browser "decompresses" it before displaying it. Such compression can be very effective, reducing the size, and thus the time and bandwidth required to download and display the page, by up to 70%. According to tests, the DEFLATE method is more than 20% more efficient than GZIP.
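The effect of both methods can be measured with Python’s standard library: the gzip module produces GZIP output and zlib produces a DEFLATE stream. A small experiment on repetitive markup; the HTML is made up, and real savings depend on the page:

```python
import gzip
import zlib

# Repetitive markup, which compresses very well.
html = ("<html><head><title>Product list</title></head><body>"
        + "<div class='product'><h2>Product</h2><p>Description</p></div>" * 200
        + "</body></html>").encode("utf-8")

gzipped = gzip.compress(html)   # as sent with Content-Encoding: gzip
deflated = zlib.compress(html)  # as sent with Content-Encoding: deflate

print(f"original: {len(html)} bytes")
print(f"gzip:     {len(gzipped)} bytes")
print(f"deflate:  {len(deflated)} bytes")
```

The DEFLATE output is slightly smaller here only because the GZIP container adds a larger header and trailer around the same compressed data.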

How to create pages from external sources

If you use nested elements on your pages, such as iframe, to load other pages, it can cause slowdowns, especially when composing pages from elements hosted on different servers. You can improve this by replacing such content where possible.

After all, search engines are not particularly good at indexing similarly nested elements. A similar situation can occur when we compose content from other external sources, such as JavaScript codes. Typically, advertising banners, various information boxes, and other external applications are inserted in this manner. In this case, we recommend placing these codes that transfer content from other sites at the bottom of the page, so that they don’t slow down the display of your own content if they have loading issues.

Why should you go with high-quality web hosting?

Perhaps the problem is not with your webmaster, but with a slow server connection. Contact your hosting administrator to determine the source of the problem.

The fact is that a slow response or server unavailability, and therefore page unavailability, can be a major problem. If search robots repeatedly arrive at your pages when they are unavailable, they may get the impression that the pages no longer exist and therefore need not be offered in search results.

Are meta tags necessary?

The meta tag defining the encoding is one that should not be missing from the page; for example, we know that some search engines only index certain encodings.

<html>

<head>

<title>…</title>

<meta …>

<meta …>

<meta …>

</head>

<body>

The page content itself

</body>

</html>

The first meta attribute is either name (general meta information) or http-equiv, a system attribute interpreted by the browser as the HTTP protocol header. 

E.g:

<meta name="information type" content="content of the information">

or:

<meta http-equiv="information type" content="content of the information">

Meta tags defining the page’s encoding

As previously stated, search engines are interested in the meta tag that defines the page’s language and encoding. Of course, there are other ways for a search engine to recognize a page’s coding, but relying on the meta tag is the most straightforward, so why not make it easy for the bots…

An example meta tag defining the page’s encoding is as follows:

<meta http-equiv="Content-Type" content="text/html; charset=windows-1250">

Which other meta tags are important to search engines?

You can also notify search engines about the content on your page that you do not want to be indexed. The following meta tag is then used for this:

<meta name="robots" content="noindex,follow">

This instructs bots not to index the page, but to follow its links. Change the value to index,nofollow if you want the bots to index the page but not follow the links.

The Googlebot robot also recognizes its own googlebot meta tag. An example of disallowing snippet listing and archiving is as follows:

<meta name="googlebot" content="nosnippet,noarchive">

Do search engines index meta tags, descriptions, and keywords?

When it comes to the most popular search engine, Google indexes the description meta tag. It considers it, especially when there is no other relevant textual content on the page itself, such as when the first page is replaced by an image, signpost, or animation (splash page) with no texts. In this case, it is preferable to add short content to the description tag so that search engines can at least display a snippet.

If you do not have a keywords meta tag on your site, don’t worry; search engines ignore it anyway.

SEO tools for creating meta tags

Avoid writing meta tags by hand and use one of the online meta tag generator tools. The time you save by not having to type out correct HTML meta tags can be better spent carefully crafting the sentences and phrases themselves.

Meta tag generators available online:

■ http://www.seochat.com/seo-tools/advanced-meta-tag-generator/

■ http://www.seochat.com/seo-tools/meta-tag-generator/

■ http://www.seologic.com/webmaster-tools/meta-tag-generator.php

■ http://www.seutility.com/en/Meta_Tag_generator.htm

What about site validity in terms of SEO?

Should the pages be valid or not? Is page validity important for search engine optimization? The first question has a straightforward answer: it is preferable if your pages are valid; it will not harm them. However, not all errors are equal: some are critical, while others are less so.

For example, failing to include an alternative image description (alt) is considered a validity violation, and we know that this description is useful, for instance, when searching for images. On the other hand, specifying the font face and style directly in the page’s text is also deemed invalid, but search engines don’t care about that. If the invalidity does not reduce the site’s functionality and viewability in common browsers, I would not be very concerned.

I will not repeat the well-known liberal programmers’ argument that even Google does not have valid sites, and instead will provide a link where you can check the validity of your site yourself: http://validator.w3.org/.

Make sure that the site is accessible rather than valid according to strict standards.

Where can I find out if my website is robot-friendly?

To determine whether or not your website is visible and accessible to search engine robots, you can use a variety of online tools and utilities. As a first step, use a text-based browser, such as Lynx, to browse the site.

You can also view your site in one of the online simulators, which you can access by going to http://crschmidt.net/services/lynx/name-of-the-domain.tld.

SEO on social media

How does social media promotion work?

You can use social media to expand your network of contacts (friends) who are interested in your activities, follow the news, and converse with you. The advantage is that you can constantly expand your social network.

You can attract a large number of people in this manner even if your budget is limited. A secondary method is seo link building, which will be discussed further below.

What is the distinction between a social network that is open and one that is closed – private?

This is a significant difference in terms of SEO. While a closed social network is represented by a private garden hidden behind a high stone wall, an open social network is represented by a freely accessible public playground. Anyone is welcome to come and have a good time.

Everyone who passes by, whether they are on the field or not, sees you and can form an opinion about you. You can have fun in a closed social network as well, but only those who are members can see you. Passers-by may believe that something is going on inside, that there is some fun going on, but they have no idea what kind or who is involved. A search engine, for example, can be a bystander. Second Life is an example of a typically closed network. The search engine will not be able to determine what is going on inside. As a result, such a network is unsuitable for building direct links.

Twitter and MySpace, for example, are open places where search engines can and do go. Even Facebook, which is otherwise a closed system, is aware of this, which is why some of its site types are also open; this primarily applies to so-called fan sites (Pages).

Links from business or fan pages that are open to the public are typically indexed and ranked, so links from them make sense.

How to make use of Facebook in SEO

Facebook is one of the Internet’s most popular social networks. You can use it to post photos, send short messages to your profile, and join various interest groups. A Facebook profile, on the other hand, is uninteresting from an SEO standpoint. It turns out that using the Pages tool is more appropriate. These are search engine friendly (unlike profiles and groups), they gain rankings, and you can easily link to them and work with them.

Unfortunately, common links on Facebook pages, as on Twitter (which we will discuss shortly), have the nofollow attribute. However, we have a simple trick for you to get links from your blog to Facebook without this restriction:

  1. Go to your Facebook page and click Edit Page.
  2. Navigate to the More applications tab and select one of the RSS readers, such as Simple RSS or Social RSS.
  3. Add an RSS reader to the site.
  4. Enter the RSS feed for your blog.

The nofollow attribute is not present in links obtained through an RSS reader. Remember to include keywords in your article titles if you want backlinks.

Of course, social networks can help you get links indirectly. Your friends will learn about your interesting article/product/service and will be able to link to it from their own websites. In any case, if you plan to use tools like Facebook, Twitter, or other social networks for promotion, keep in mind that promoting and increasing the link population is also beneficial to your profiles.

Twitter for SEO

Twitter (www.twitter.com) is a service that allows you to send 140-character messages to your profile. This is a type of microblog. Your profile is now at a single permanent address, and it will rise in the ranks over time. You can then link to your own projects from your profile, where you can transfer ranks. Anyone can follow your profile, which allows you to develop your company’s branding.

Create unique posts that pique people’s interest. Regular updates will bring you new readers. Promote your Twitter account: mention it in a company presentation, on your SEO blog, or in a discussion forum. Gain "followers" (like Facebook friends) who will see your posts. Receiving information from other profiles is also beneficial.

SEO for Instagram

There are many factors that can help make an Instagram profile interesting for a brand or a professional who also communicates through this channel. Once you take the right road and apply the best SEO practices in your daily management of this social site, you will easily notice the difference between an amateur profile and a more professional one.

Good SEO for Instagram can help you effectively promote your account, increase its visibility and raise its engagement level, creating a passionate community of followers. The impact of SEO on Instagram followers can, in fact, be decidedly significant and drive a marked increase in them. All these factors, especially for a profile linked to a specific business, can easily translate into greater conversions, purchases or revenue.

What are the most popular other social networks?

MySpace is one of the most popular open networks (www.myspace.com). It operates on the same principles as Facebook. You can make your own profile, search for friends, upload photos, and so on. Unlike the other social networks mentioned previously, links from MySpace profiles do not yet have the nofollow attribute.

LinkedIn (www.linkedin.com) should also be mentioned. It is an employment-focused social network, and the information in the profile is similar to that in the CV. Write down where you worked and what you did.

Discussions, comments, and competitions

What role do the discussion links play?

There are numerous discussion boards dedicated to various topics. Sign up for discussions about services similar to the one you provide. In discussions, give advice and assist others. This is how you establish natural authority in your field. Most discussion forums allow you to add a so-called signature to your posts.

Don’t make a single post in a discussion forum. Generally, I post once a day or at some other time interval. Each post includes a link to one of your projects. The more you post, the more links you will receive from various sources.

Besides SEO link building, you will of course also increase website traffic: people enjoy visiting the website of someone who gave them good advice.

How to sign your posts in a discussion

Don’t stuff your signature with too many links; place the links in sentences, just as you would on web pages. Don’t make your signature flash or perform similar tricks, either — the public will not receive it well (quite the contrary). If a discussion board does not allow automatic signatures, sign each post individually.

When should you start your own community?

Many people want to start their own discussion forum. However, very few actually manage to build one in which regular visitors participate. The issue with a discussion forum may be that not enough people visit it, causing it to become “dead”: there will be too few new posts, and the forum will become unattractive.

People prefer to participate in discussions where there is life, rather than those where no one has written anything in the last month.

If you can get the forum up and running, it is a great promotional tool. However, this does not always work out. You can kick-start the forum by inviting your friends and acquaintances to participate, or arrange for students to lead discussions in it.

How to ruin your reputation on a discussion board

Many people join forums solely to be able to post advertisements. Avoid similar actions. Forums are mostly moderated, and such behaviour can result in a ban. Create forum posts that are useful to readers. A discussion forum is an excellent servant but a terrible master. You can quickly tarnish your reputation on it.

Why is comment spam inappropriate?

You can include a link in a blog’s comments. But don’t post on different blogs just to leave a link on them. If you write a quality comment that adds a new idea to the discussion or a different perspective on the problem, you deserve a link from the site; however, if you simply say, “Good article,” the blog owner may suspect you of comment spam. One post of this type is acceptable, but you should not annoy the blog owner with such posts on a regular basis.

It also looks silly if you sign your comment with something like “disclaimer” instead of your name or nickname. That is because the commenter’s name also serves as the anchor text for the link to their website.

On the Internet, you can also find scripts that recognize blogs running on widespread systems, such as the aforementioned WordPress, and then automatically “supply” comments. Such scripts fall under unethical SEO practices, and we strongly advise against using them.

What does an SEO competition entail?

In most SEO competitions, the goal is to rank as high as possible for a chosen keyword.

Why should you prefer activities that directly benefit others?

Many websites are attempting to outperform their competitors by ranking as high as possible in search engines. They are the frontrunners in a battle to acquire more links. The same is true for pay-per-click (PPC) advertising. Advertisers are simply outdoing one another and raising the cost of advertising among themselves.

As a result, don’t concentrate solely on winning this race, into which you would have to invest a lot of money: first place is frequently determined by the amount spent on links rather than by an original idea.

Create your websites to assist real people. When creating links, keep this rule in mind. Rather than spending money to keep up with the competition, invest directly in your website and its value to users. If you provide something to a website visitor that other sites do not, they will advertise for you and link to your website.