How to compile the right semantic core: selecting queries, cleaning by broad and exact frequency, and clustering. A selection of articles on semantics

At the moment, factors such as content and structure play the most important role in search engine promotion. But how do you decide what to write about and which sections and pages to create on the site? On top of that, you need to find out exactly what the target visitor of your resource is interested in. To answer all these questions, you need to collect a semantic core.

A semantic core is a list of words and phrases that fully reflects the theme of your site.

In this article I will show how to collect it, clean it, and break it down into a structure. The result will be a complete site structure with queries clustered across its pages.

Here is an example of a query core broken down into a structure:


By clustering I mean breaking your search queries down into individual pages. This method is relevant for promotion in both Yandex and Google. In this article I will describe a completely free way to create a semantic core, but I will also show options involving various paid services.

After reading the article, you will learn how to:

  • Select the right queries for your topic
  • Collect the most complete core of phrases
  • Clean out irrelevant queries
  • Group queries and create a structure

Having collected the semantic core, you will be able to:

  • Create a meaningful structure on the site
  • Create a multi-level menu
  • Fill pages with text and write their meta descriptions and titles
  • Track your website's positions for these queries in search engines

Collection and clustering of the semantic core

Correct compilation for Google and Yandex begins with identifying the main key phrases of your topic. As an example, I will demonstrate the process using a fictitious online clothing store. There are three ways to collect a semantic core:

  1. Manual. Using the Yandex Wordstat service, you enter your keywords and manually select the phrases you need. This method is quite fast if you need to collect keys for a single page, but it has two disadvantages.
    • The accuracy of the method is poor: you can always miss some important words.
    • You will not be able to assemble a semantic core for a large online store; the Yandex Wordstat Assistant plugin simplifies the work, but does not solve the problem.
  2. Semi-automatic. This method involves using a program to collect the core and then manually breaking it down into sections, subsections, pages, etc. In my opinion, this approach to compiling and clustering the semantic core is the most effective, because it has a number of advantages:
    • Maximum coverage of the topic.
    • High-quality breakdown.
  3. Automatic. There are now several services that offer fully automatic core collection or clustering of your queries. I do not recommend the fully automatic option, because the quality of both collection and clustering is currently quite low. Automatic query clustering is gaining popularity and has its place, but you still need to merge some pages manually, since the system does not produce an ideal ready-made solution. Besides, in my opinion, you will simply get confused and will not be able to immerse yourself in the project.

To compile and cluster a full-fledged, correct semantic core for any project, I use the semi-automatic method in 90% of cases.

So, in order, we need to go through the following steps:

  1. Select queries for the topic
  2. Collect the core based on these queries
  3. Clean out non-target queries
  4. Cluster (break the phrases down into a structure)

I showed an example of selecting a semantic core and grouping into a structure above. Let me remind you that we have an online clothing store, so let’s start looking at point 1.

1. Selection of phrases for your topic

At this stage we will need the Yandex Wordstat tool, your competitors and logic. In this step, it is important to collect a list of phrases that are thematic high-frequency queries.

How to select queries to collect semantics from Yandex Wordstat

Go to the service, select the city(s)/region(s) you need, enter the highest-volume ("fattest") queries in your opinion, and look at the right column. There you will find the thematic words you need, both for other sections and high-frequency synonyms of the entered phrase.

How to select queries before compiling a semantic core using competitors

Enter the most popular queries into the search engine and select one of the most popular sites, many of which you most likely already know.

Pay attention to the main sections and save the phrases you need.

At this stage, the important thing is to cover all possible words from your topic and not miss anything; then your semantic core will be as complete as possible.

Applied to our example, we need to create a list of the following phrases/keywords:

  • Clothes
  • Shoes
  • Boots
  • Dresses
  • T-shirts
  • Underwear
  • Shorts

Which phrases are pointless to enter: "women's clothing", "buy shoes", "prom dress", etc. Why? These phrases are "tails" of the queries "clothes", "shoes", and "dresses" and will be added to the semantic core automatically at the second stage of collection. That is, you can add them, but it would be pointless double work.

Which keys do you need to enter? "Low boots" and "ankle boots" are not the same thing as "boots". What matters is the word form, not whether the words share the same root.

For some, the list of key phrases will be long, but for others it consists of one word - don’t be alarmed. For example, for an online store of doors, the word “door” may well be enough to compile a semantic core.

So, at the end of this step we should have a list like this.

2. Collecting queries for the semantic core

For proper, full collection, we need a program. I will show an example using two programs simultaneously:

  • Paid - Key Collector. For those who have it or are ready to buy it.
  • Free - Slovoeb. For those who are not ready to spend money.

Open the program.

Create a new project and name it, for example, Mysite.

Now to further collect the semantic core, we need to do several things:

Create a new Yandex mail account (using an old one is not recommended, because it can be banned for making too many requests). Suppose you created an account, for example [email protected], with the password super2018. Now specify this account in the settings as ivan.ivanov:super2018 and click the "save changes" button below. More details in the screenshots.

Select a region for compiling the semantic core. Choose only the regions in which you are going to promote, and click save. Both the frequency of queries and whether they make it into the collection at all depend on this.

All settings are done; all that remains is to add the list of key phrases prepared in the first step and click the "start collecting" button.

The process is completely automatic and quite long. You can make coffee for now, but if the topic is broad, for example, like the one we are collecting, then this will last for several hours 😉

Once all the phrases are collected, you will see something like this:

And this stage is over - let's move on to the next step.

3. Cleaning the semantic core

First, we need to remove requests that are not interesting to us (non-target):

  • Related to another brand, for example, Gloria Jeans, Ecco
  • Informational queries, for example, "what to wear with boots", "jeans size"
  • Similar in topic but not related to your business, for example, "used clothing", "clothing wholesale"
  • Queries not related to the topic at all, for example, "Sims dresses", "puss in boots" (there are quite a lot of these after collecting the semantic core)
  • Queries mentioning other regions, metro stations, districts, or streets (no matter which region you collected queries for, other regions still slip in)

Cleaning is done manually as follows:

Enter a word and press Enter; if the search finds the phrases we want to remove in our semantic core, select the results and press Delete.

I recommend entering not the whole word but its stem, without prepositions and endings: if we type the stem "glori", it will find both "buy jeans at gloria" and its inflected forms, whereas spelling out "gloria" in full would miss some of them.

Go through all the points this way and remove unnecessary queries from the semantic core. This may take a significant amount of time, and you may end up deleting most of the collected queries, but the result will be a complete, clean, and correct list of all possible queries to promote your website.

Now export all your queries to Excel.

You can also remove non-target queries from the semantics en masse if you have a list of stop words; this is easy to do for typical groups of words such as cities, metro stations, and streets. You can download the list of words that I use at the bottom of the page.
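If you prefer to script this step, the mass cleaning by stop words can be sketched in a few lines of Python. The stop words and queries below are illustrative placeholders, not my actual list:

```python
# Sketch of mass cleaning by stop words (all lists here are illustrative).
STOP_WORDS = ["gloria", "ecco", "wholesale", "used", "sims"]  # brands, other niches, etc.

def is_target(query: str, stop_words=STOP_WORDS) -> bool:
    """A query is kept only if it contains none of the stop words."""
    q = query.lower()
    return not any(stop in q for stop in stop_words)

queries = [
    "buy red dress",
    "buy jeans at gloria",
    "clothing wholesale",
    "men's t-shirts price",
]
clean = [q for q in queries if is_target(q)]
print(clean)  # only the target queries survive
```

The same substring logic as the manual search box applies, so stems work here too: a stop word "glori" would catch all inflected forms at once.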

4. Clustering of the semantic core

This is the most important and interesting part - we need to divide our requests into pages and sections, which together will create the structure of your site. A little theory - what to follow when separating requests:

  • Competitors. Look at how the semantic cores of your competitors from the TOP are clustered and do the same, at least for the main sections. Also check which pages appear in the search results for low-frequency queries. For example, if you are not sure whether to create a separate section for the query "red leather skirts", enter the phrase into the search engine and look at the results. If the results contain resources with such sections, it makes sense to create a separate page.
  • Logic. Do the entire grouping of the semantic core using logic: the structure should be clear and form a structured tree of pages with categories and subcategories in your head.

And a couple more tips:

  • It is not recommended to place fewer than 3 queries on a page.
  • Don't make too many nesting levels; try to keep it to 3-4 (site.ru/category/subcategory/sub-subcategory).
  • Don't make long URLs: if you end up with many nesting levels when clustering the semantic core, try to shorten the URLs of categories high in the hierarchy, i.e. instead of "your-site.ru/zhenskaya-odezhda/palto-dlya-zhenshin/krasnoe-palto" use "your-site.ru/zhenshinam/palto/krasnoe".

Now to practice

Clustering the core: an example

To begin with, let’s categorize all requests into main categories. Looking at the logic of competitors, the main categories for a clothing store will be: men's clothing, women's clothing, children's clothing, as well as a bunch of other categories that are not tied to gender/age, such as simply “shoes”, “outerwear”.

We group the semantic core using Excel. Open the file and proceed:

  1. We break it down into main sections
  2. Take one section and break it into subsections

I will show this using one section as an example: men's clothing and its subsections. To separate some keys from the others, select the entire sheet and click Conditional Formatting -> Highlight Cells Rules -> Text that Contains.

In the window that opens, type "men" (the stem of "men's") and press Enter.

Now all our keys for men's clothing are highlighted. It is enough to use a filter to separate the selected keys from the rest of our collected semantic core.

So let’s turn on the filter: you need to select the column with queries and click sort and filter->filter

And now let's sort

Create a separate sheet. Cut the highlighted rows and paste them there. You will keep splitting the core with this method later on.
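For large cores, the same "text contains" trick can be scripted instead of clicked. Below is a minimal Python sketch of the idea with a made-up toy core; the word-boundary guard plays the role of matching the stem of "men's" without also catching "women's":

```python
import re

# Toy core; in practice this is the query list exported to Excel in step 2.
core = [
    "men's jackets buy",
    "men's t-shirts price",
    "women's dresses",
    "children's shoes",
]

# Same idea as Excel's "Text that Contains", but with a word-boundary guard
# (\b) so the stem "men" does not also match "women".
pattern = re.compile(r"\bmen", re.IGNORECASE)

mens = [q for q in core if pattern.search(q)]      # -> "Men's clothing" sheet
rest = [q for q in core if not pattern.search(q)]  # -> stays in "All queries"
```

The two lists map directly onto the two sheets described below; repeating the split with a new stem carves out each further subsection.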

Rename this sheet "Men's clothing" and the sheet with the rest of the semantic core "All queries". Then create another sheet, call it "Structure", and make it the very first one. On the structure page, create a tree. You should get something like this:

Now we need to divide the large men's clothing section into subsections and sub-subsections.

For ease of use and navigation through your clustered semantic core, add links from the structure to the corresponding sheets. To do this, right-click the desired item in the structure and do as shown in the screenshot.

Now methodically separate the queries by hand, at the same time deleting anything you may have failed to notice and remove at the core-cleaning stage. Ultimately, thanks to the clustering of the semantic core, you should end up with a structure similar to this:

So, what we have learned to do:

  • Select the queries we need to collect the semantic core
  • Collect all possible phrases for these queries
  • Clean out "garbage"
  • Cluster and create structure

What you can do next thanks to such a clustered semantic core:

  • Create a structure on the site
  • Create a menu
  • Write texts, meta descriptions, titles
  • Track positions to monitor query dynamics

Now a little about programs and services

Programs for collecting the semantic core

Here I will describe not only programs, but also plugins and online services that I use

  • Yandex Wordstat Assistant is a plugin that makes it convenient to select queries from Wordstat. Great for quickly compiling the core for a small site or 1 page.
  • Key Collector (Slovoeb is its free counterpart) is a full-fledged program for collecting and clustering a semantic core. It is very popular. It has a huge amount of functionality beyond its main purpose: selecting keys from a number of other systems, optional auto-clustering, collecting positions in Yandex and Google, and much more.
  • Just-magic is a multifunctional online service for core compilation, auto-clustering, text quality checking, and other functions. The service is shareware; full operation requires a subscription fee.

Thank you for reading the article. Thanks to this step-by-step manual, you will be able to create the semantic core of your website for promotion in Yandex and Google. If you have any questions, ask in the comments. Below are the bonuses.


The semantic core of a site consists of the keywords (queries) that users enter on the Internet to search for the services, products, and other information that the site offers. For webmasters, it is an action plan for promoting the resource. Ideally, the semantic core of a site is created once, before optimization and promotion begin.


The semantic core of a website is usually compiled in several stages:

  1. All sorts of words (phrases) appropriate to the topic of the site are selected. At first, you can limit yourself to 100–200 search queries. To know which queries suit you, answer the question "What do I want to dedicate my site to?"
  2. The semantic core is expanded through associative queries.
  3. Inappropriate words are eliminated. Here you filter out the phrases you will not use to promote your site; there are usually more than half of them.
  4. Highly competitive queries for which there is no point in promoting the site are eliminated. Typically, three words out of five or more are removed.
  5. Finally, the list of search queries is correctly distributed across the resource's pages. It is recommended to leave highly competitive queries on the main page of the resource; less competitive ones should be grouped by meaning and placed on other pages. To do this, create a document in Excel and break the keywords down by page.

Selection of search queries and checking frequency

The first thing you need to do is collect as many different queries as possible on your topic that interest users on the Internet. There are two methods for this:

  • Free ones: Yandex Wordstat, Slovoeb, the good old manual method, Google suggestions (External Keyword Tool), analysis of competitors' semantics, and search hints.
  • Paid ones: Key Collector, Semrush, the Pastukhov databases, and some other services.

These tools suit different purposes (for example, Semrush is best used for Western, non-Runet markets). Of course, all of this can be entrusted to optimizers, but then there is a chance you will receive an incomplete semantic core.

Many people use Pastukhov’s database to collect key phrases, but with Key Collector it is much more convenient to collect queries from Yandex and Google statistics services.

At the initial stage, it is better to collect queries in Excel; it looks like this:


If Google is more important for your resource, then focus on it, but also take into account and analyze keywords from Yandex. It is also very important to collect a long tail of low-frequency queries; they will get you traffic much faster.

Another option you can use is to find out key phrases (words) from your competitors and use them. At this stage, you simply collect as many key phrases (words) that are relevant to the topic of your resource as possible, and then move on to the next stage - filtering.

Analysis of requests, removal of dummies

This stage is already simpler; here you need to filter out dummy words and those that are not related to the theme of the site. For example, you have lunch delivery in Kyiv, but there are other cities on the list.

How do you identify empty queries? Go to Yandex Wordstat and enter a keyword:


You see 881 impressions per month, but to be more precise:


Now a completely different picture emerges. Maybe this is not the best example, but the main thing is that you understand the essence: there are many key phrases that appear to have sufficient traffic, although in reality there is nothing there. That is why you need to weed out such phrases.
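Once you have both numbers for each phrase (the broad frequency as typed and the exact frequency from Wordstat's "!query" operator), the dummy check can be automated. A minimal sketch, assuming a 5% exact-to-broad share as the cut-off; this threshold is a rule of thumb, not an official figure:

```python
# Flag "dummy" phrases whose exact-match frequency is a tiny share of the
# broad one. The 5% threshold is an illustrative assumption.
def is_dummy(broad: int, exact: int, min_share: float = 0.05) -> bool:
    if broad == 0:
        return True
    return exact / broad < min_share

# (query, broad frequency, exact "!query" frequency) - illustrative numbers
stats = [
    ("lunch delivery", 881, 15),
    ("buy boots", 12000, 3400),
]
for query, broad, exact in stats:
    print(query, "dummy" if is_dummy(broad, exact) else "ok")
```

Run over an exported core, this flags the phrases worth double-checking by hand before deletion.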

For example, if a person, before (or after) typing the query "lunch delivery", entered another phrase in the search bar (within one search session), Yandex assumes that these search phrases are somehow related. If such a relationship is observed among several people, these associative queries are shown in the right column of Wordstat.


In the Wordstat window, such search queries are sorted in descending order of the frequency with which they were entered together with the main query this month (the frequency of their use in the Yandex search engine is shown). Use this information to expand the semantic core of your resource.

Distribution of requests across pages

After this, you need to distribute the keywords (phrases) you collected on the pages of your site. Distribution is much easier when you don’t yet have the pages themselves.

Focus primarily on the keywords in search queries and their frequency. As for competition: allocate the home page to one or two highly competitive queries.

For moderately competitive or low competitive queries, optimize section and article pages accordingly.

If there is semantic similarity in the search queries, simply collect the same phrases and define them in one group. When creating keywords to promote a resource, always use not only standard tools, but also a creative approach.

By combining non-standard and classical methods, you can simply and quickly create the semantic core of a site, choose the most optimal promotion strategy and achieve success much faster!

Hi all! Today's article is devoted to how to correctly assemble a semantic core (SC). If you are engaged in SEO promotion in Google and Yandex and want to increase organic traffic and sales, this material is for you.

To get to the bottom of the truth, we will study the topic from “A to Z”:

In conclusion, we will consider the general rules for compiling the SC. So let's get started!

Semantic core: what is it and what are the queries?

The semantic core of a site (the SC) is a set of words and phrases that exactly corresponds to the structure and theme of the resource. Simply put, these are the queries by which users can find the site on the Internet.

It is the correct semantic core that gives search engines and the audience a complete picture of the information presented on the resource.

For example, if a company sells ready-made postcards, then the semantic core should include the following queries: “buy a postcard”, “postcard price”, “custom postcard” and the like. But not: “how to make a postcard”, “do-it-yourself postcard”, “homemade postcards”.

Interesting to know: LSI copywriting. Will the technique replace SEO?

Classification of requests by frequency:

  • High-frequency queries (HF) - the ones most often typed into the search bar (for example, "buy a postcard").
  • Mid-frequency queries (MF) - less popular than HF keys, but still of interest to a wide audience ("buy postcard price").
  • Low-frequency queries (LF) - phrases requested very rarely ("buy an art postcard").

It is important to note that there are no clear boundaries separating HF from MF and LF, since they vary by topic. For example, for the query "origami" the HF benchmark is 600 thousand impressions per month, while for "cosmetics" it is 3.5 million.

If we turn to the anatomy of a key, a high-frequency query consists only of the body, while mid- and low-frequency queries are supplemented by a specifier and a "tail".

When forming a semantic core, you need to use all types of frequency, but in different proportions: minimum HF, maximum LF and average amount of MF.

To make it clearer, let's draw an analogy with a tree. The trunk is the most important request on which everything rests. Thick branches located closer to the trunk are mid-frequency keys, which are also popular, but not as popular as HF. Thin branches are low-frequency words that are also used to search for the desired product/service, but rarely.
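As a rough illustration of the tree, frequency bands can be assigned programmatically. The thresholds below are deliberately made up, since, as noted above, the real boundaries depend on the topic:

```python
# Bucket keys into HF / MF / LF. The boundaries depend on the topic,
# so these thresholds are illustrative assumptions, not fixed rules.
def frequency_band(shows_per_month: int, hf: int = 10_000, mf: int = 1_000) -> str:
    if shows_per_month >= hf:
        return "HF"
    if shows_per_month >= mf:
        return "MF"
    return "LF"

# Illustrative monthly impression counts for the postcard example.
core = {"buy a postcard": 45_000, "buy postcard price": 3_200, "buy an art postcard": 40}
bands = {q: frequency_band(n) for q, n in core.items()}
```

With the bands assigned, keeping the recommended proportions (few HF, some MF, many LF) becomes a simple counting check over the core.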

Separation of keys by competitiveness:

  • highly competitive (HC);
  • medium-competition (MC);
  • low-competition (LC).

This criterion shows how many web resources use a given query for promotion. Everything is simple here: the higher the competitiveness of a key, the harder it is to break into and stay in the top 10 with it. Low-competition keys are also not worth much attention, since they are not very popular on the web. The ideal option is to promote with MC queries, with which you can realistically take first place in a stable business niche.

Classification of requests according to user needs:

  • Transactional - keys associated with an action (buy, sell, upload, download).
  • Informational - used to obtain information (what, how, why, how much).
  • Navigational - help find information on a specific resource ("buy a telephone socket").

The remaining keywords, where the user's intention is difficult to understand, are put into the "Other" group (for example, the bare word "postcard" raises many questions: buy? make? draw?).
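This triage by intent is easy to sketch with marker words. The marker lists below are illustrative assumptions and would need tuning per topic; navigational queries are left out because detecting them requires a list of brand or site names:

```python
# Rough intent triage by marker words (lists are illustrative assumptions).
# Navigational detection is omitted: it needs a brand/site-name dictionary.
TRANSACTIONAL = {"buy", "sell", "order", "price", "download"}
INFORMATIONAL = {"how", "what", "why", "diy"}

def intent(query: str) -> str:
    words = set(query.lower().split())
    if words & TRANSACTIONAL:
        return "transactional"
    if words & INFORMATIONAL:
        return "informational"
    return "other"  # ambiguous keys, like the bare word "postcard"
```

For a commercial site like the postcard example, the "transactional" bucket is what goes into the core, while "informational" keys feed the blog content plan.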

Why does a website need a semantic core?

Collecting a semantic core is painstaking work that requires a lot of time, effort, and patience. It is impossible to create a correct, working SC in just a couple of minutes.

A completely reasonable question arises here: is it even worth spending effort on selecting a semantic core for a site? If you want your Internet project to be popular, constantly increase your customer base and, accordingly, increase the company’s profits, the answer is unequivocal: “YES.”

Because collecting the semantic core helps:

  • Increase the visibility of the web resource. Search engines such as Yandex and Google will find your site by the keywords you select and offer it to users interested in those queries. As a result, the influx of potential customers grows, and the chances of selling a product or service increase.
  • Avoid competitors' mistakes. When creating an SC, you necessarily analyze the semantic cores of the competitors occupying the first positions in the search results. By studying the leading sites, you can determine which queries help them stay in the top, which topics they write texts on, and which ideas have failed. During competitor analysis you may also come up with ideas for developing your own business.
  • Build the site structure. It is recommended to use the semantic core as an "assistant" for creating the website structure. By collecting the complete SC, you can see all the queries users enter when searching for your product or service. This will help you decide on the main sections of the resource; most likely, you will need to create pages you didn't even think about initially. It is important to understand that the SC only suggests users' interests. Ideally, the site structure matches the business area and contains content that meets the needs of the audience.
  • Avoid spam. After analyzing the semantic cores of top competitor sites, you can determine the optimal keyword frequency, since there is no universal query-density figure for all pages of a resource: everything depends on the topic and type of page, as well as the language and the key itself.

How else can you use the semantic core? To create the right content plan. Properly collected keys will suggest topics for texts and posts that are of interest to your target audience.

Conclusion: it is almost IMPOSSIBLE to create an interesting, popular, and profitable Internet project without a semantic core.


Preparing to collect the semantic core for the site

Before creating the semantic core of the site, you need to perform the following steps:

I. Study the company’s activities (“brainstorming”)

Here it is important to write down ALL the services and goods the organization offers. For example, to collect a semantic core for an online furniture store, you can use the following queries: sofa, armchair, bed, hallway, cabinet + restoration, repair. The main thing is not to miss anything and not to add anything superfluous. Only relevant information: if the company does not sell poufs or repair furniture, these queries are not needed.

In addition to brainstorming, you can use Google Analytics and Yandex.Metrica (Fig. 1), or your accounts in Google Search Console and Yandex.Webmaster (Fig. 2). They will tell you which queries are most popular with your target audience. Such help is only available to sites that are already up and running.

Services to help:

  • Advego - works on the same principle as Istio.com.

  • Simple SEO Tools - a free service for SEO analysis of a site, including the semantic core.

  • Lenartools - works simply: load the pages from which you need to "pull" keys (max 200), click "Let's go", and you get a list of the words most often used on those resources.

II. Analyze the semantic cores of competitor sites:

  • SEMRUSH - add the resource address, select the country, click "Start Now", and receive the analysis. The service is paid, but 10 free checks are provided upon registration. Also suitable for collecting keys for your own business project.

  • Searchmetrics - a very convenient tool, but paid and English-only, so it is not accessible to everyone.

  • SpyWords - a service for analyzing a competitor's activity: budget, search traffic, ads, queries. A "reduced" set of functions is available for free; for a fee you can get a detailed picture of the progress of the company you are interested in.

  • Serpstat - a multifunctional platform that reports on keywords, rankings, competitors in Google and Yandex search results, backlinks, etc. Suitable both for selecting keywords and for analyzing your own resource. The only negative is that the full range of services is available only after paying for a tariff plan.

Another effective method of expanding the semantic core is to use synonyms. Users can search for the same product or service in different ways, so it is important to include all alternative keys in the SC. Suggestions in Google and Yandex will help you find synonyms.

Advice. If the site is informational, first select the queries that are central to the resource and on which promotion is planned, and only then the seasonal ones. For example, for a web project about fashion trends in clothing, the key queries will be: fashion, women's, men's, children's; and, so to speak, the "seasonal" ones: autumn, winter, spring, etc.

How to assemble a semantic core: detailed instructions

Having decided on a list of queries for your site, you can begin collecting the semantic core.

It can be done:

I. FREE using:

Wordstat Yandex

Yandex Wordstat is a very popular online service with which you can:

  • collect the semantic core of the site with statistics for the month;
  • get words similar to the query;
  • filter keywords entered from mobile devices;
  • find out statistics by city and region;
  • determine seasonal fluctuations of keys.

A big drawback: you have to "unload" the keys manually. But if you install the Yandex Wordstat Assistant extension, working with the semantic core speeds up significantly (relevant for the Opera browser).

It's easy to use: click "+" next to the desired key, or click "add all". Queries are automatically transferred to the extension's list. After collecting the SC, transfer it to a spreadsheet editor and process it. Important advantages of the extension: duplicate checking, sorting (alphabetical, by frequency, by order of addition), and the ability to add keys manually.

Step-by-step instructions on how to use the service are given in the article: Yandex. Wordstat: how to collect key queries?

Google Ads

A keyword planner from Google that lets you select a semantic core online for free. The service finds keywords based on the queries of Google search users. You must have a Google account to work with it.

The service offers:

  • find new keywords;
  • see the number of requests and forecasts.

To collect the semantic core, you need to enter a query, selecting the location and language. The program shows the average number of requests per month and the level of competition. There is also information about ad impressions and the bid to display an ad at the top of the page.

If necessary, you can set a filter by competition, average position and other criteria.

It is also possible to request a report (the program shows step-by-step instructions on how to do this).

To study traffic forecasting, just enter a query or a set of keys in the “See the number of queries and forecasts” window. The information will help determine the effectiveness of the strategic plan for a given budget and rate.

The "disadvantages" of the service include the following: there is no exact frequency (only a monthly average); it does not show encrypted Yandex keys and hides some Google ones. But it does determine competition and allows you to export keywords in Excel format.

SlovoEB

This is a free version of Key Collector, which has a lot of useful features:

  • quickly collects a semantic core from the right and left columns of WordStat;
  • performs batch collection of search tips;
  • determines all types of frequency;
  • collects seasonality data;
  • allows you to perform batch collection of words and frequency from Rambler.Adstat;
  • Calculates KEI (Key Effectiveness Index).

To use the service, just enter your account information in Direct (login and password).

If you want to know more, read the article: Slovoeb. Basics and instructions for use
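Since Slovoeb can calculate KEI, it is worth showing the idea behind that index. KEI has several competing definitions; the sketch below uses one classic variant (frequency squared divided by the number of competing pages), which is not necessarily the exact formula the program applies:

```python
# One classic KEI variant: frequency squared over the number of competing
# pages. Tools like Slovoeb/Key Collector have their own configurable
# formulas; this only illustrates the idea.
def kei(frequency: int, competing_pages: int) -> float:
    if competing_pages == 0:
        return float("inf")  # no competition at all
    return frequency ** 2 / competing_pages

# Higher KEI = more attractive: many searches, few competing pages.
print(kei(1000, 50_000))     # 20.0
print(kei(1000, 5_000_000))  # 0.2
```

Whatever the exact formula, the interpretation is the same: sort keys by KEI descending and prioritize the top of the list.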

Bukvariks

An easy-to-use and free program for collecting the semantic core, the database of which includes more than 2 billion queries.

It is distinguished by fast operation, as well as useful features:

  • supports a large list of exception words (up to 10 thousand);
  • allows you to create and use lists of words directly when forming a sample;
  • offers to compile lists of words by multiplying several lists (Combinator);
  • removes duplicate keywords;
  • shows frequency (but only “worldwide”, without selecting a region);
  • analyzes domains (one or several, comparing the semantic cores of resources);
  • exports in .csv format.

The only significant drawback of the installed program is its large size (≈ 28 GB archived, ≈ 100 GB unpacked). But there is an alternative: collecting the semantic core online.

II. PAID programs and services:

Maxim Pastukhov's base

A Russian service that contains a database of more than 1.6 billion keywords with Yandex WordStat and Direct data, as well as an English database containing more than 600 million words. It works online and helps not only in creating a semantic core, but also in launching an advertising campaign in Yandex.Direct. Its main disadvantage can safely be called its high cost.

Key Collector

Perhaps the most popular and convenient tool for collecting the semantic core.

Key Collector:

  • collects keywords from the right and left columns of WordStat Yandex;
  • filters out unnecessary requests using the Stop Words option;
  • searches for duplicates and identifies seasonal keywords;
  • filters keys by frequency;
  • exports data in Excel table format;
  • finds pages relevant to the request;
  • collects statistics from: Google Analytics, AdWords, etc.

You can evaluate how Key Collector collects the semantic core for free in the demo version.

Rush Analytics

A service with which you can collect and cluster the semantic core.

In addition, Rush Analytics:

  • looks for hints in Youtube, Yandex and Google;
  • offers a convenient stop word filter;
  • checks indexing;
  • determines frequency;
  • checks site positions for desktops and mobiles;
  • generates technical specifications for texts, etc.

An excellent tool, but paid: there is no demo version, only a limited number of free checks.

Mutagen

The program collects key queries from the first 30 sites in the Yandex search engine. It shows the frequency per month and the competitiveness of each search query, and recommends using words with an index of up to 5 (for such keywords, quality content is enough for effective promotion).

Useful article: 8 types of texts for a website - write correctly

A paid program for collecting the semantic core, but there is a free limit of 10 checks per day (available after the first top-up of the balance, at least by 1 ruble). Open only to registered users.

Keyword Tool

A reliable service for creating a semantic core that:

  • in the free version, it collects more than 750 keys for each query, using suggestions from Google, YouTube, Bing, Amazon, eBay, App Store and Instagram;
  • in the paid version, it shows query frequency, competition, cost in AdWords and dynamics.

The program does not require registration.

In addition to the tools presented, there are many other services for collecting the semantic core of a site with detailed video reviews and examples. I settled on these because I think they are the most effective, simple and convenient.

Conclusion. If possible, it is advisable to purchase licenses for paid programs, since they have much wider functionality than free analogues. But for simply collecting a semantic core, free services are also quite suitable.

Clustering of the semantic core

A ready-made semantic core, as a rule, includes many keywords (for example, for the request “upholstered furniture,” services return several thousand words). What to do next with such a huge number of keywords?

The collected keys are needed:

I. Clear away “garbage”, duplicates and “dummies”

Queries with zero frequency or errors are simply deleted. To eliminate keys with unnecessary “tails,” I recommend using the Excel “Sort and Filter” function. What counts as garbage? For example, for a commercial site, words such as “download”, “free”, etc. will be superfluous. Duplicates can also be removed automatically in Excel using the “Remove Duplicates” option (see the examples below).

We remove keys with zero frequency:

Removing unnecessary “tails”:

Getting rid of duplicates:
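The three cleanup steps above (zero-frequency “dummies”, garbage “tails”, duplicates) can be sketched in plain Python. The stop-word list and the sample queries below are invented for illustration, not taken from any real project:

```python
# Illustrative "garbage" markers for a commercial site (an assumption).
STOP_WORDS = {"download", "free", "photo"}

def clean_core(queries):
    """queries: list of (phrase, exact_frequency) tuples."""
    seen = set()
    cleaned = []
    for phrase, freq in queries:
        norm = phrase.lower().strip()
        if freq == 0:                                   # drop "dummies"
            continue
        if any(w in norm.split() for w in STOP_WORDS):  # drop garbage "tails"
            continue
        if norm in seen:                                # drop duplicates
            continue
        seen.add(norm)
        cleaned.append((norm, freq))
    return cleaned

raw = [
    ("buy winter jacket", 720),
    ("buy winter jacket", 720),            # duplicate
    ("winter jacket free download", 90),   # garbage for a store
    ("winter jacket wholesale", 0),        # dummy: zero exact frequency
]
print(clean_core(raw))  # [('buy winter jacket', 720)]
```

In Excel the same result takes three passes (filter by frequency, filter by stop words, “Remove Duplicates”); a script does all three in one.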

II. Remove highly competitive queries

If you don’t want the “path” to the top to last for years, exclude highly competitive (VK) keys. With such keywords, it is not enough just to reach the first positions in the search results; what is more important and more difficult is to stay there.

An example of how to determine VK-keys through the keyword planner from Google (you can use the filter to leave only NK and SK):

III. Group the semantic core

You can do this in two ways:

1. PAID:

  • KeyAssort – a semantic core clusterer that helps create a site structure and find niche leaders. It works with data from the Yandex and Google search engines and groups 10 thousand queries in just a couple of minutes. You can evaluate the benefits of the service by downloading the demo version.

  • SEMparser performs automatic grouping of keys, creates a site structure, identifies leaders, generates technical specifications for copywriters, parses Yandex highlighting, and determines the geo-dependence and “commerciality” of queries, as well as the relevance of pages. In addition, the service checks how well a text matches the top results by SEO parameters. How it works: you collect the semantic core and save it in .xls or .xlsx format, create a new project in the service, select a region, upload the file with queries, and after a few seconds receive words sorted into semantic groups.

In addition to these services, I can also recommend Rush Analytics, which we have already met above, and Just-Magic.

Rush Analytics:

Just-Magic:

2. FREE:

  • Manually, using Excel and the Sort and Filter functions. To do this: set a filter, enter a query for the group (for example, “buy”, “price”), and highlight the resulting list of keys in color. Next, set up the “Custom sorting” option (in “Sort by color”), choosing “sort within the specified range”. The final touch is to add names to the groups.

Step 1

Step 2

Step 3

Step 4

An example of an ungrouped semantic core:

  • SEOQUICK – a free online program for automatic clustering of the semantic core. To “scatter” keys into groups, just upload a file with queries or add them manually and wait a minute. The tool works quickly, determining the frequency and type of each key, and allows you to delete unnecessary groups and export the document in Excel format.

  • Keyword Assistant. The service works online on the principle of an Excel table, i.e. you will have to distribute the keywords manually, but it takes much less time than working in Excel.

How to cluster the semantic core, and with what methods, is up to you. I believe that only manual grouping gives exactly the result you need. It takes long, but it is effective.
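For those who prefer a script to Excel filters, here is a minimal sketch of the same marker-word grouping described above. The group names and marker lists are assumptions for the example:

```python
# Hypothetical marker words per group; a real project would tune these.
GROUP_MARKERS = {
    "commercial": ["buy", "price", "order"],
    "reviews": ["review", "feedback"],
}

def group_queries(phrases):
    """Assign each phrase to the first group whose marker word it contains."""
    groups = {name: [] for name in GROUP_MARKERS}
    groups["other"] = []
    for phrase in phrases:
        words = phrase.lower().split()
        for name, markers in GROUP_MARKERS.items():
            if any(m in words for m in markers):
                groups[name].append(phrase)
                break
        else:
            groups["other"].append(phrase)   # no marker matched
    return groups

demo = ["buy sofa", "sofa price", "sofa review", "how to clean a sofa"]
print(group_queries(demo))
```

This mirrors the Excel workflow (filter by “buy”, “price”, highlight, sort by color) in a few lines, and scales better once the core grows past a few hundred phrases.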

After collecting and distributing the semantic core into sections, you can begin writing texts for the pages.

Read a related article with examples: How to correctly enter keywords into the text?

General rules for creating a semantic core

To summarize, here are tips that will help you assemble a correct semantic core:

The semantic core should be designed so that it meets the needs of as many potential clients as possible.

The semantics must exactly correspond to the theme of the web project, i.e. you should focus only on targeted queries.

It is important that the finished semantic core includes only a few high-frequency keys, the rest is filled with mid- and low-frequency ones.

The semantic core should be regularly expanded to increase natural traffic.

And the most important thing: everything on the site (from keys to structure) must be done “for people”!

Conclusion. A well-assembled semantic core gives a real chance to quickly promote and maintain a site in top positions in search results.

If you doubt that you can assemble a correct semantic core yourself, it is better to order one for the site from professionals. This will save energy and time, and bring more benefit.

It will also be interesting to know: How to place and speed up the indexing of an article? 5 secrets of success

That's all. I hope the material will be useful to you in your work. I would be grateful if you share your experience and leave comments. Thank you for your attention! Until new online meetings!

What is the semantic core of a site? The semantic core of a site is the set of keywords and phrases for which the resource is promoted in search engines and which indicate that the site belongs to a certain topic.

For successful promotion in search engines, keywords must be correctly grouped and distributed across the pages of the site, and contained in a certain form in the meta tags (title, description, keywords), as well as in the H1-H6 headings. At the same time, overspam should be avoided so as not to “fly off” to Baden-Baden (Yandex's over-optimization filter).

In this article we will try to look at the issue not only from a technical point of view, but also through the eyes of business owners and marketers.

What are the ways of collecting a semantic core?

  • Manual – possible for small sites (up to 1000 keywords).
  • Automatic – programs do not always correctly determine the context of a query, so problems may arise with distributing keywords across pages.
  • Semi-automatic – phrases and frequencies are collected automatically; distribution and refinement of phrases is done manually.

In our article we will consider a semi-automatic approach to creating a semantic core, as it is the most effective.

In addition, there are two typical cases when compiling a semantic core:

  • for a site with a ready-made structure;
  • for a new site.

The second option is preferable, since it makes it possible to create an ideal site structure for search engines.

What does the process of compiling a semantic core consist of?

Work on the formation of the semantic core is divided into the following stages:

  1. Identification of directions in which the site will be promoted.
  2. Collecting keywords, analyzing similar queries and search suggestions.
  3. Frequency parsing, filtering out “empty” requests.
  4. Clustering (grouping) of requests.
  5. Distribution of requests across site pages (creation of an ideal site structure).
  6. Recommendations for use.

The better you create the site's core (quality here means the breadth and depth of the semantics), the more powerful and reliable the flow of search traffic you can direct to the site, and the more customers you will attract.

How to create a semantic core of a website

So, let's look at each point in more detail with various examples.

At the first step, it is important to determine which products and services present on the site will be promoted in the search results of Yandex and Google.

Example No. 1. Let’s say the site offers two services: computer repair at home and training to work with Word/Excel at home. In this case, it was decided that training was no longer in demand, so there was no point in promoting it, and therefore in collecting semantics for it. Another important point: you need to collect not only queries containing “computer repair at home”, but also “laptop repair”, “PC repair” and others.

Example No. 2. The company is engaged in low-rise construction, but it builds only wooden houses. Accordingly, queries and semantics for areas such as “construction of houses from aerated concrete” or “construction of brick houses” need not be collected.

Collection of semantics

We will look at two main sources of keywords: Yandex and Google. We’ll tell you how to collect semantics for free and briefly review paid services that can speed up and automate this process.

In Yandex, key phrases are collected from the Yandex.Wordstat service, and in Google through query statistics in Google AdWords. As additional sources of semantics, if available, you can use data from Yandex Webmaster and Yandex Metrica, Google Webmaster and Google Analytics.

Collecting keywords from Yandex.Wordstat

Collecting queries from Wordstat can be considered free. To view the data of this service, you only need a Yandex account. So let's go to wordstat.yandex.ru and enter the keyword. Let's consider an example of collecting semantics for a car rental company website.

What do we see in this screenshot?

  1. Left column. Here is the basic query and its various variations with a “tail”. Next to each query is a number showing how many times, overall, that query has been used by various users.
  2. Right column. Queries similar to the main one, with their overall frequency. Here we see that a person who wants to rent a car may, in addition to the query “car rental”, use several synonymous phrasings of it. This is very important data that you need to pay attention to so as not to miss a single query.
  3. Regionality and history. By choosing one of the possible options, you can check the distribution of queries by region, the number of queries in a particular region or city, as well as the trend of change over time or with the change of season.
  4. Devices from which the query was made. By switching tabs, you can find out which devices people search from most often.

Check different variants of key phrases and record the received data in Excel tables or Google Sheets. For convenience, install the Yandex Wordstat Helper plugin: after installing it, plus signs appear next to the search phrases; clicking them copies the words, so you do not need to select and paste the frequency indicator manually.

Collecting keywords from Google AdWords

Unfortunately, Google does not have an open statistics service for search queries with frequency indicators, so here we need a workaround. And for this we need a working Google AdWords account.

We register an account in Google AdWords and top up the balance with the minimum possible amount - 300 rubles (on an account that is inactive in terms of budget, approximate data is displayed). After that, go to “Tools” - “Keyword Planner”.

A new page will open, where in the “Search for new keywords by phrase, site or category” tab you enter the keyword.

Scroll down, click “Get options” and see something like this.

  1. Top request and average number of requests per month. If the account is not paid, then you will see approximate data, that is, the average number of requests. When there are funds on the account, exact data will be shown, as well as the dynamics of changes in the frequency of the entered keyword.
  2. Keywords by relevance. This is the same as similar queries in Yandex Wordstat.
  3. Downloading data. This tool is convenient because the data obtained in it can be downloaded.

We looked at working with two main sources of statistics on search queries. Now let's move on to automating this process, because collecting semantics manually takes too much time.

Programs and services for collecting keywords

Key Collector

The program is installed on your computer. You connect the work accounts from which statistics will be collected, then create a new project and a folder for keywords.

Select “Batch collection of words from the left column of Yandex.Wordstat”, enter the queries for which we collect data.

An example is shown in the screenshot; in fact, for a more complete semantic core, you additionally need to collect all query variants with car brands and classes. For example, “bmw for rent”, “buy a toyota with option to buy”, “rent an SUV” and so on.

Slovoeb

A free analogue of the previous program. This can be considered both a plus (you don’t need to pay) and a minus (the program’s functionality is significantly reduced).

To collect keywords, the steps are the same.

Rush-analytics.ru

An online service. Its main advantage is that you don’t need to download or install anything: register and use it. The service is paid, but upon registration you get 200 coins in your account, which is enough to collect a small semantic core (up to 5000 queries) and parse frequency.

The downside is that semantics are collected only from Wordstat.

Checking the frequency of keywords and queries

Checking the exact frequency in Wordstat, we again notice a decrease in the number of queries. Let’s go further and try another word form of the same query.

We note that in the singular this query is searched by far fewer users, which means the initial query is a higher priority for us.

Such manipulations must be carried out with every word and phrase. Queries whose final frequency equals zero (measured using quotation marks and an exclamation mark) are eliminated: “0” means that no one enters such a query on its own; it occurs only as part of other queries. The point of compiling a semantic core is to select the queries that people actually use to search. All queries are then placed in an Excel table, grouped by meaning and distributed across the pages of the site.

It is simply impossible to do this manually, so there are many services on the Internet, paid and free, that do it automatically. Here are a few:

  • megaindex.com;
  • rush-analytics.ru;
  • tools.pixelplus.ru;
  • key-collector.ru.
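The filtering logic these services automate boils down to a simple rule: keep only phrases whose exact frequency (quotes plus exclamation mark) is above zero. A sketch with invented frequency numbers:

```python
def drop_dummies(stats):
    """stats: dict mapping phrase -> (broad_frequency, exact_frequency).
    Keep only phrases whose exact frequency is above zero."""
    return {p: exact for p, (broad, exact) in stats.items() if exact > 0}

# Illustrative numbers; real values would come from Wordstat or a service above.
stats = {
    "rent a car": (12000, 350),
    "rent a car cheap today now": (40, 0),  # occurs only as part of other queries
}
print(drop_dummies(stats))  # {'rent a car': 350}
```

Note how a phrase can have a high broad frequency but a zero exact one; those are precisely the “dummies” the article tells you to discard.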

Removing non-target requests

After sifting through the keywords, you should remove unnecessary ones. What search queries can be removed from the list?

  • requests with the names of competitors' companies (can be left in contextual advertising);
  • requests for goods or services that you do not sell;
  • requests that indicate a district or region in which you do not work.

Clustering (grouping) of requests for site pages

The essence of this stage is to combine queries that are similar in meaning into clusters, and then determine which pages they will be promoted on. How can you tell which queries to promote on one page and which on another?

1. By request type.

We already know that all queries in search engines are divided into several types, depending on the purpose of the search:

  • commercial (buy, sell, order) - promoted to landing pages, pages of product categories, product cards, pages with services, price lists;
  • informational (where, how, why, what for) - articles, forum topics, Q&A sections;
  • navigation (telephone, address, brand name) - page with contacts.

If you are in doubt about what type a query is, enter it into the search bar and analyze the results. For commercial queries there will be more pages offering services; for informational queries, more articles.

There are also geo-dependent and geo-independent queries. Most commercial queries are geo-dependent, since people tend to trust companies located in their city.
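The marker-word intuition behind this typing can be sketched as a toy classifier. The marker lists below are assumptions for illustration; real intent detection also requires inspecting the actual search results, as the article notes:

```python
# Hypothetical marker words per query type.
COMMERCIAL = {"buy", "sell", "order", "price"}
INFORMATIONAL = {"how", "why", "where", "what"}
NAVIGATIONAL = {"phone", "address", "contacts"}

def query_type(phrase):
    """Classify a query by the presence of marker words; 'fuzzy' if none match."""
    words = set(phrase.lower().split())
    if words & COMMERCIAL:
        return "commercial"
    if words & INFORMATIONAL:
        return "informational"
    if words & NAVIGATIONAL:
        return "navigational"
    return "fuzzy"

print(query_type("buy iphone x"))                # commercial
print(query_type("how to choose a smartphone"))  # informational
print(query_type("maine coon"))                  # fuzzy
```

Queries that come back as “fuzzy” are exactly the ones worth checking against the live search results by hand.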

2. Request logic.

  • “buy iphone x” and “iphone x price” need to be promoted on one page, since in both cases the same product, and more detailed information about it, is being sought;
  • “buy iphone” and “buy iphone x” need to be promoted on different pages, since the first is a general query (suitable for the product category where iPhones are listed), while in the second the user is looking for a specific product, so this query should be promoted to the product card;
  • “how to choose a good smartphone” - it is more logical to promote this query with a blog article with the appropriate title.

3. Analysis of search results. View the search results for the queries themselves. If you check which pages on different sites rank for the queries “construction of houses made of timber” and “construction of houses made of brick”, in 99% of cases these are different pages.

4. Automatic grouping using software and manual refinement.

The 1st and 2nd methods are excellent for compiling the semantic core of small sites, where a maximum of 2-3 thousand keywords are collected. For a large core (from 10,000 queries upward), the help of machines is needed. Here are several programs and services that perform clustering:

  • KeyAssistant - assistant.contentmonster.ru;
  • semparser.ru;
  • just-magic.org;
  • rush-analytics.ru;
  • tools.pixelplus.ru;
  • key-collector.ru.

After automatic clustering is completed, it is necessary to check the results of the program manually and, if errors are made, correct them.

Example: the program may put the following queries into one cluster: “vacation in Sochi 2018 hotel” and “vacation in Sochi 2018 hotel breeze”. In the first case, the user is looking at various hotel options for accommodation; in the second, a specific hotel.

To eliminate the occurrence of such inaccuracies, you need to manually check everything and, if errors are found, edit.
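The kind of over-merging shown in the hotel example can be caught even by a naive token-based clusterer: group phrases only when they contain exactly the same words, so an extra word (“breeze”) forces a separate cluster. A sketch, not how the services above actually work:

```python
def cluster_by_tokens(phrases):
    """Group phrases whose word sets are identical, ignoring word order."""
    clusters = {}
    for p in phrases:
        key = frozenset(p.lower().split())
        clusters.setdefault(key, []).append(p)
    return list(clusters.values())

demo = [
    "vacation in sochi 2018 hotel",
    "hotel vacation in sochi 2018",        # same words -> same cluster
    "vacation in sochi 2018 hotel breeze", # extra word -> separate cluster
]
print(cluster_by_tokens(demo))
```

Real clusterers merge more aggressively (synonyms, shared top-10 results), which is exactly why the manual check described above remains necessary.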

What to do next after compiling the semantic core?

Based on the collected semantic core, we then:

  1. We create the ideal structure (hierarchy) of the site from the point of view of search engines;
    or in agreement with the customer, we change the structure of the old website;
  2. we write technical assignments for copywriters to write text, taking into account the cluster of requests that will be promoted to this page;
    or we update old articles and texts on the site.

It looks something like this.

For each generated request cluster, we create a page on the site and determine its place in the site structure. The most popular queries are promoted to the top pages in the resource hierarchy, less popular ones are located below them.

And for each of these pages, we have already collected requests that we will promote on them. Next, we write technical specifications to copywriters to create text for these pages.

Technical specifications for a copywriter

As with the site structure, we will describe this stage in general terms. So, technical specifications for the text:

  • number of characters without spaces;
  • page title;
  • subheadings (if any);
  • a list of words (based on our core) that should be in the text;
  • uniqueness requirement (always require 100% uniqueness);
  • desired text style;
  • other requirements and wishes in the text.

Remember: don’t try to promote hundreds of queries on one page. Limit yourself to 5-10 plus the tail, otherwise you will be penalized for over-optimization and lose your places in the TOP for a long time.
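Parts of such a brief can be checked automatically before accepting a text from a copywriter. A sketch that verifies the character count without spaces and the presence of required words; the thresholds, the word list and the draft sentence are illustrative assumptions:

```python
def check_text(text, required_words, min_chars):
    """Check a draft against two brief items: length without spaces
    and presence of required keywords (as substrings, case-insensitive)."""
    chars = len(text.replace(" ", "").replace("\n", ""))
    low = text.lower()
    missing = [w for w in required_words if w.lower() not in low]
    return {"chars_ok": chars >= min_chars, "missing": missing}

draft = "Renting a car in Moscow is easy: choose a model and order online."
print(check_text(draft, ["rent", "order"], min_chars=50))
```

Uniqueness, of course, still has to be checked with a dedicated service; this only automates the mechanical items of the brief.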

Conclusion

Compiling the semantic core of a site is painstaking, hard work that needs especially close attention, because the further promotion of the site is based on it. Follow the simple instructions given in this article and take action.

  1. Choose the direction of promotion.
  2. Collect all possible queries from Yandex and Google (use special programs and services).
  3. Check the frequency of queries and get rid of dummies (those with a frequency of 0).
  4. Remove non-target requests - services and goods that you do not sell, requests mentioning competitors.
  5. Form query clusters and distribute them across pages.
  6. Create an ideal site structure and draw up technical specifications for the content of the site.

The semantic core is a set of keywords that search engine users enter into the search bar to find an answer to their query.

Collecting a semantic core is necessary in order to find all the keywords and phrases for which a company or website is ready to give a comprehensive answer and satisfy customer needs, and which users actually formulate when looking for an answer to their question. If we have the keyword, the user can get to our site; if not, he won't.

The volume of keywords in the semantic core depends on the goals, objectives, and characteristics of the business. The reach of the target audience, its conversion and cost depend on the volume and depth of the semantic core. Full semantics allows you to increase coverage and reduce competition.

Goals of collecting the semantic core

Searching and selecting keywords is one of the stages of search engine marketing, and it greatly influences further success. Based on the compiled semantic core, the following will be developed:

  • Website:
    • “Ideal” structure of a website, online store or blog. There are two approaches to this: SEO (search engine optimization) and PR (public relations). The SEO approach starts with collecting all key queries: having covered the maximum number of niche keywords, we develop the site structure taking into account real user queries and needs. With the PR approach, the site structure is first developed based on the information we want to convey to users; afterwards, keywords are collected and distributed across that structure. Which strategy to choose depends on the goals: if you need to convince the audience of something or convey a position, choose the PR approach; if you need to get as much traffic as possible (for example, for an information site or online store), choose the SEO approach. In general, this is the foundation of future promotion: a well-designed site structure lets users sort through information conveniently (a positive user experience) and lets search engines index it. The criteria for accepting the future structure of the site are the goals and expectations of users and the results of an analysis of successful competitors.
  • Lead generation strategy:
    • SEO strategy. Having identified the search queries with the least competition and the greatest potential traffic that they can bring, a content strategy is developed to fill and optimize the site.
    • contextual advertising. When conducting contextual advertising campaigns in Yandex Direct, Google Ads, etc. the maximum number of relevant keywords is collected for which we are able and ready to satisfy the demand.
    • map of information needs (content plan). Having grouped keywords according to user intents (intentions), technical specifications are drawn up and given to copywriters for writing articles.

Study of the search process in search engines

Psychology of Internet Search

People don't think in words. Words are symbols through which we convey our thoughts. Everyone has their own mechanism for turning thoughts into words; each person formulates questions in their own way. A person accompanies every query entered into a search engine's search bar with certain thoughts and expectations.

By understanding how people search online, you can tailor your marketing efforts to their interests. Knowing how the search process works, we select appropriate keywords and optimize the site, setting up contextual advertising.

After the search engine user clicks the “Find” button, the results that appear should meet his expectations. In other words, the search results (organic listings and contextual advertising) should help solve the user's question. Therefore, the marketer's task is to set up the ad and the search snippet so that they are relevant to the search query. They should:

  1. reflect the search query;
  2. take into account the stage of the buying cycle.

In other words, the words shown in snippets and ads lay the foundation for the user's expectations of your site. Therefore, the landing page he reaches by clicking the link must meet those expectations. By meeting them, we increase the likelihood of a positive outcome. Advertising should lead the user to a place where he will immediately receive an answer.

Search categories:

  1. directly formulated (metal lathe, dentist);
  2. description of the problem (how to sharpen the shaft, toothache);
  3. symptoms of the problem (the feed box of the lathe does not work, a tooth has crumbled);
  4. description of the incident (crunching sound during turning on a TV-16 lathe);
  5. name of the product, article, brand, manufacturer.

If you carefully study the keywords, you can get to the root of the problem: while machining on the lathe, a gear in the feed box broke, so we can offer to manufacture one or suggest a new machine. Since the person did not treat the diseased tooth and it crumbled due to caries, we, as a dental clinic, will offer to install an implant.

Classification and types of search queries

By search type:

  • informational – queries to find information, for example, “speed of light”, “how to make a fishing rod with your own hands”, “why the earth is round”, etc.;
  • navigational – queries by which users search for an organization, brand, person, etc. For example, “Coca-cola”, “restaurant “Pyatkin”, “Lev Tolstoy”;
  • transactional – queries entered by users with the intention of performing some targeted action. For example, “buy Samsung Galaxy S6 phone”, “download the online book ‘Web Analytics in Practice’”;
  • fuzzy queries – all queries that cannot be unambiguously attributed to one of the types described above, i.e. it is impossible to define clearly what exactly the user is looking for. For example, “Maine Coon”: it is not clear what the user wants - to find out what kind of cat breed it is, to look for where to buy one, or perhaps something else.

By geodependence:

  • geo-dependent – ​​requests that depend on the user’s location. For example, “grocery stores”, “tire service center”.
  • geo-independent - do not depend on a person’s location. For example, “recipe for cutlets”, “how to install an alarm”.

By naturalness:

  • natural – queries entered by users in natural human language: “prices for Samsung laptops”, “characteristics of lever scissors”;
  • telegraphic – queries entered in “telegraphic language”: “Samsung laptop prices”, “lever scissors specifications”.

By seasonality:

  • seasonal – time-sensitive keywords. Such queries are “winter tires”, “New Year's fireworks”, “Easter eggs”, etc.
  • non-seasonal – not sensitive to time; they are popular at any time of the year. Examples of such queries: “wrist watch”, “how to cook pizza”, “install Windows”.

By frequency:

  • HF – high frequency requests.
  • MF – mid-frequency requests.
  • LF - low frequency queries.
  • “Long tail” – micro-frequency search queries, usually consisting of 4 or more words and having a frequency of 1-3 per month. Together, such queries add up to tangible traffic, with the least competition in the search results and almost no special promotion effort required.

It is impossible to state specifically which number of queries corresponds to a high-frequency query and which to a low-frequency one, since these values vary greatly from niche to niche. In one niche, 1000 queries per month may correspond to a low-frequency query, while in another it would be high-frequency.

Keyword frequency values ​​are conditional and are intended for ranking by popularity.
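Since the HF/MF/LF thresholds are relative to the niche, one practical approach is to rank keywords within the collected core itself rather than by absolute numbers. The cut-offs below (top 10% = HF, next 30% = MF) are an assumption for illustration, not a standard:

```python
def frequency_tiers(freqs):
    """freqs: dict phrase -> monthly frequency.
    Label the top ~10% of phrases HF, the next ~30% MF, the rest LF."""
    ranked = sorted(freqs.items(), key=lambda kv: kv[1], reverse=True)
    n = len(ranked)
    hf_count = max(1, n // 10)
    mf_count = max(1, 3 * n // 10)
    tiers = {}
    for i, (phrase, _) in enumerate(ranked):
        if i < hf_count:
            tiers[phrase] = "HF"
        elif i < hf_count + mf_count:
            tiers[phrase] = "MF"
        else:
            tiers[phrase] = "LF"
    return tiers

# Invented frequencies for a tiny demo core.
demo = {"rent a car": 1000, "rent an suv": 300, "rent a car with driver": 50,
        "rent a car cheap moscow": 10, "rent a red cabrio for a day": 2}
print(frequency_tiers(demo))
```

Ranking within the core captures the article's point: the same absolute number can be HF in one niche and LF in another.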

By competitiveness:

  • VK – highly competitive queries.
  • SK – medium-competition queries.
  • NK – low-competition queries.

This classification allows you to create a list of priority key queries for search engine promotion and, in addition, to reduce the cost per click in contextual advertising campaigns.

Common goals of the user, webmaster and search engine

In the process of searching for information through a search engine, 3 parties are involved: the search engine, the user and the web resource. And each side has its own goals: the user needs to find an answer to his query, and the search engine and web resource need to make money from this.

If webmasters begin to manipulate the search engine's ranking without giving users the answers they need, everyone loses: the user does not get an answer to his query and goes to look on another site or in another search engine.

Therefore, the users' needs come first, because without them neither the search engine nor the web resource can work. By satisfying the interests of search engine users first of all, we contribute to everyone's earnings: the search engine earns on contextual advertising, the web resource on selling goods or services to users or advertisers. Everyone wins. Link your goals to your users' goals; then the probability of a positive outcome increases sharply.

Keyword Research

As we have already established, keywords are thoughts expressed in verbal form. Our goal is to select keywords that reflect consumer thoughts and a demand we can satisfy. If we have the keyword, the user will see our message; if not, he will not see it.

Some keywords generate high traffic, others little. Some yield high conversions, others bring low-quality traffic.

Each keyword constitutes a separate sub-market with its own clientele. Behind each key phrase lies some need, desire, question, or offer that a person may not even be aware of.

By determining which stage of the purchasing cycle the keyword belongs to, we will understand when and why the user is looking for it, and therefore, we will provide information that is relevant to him and meets his expectations.

Before you begin your research, ask yourself the following questions:

  1. What keywords should we use to reach our target audience?
  2. What key phrases do the customer segments we are interested in use when searching for our products?
  3. What is going on in the user's mind when he types this query?
  4. What stage of the buying cycle is he in when using this key phrase?

Keyword Research Objectives

  1. Gain insight into the existing “ecosystem” and develop a strategy for natural and paid search.
  2. Identify the needs of potential clients and develop appropriate responses to them.

Anatomy of requests

Key phrases consist of 3 elements:

[body]+[qualifier]+[tail],

where the body (also called the “mask”) is the basis of the query, from which alone it is impossible to understand the user's intent; the qualifier determines user intent and classifies a query as transactional, informational, or navigational; the tail merely details the intent or need.

For example: buy a lathe, 6P12 milling machine specifications, buy a bimetallic band saw in Moscow.

Knowledge of the anatomy of search queries allows you to collect all the masks when working out the semantics, as well as correctly distribute the collected keywords according to the purchasing cycle when developing a paid and natural search strategy.
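The body/qualifier/tail breakdown above can be sketched in code. A minimal illustration, assuming hypothetical qualifier word lists (real lists would be built per niche):

```python
# Illustrative qualifier lists -- assumptions for this sketch, not a real dictionary.
TRANSACTIONAL = {"buy", "order", "price"}
INFORMATIONAL = {"specifications", "review", "how"}

def classify_query(query: str, body: str) -> dict:
    """Split a query around a known body (mask) and guess its intent."""
    words = query.lower().split()
    qualifier = next((w for w in words if w in TRANSACTIONAL | INFORMATIONAL), None)
    if qualifier in TRANSACTIONAL:
        intent = "transactional"
    elif qualifier in INFORMATIONAL:
        intent = "informational"
    else:
        intent = "unclear"
    body_words = set(body.lower().split())
    # The tail is whatever detail remains after removing the body and the qualifier.
    tail = [w for w in words if w != qualifier and w not in body_words]
    return {"body": body, "qualifier": qualifier, "tail": tail, "intent": intent}
```

For example, `classify_query("buy a lathe", "lathe")` marks the query as transactional, while `classify_query("6P12 milling machine specifications", "milling machine")` marks it as informational with the tail detail "6p12".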

Keyword Segmentation

When searching for masks and working through an already collected semantic core, it becomes necessary to segment keywords for more convenient subsequent work. Having segmented the keys, we understand how people search; we can therefore expand them with additional key queries, assess the likelihood of sales, and work according to the strategy. There are no strict segmentation rules, because semantics can vary greatly from niche to niche.

Here I will just give some examples based on what criteria semanticists segment cores:

  • by type of keyword:
    • direct demand - they are looking for what we sell, for example, a milling machine;
    • indirect demand - they are looking for a milling machine, and we sell cutters for it;
    • situational demand - the neighbors flooded the apartment, so repairs are needed;
    • other - navigational and vital queries.
  • by search objects:
    • company, object (for example, repair team);
    • product (repair of milling machines);
    • production, sales (wholesale/retail) (production of spare parts for repairs according to drawings);
    • action on the object (commissioning work);
    • specialist (design engineer);
    • part of the object, sub-service (development of design documentation for spare parts for a milling machine).
  • by expected order value.
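One simple way to automate such segmentation is marker-word matching. A rough sketch with made-up marker lists (any real rules would come from studying the niche):

```python
# Marker substrings per segment -- purely illustrative assumptions.
SEGMENT_RULES = {
    "direct demand": ["milling machine"],
    "indirect demand": ["cutter", "spindle"],
    "navigational": ["official site", "login"],
}

def segment_keyword(keyword: str) -> str:
    """Assign a keyword to the first segment whose marker it contains."""
    kw = keyword.lower()
    for segment, markers in SEGMENT_RULES.items():
        if any(marker in kw for marker in markers):
            return segment
    return "other"
```

Anything that matches no rule lands in "other" for manual review, which in practice is where new segments are discovered.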

Long tail strategy

Long tail, or the “long tail” concept, was popularized in 2004 by Wired magazine editor Chris Anderson. Its essence is that, thanks to a wide assortment, a company sells rare goods for a total amount greater than its bestsellers.

The concept is easy to see with the example of a bookshelf. A store owner with limited shelf space will try to stock only the most popular books. Once the fashion for a book has passed, its place is taken by another one that is gaining popularity.

In online bookstores the shelf is not limited; the catalog contains every available book. Research has shown that, thanks to this wide range, the sales volume of “unpopular” books exceeds the sales volume of bestsellers. The concept also works in sales of music, films, medicines, etc., and of course when compiling a semantic core.

As with the book example, long-tail key phrases can bring in more total traffic than high-frequency queries.

In practice, long-tail phrases have the highest conversion rate: people using them are most likely already at the purchasing-decision stage.
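The long-tail effect is easy to check numerically on a collected core: sum the volume of sub-threshold queries and compare it with the total. A sketch (the 1,000-per-month threshold is an arbitrary assumption; as noted above, it varies by niche):

```python
def long_tail_share(frequencies: list, head_threshold: int = 1000) -> float:
    """Fraction of total monthly search volume coming from queries
    whose frequency is below the head threshold."""
    total = sum(frequencies)
    tail = sum(f for f in frequencies if f < head_threshold)
    return tail / total if total else 0.0
```

With one head query at 9,000 searches per month and two hundred tail queries at 60 each, the tail already carries more than half of the total volume.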

New keywords

If you are an opinion leader with your own audience that you can influence, try creating new key search phrases around which your content will be built. If the audience picks them up, you will be the first in the search results for them.

Segmentation and sales funnel

Customer segmentation and role principle

Before collecting keywords, a company needs to understand its target audience, its segments, and its customer avatars. To make this clearer, an example: a company sells vibrating plate compactors. Its target audience will be construction companies, and the main segments will be companies doing road work, laying utilities underground, etc. Avatars are the people who make purchasing decisions and search for goods and services.

We will not dwell on this in detail here.

The role principle means paying attention to the types of people who might be looking for your product: an individual, a procurement specialist, an engineer, or a CEO. People in different roles may use different keywords. Knowing your client’s avatar and his behavioral characteristics, you select keywords with the relevant roles in mind.

For example, if your company's customer is an engineer, his search queries may include specialized technical terms.

Before we begin, note that each business has its own specific sales funnel; only the general concept is discussed here. It consists of two parts: advocacy and loyalty.

Sales funnel stages:

  1. Awareness — inform people about our product everywhere so they know it exists. This stage involves keywords of a general nature.
  2. Interest — encourage the consumer to think about how our product will make his life better. At this stage, the product's advantages and benefits are communicated. The main goal is to create desire for the product.
  3. Study — the consumer looks for information to make an informed decision: he picks up the industry's professional jargon, and brands and the names of specialized services appear in his search queries. The main goal is to convey the product's benefits and capabilities in as much detail as possible.
  4. Comparison of analogues — the consumer compares similar products. Keywords become specific, indicating that the consumer has reached a certain level of knowledge.
  5. Purchase — before deciding to buy, the buyer studies prices, guarantees, delivery costs, terms of service, returns, etc. Keywords: low-frequency queries and queries with transactional (“selling”) modifiers.

Keyword Research Tools

Kernel extension algorithm, collection of nested queries

After all the masks have been collected, we move on to collecting key queries in depth.

You need to collect nested queries for:

  • writing ads relevant to the keywords;
  • setting the required bid for a specific keyword;
  • putting a relevant link in the ad that leads to the required page.

Automated tools for collecting nested queries include software installed on a PC, online services, and browser extensions. There are quite a lot of them, but we use the most popular one - Key Collector, a desktop program that parses keywords and their frequencies and allows you to carry out all the activities needed to collect the semantic core.

It is advisable to parse each semantic group separately.

The expansion algorithm will be as follows:
  1. parsing masks in Yandex Wordstat;
  2. parsing masks in Google AdWords;
  3. parsing masks in the Bukvariks database;
  4. parsing masks in the Keys.so database;
  5. downloading keywords from Yandex Metrica and Google Analytics;
  6. cleaning and collecting keyword frequencies;
  7. batch collection of search tips;
  8. batch collection of similar search queries from search results;
  9. cleaning and collecting frequencies.
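Steps 6 and 9 (“cleaning and collecting frequencies”) typically mean normalizing phrases, merging duplicates, and dropping junk. A minimal sketch of such cleaning, where the frequency threshold and the negative-word list are placeholder assumptions:

```python
import re

def clean_core(raw: dict, negatives: set, min_freq: int = 5) -> dict:
    """Normalize phrases, merge duplicates (keeping the max frequency),
    and drop phrases containing negative words or with too low a frequency."""
    core = {}
    for phrase, freq in raw.items():
        # Collapse whitespace and lowercase so duplicates merge.
        norm = re.sub(r"\s+", " ", phrase.strip().lower())
        if freq < min_freq or set(norm.split()) & negatives:
            continue
        core[norm] = max(core.get(norm, 0), freq)
    return core
```

For instance, `{"Buy  Lathe": 120, "buy lathe": 90, "lathe free": 300, "rare lathe": 2}` cleaned with the negative word "free" collapses to the single entry `{"buy lathe": 120}`.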

Using the Yandex Wordstat and Google AdWords tools, we will get the main key phrases with their frequency and popularity in the search engines. Bukvariks, Keys.so, keyword exports from Yandex Metrica and Google Analytics, search suggestions, and similar search queries will also give us the “tail” words users type.

Adaptation of the semantic core for contextual advertising

The preparation algorithm looks like this:

  1. select selling keywords;
  2. segment the keywords;
  3. work through negative keywords and negative phrases;
  4. apply operators.

Keywords for YAN (the Yandex Advertising Network) and display networks are selected on a slightly different principle than keywords for search.

Selecting selling keywords

From the existing list of key phrases, we need to understand what a person wants (his need) and what answer he expects to his question. Our task is to answer, in the context of search, those questions that are of interest to us, i.e. to choose the keywords most likely to lead to conversions.

In addition, competent keyword selection reduces non-targeted impressions, which increases CTR and lowers the cost per click.

There are situations when the meaning of a query is unclear. To understand what most people mean in such cases, enter the query into the search engine and look at the results. Thanks to machine learning and other search personalization technologies, Yandex and Google already know what people want for each specific query; all that remains is to analyze the results and make the right decision. A second way is to view the nested forms of the word in Yandex Wordstat; a third is to guess the meaning, but mark the query for further investigation.

The completeness of the keyword list is one of the important factors in the success of an advertising campaign, so the future result depends on the quality of keyword development. In contextual advertising, strive not for the volume of the semantic core but for its high-quality elaboration.

Depending on your goals, you can use a strategy in the future: identify the most conversion queries, test them, and then scale the advertising campaign.

Keyword segmentation

It is impossible to define universal segments, because everything varies from niche to niche. Most commerce sites can be segmented by stages of the buying cycle, or you can identify segments yourself by studying your core.

The main task of segmentation is to make the campaign easy to manage later: set bids and budgets, quickly find an ad and start or stop its display, etc.

Working on negative words and phrases

Back at the stage of collecting the semantic core, you gathered negative words and phrases. All that remains is to adapt them to your advertising campaign and perform cross-minusing (mutually excluding overlapping phrases).
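Cross-minusing can be sketched as follows: whenever one phrase is a subset of a more specific phrase in the same campaign, the extra words of the longer phrase become negatives for the shorter one, so the two phrases stop competing for the same impressions. A minimal illustration:

```python
def cross_minus(phrases: list) -> dict:
    """For each phrase, collect negative words so that more specific
    phrases in the same list do not compete with it for impressions."""
    split = {p: set(p.split()) for p in phrases}
    result = {}
    for p, words in split.items():
        negs = set()
        for q, qwords in split.items():
            if p != q and words < qwords:  # q is a strictly more specific phrase
                negs |= qwords - words
        result[p] = sorted(negs)
    return result
```

For `["buy lathe", "buy lathe moscow"]`, the word "moscow" becomes a negative for "buy lathe", while the longer phrase needs no negatives.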

Placing operators

Operators are applied to high-frequency queries to avoid unfair competition, to save budget in highly competitive topics, and to formulate the phrase more precisely. Operators can be combined with each other.

Yandex Direct operators

+word — fixes stop words (auxiliary parts of speech: prepositions, conjunctions, particles, pronouns, numerals).

!word — fixes the word form.

[word1 word2] — fixes the word order.

-word — excludes a word.

negative phrase — excludes an entire phrase.
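Applying these operators to an exported keyword list is easy to script. A small helper as a sketch; it covers only the "!" and "[...]" operators described above:

```python
def exact_phrase(phrase: str, fix_order: bool = False) -> str:
    """Prefix every word with '!' to fix its form; optionally wrap
    the phrase in [...] to fix the word order as well."""
    fixed = " ".join("!" + word for word in phrase.split())
    return "[" + fixed + "]" if fix_order else fixed
```

So `exact_phrase("buy lathe")` yields `!buy !lathe`, and with `fix_order=True` it yields `[!buy !lathe]`, combining the two operators.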

Google AdWords Operators: Keyword Match Types

Broad match — used by default; the ad is shown for synonyms, typos, similar phrases, and queries with the same intent. For example, for the query “offices in Moscow” an ad may appear for the keyword “real estate Moscow”.

Broad match modifier — ads appear for queries containing the words marked with “+” and their close variants (but not synonyms), in any order. For example, +car +Hyundai +Tucson.

Phrase match — the ad appears for queries that contain the keyword phrase or close variants of it. Sensitive to word order. For example, the query “prices Benq monitor” can trigger an ad for the keyword “Benq monitor”.

Exact match — the ad appears for queries that exactly match the keyword or its close variants. For example, a search for “tire service for trucks” may display an ad for the keyword phrase [truck tire service].

Negative keywords — ads are shown only for queries that do not contain the negative keywords.

Adaptation of the semantic core for search engine optimization (SEO)

We need the core to develop a clear, logical site structure and to cover the topic completely (describing our topic with the keywords characteristic of it).

The algorithm for preparing a CS for SEO is as follows:

  1. remove informational queries from the core (keep only commercial ones);
  2. carry out clustering of the remaining queries;
  3. prepare a relevance map based on the resulting clusters.

Clustering of the semantic core

Clustering — combining queries into groups based on user intent; in other words, combining into one group the different queries with which a person is looking for the same thing. Queries are distributed into groups so that they can be promoted on the same page (united by user intent).

For example, you cannot promote informational and commercial queries on the same page; moreover, it is recommended to promote such queries on different sites.

For example: special clothing - workwear, zig machine - zigovka - beading machine, circular saw - sawing machine.

Clustering can be:

  • manual - grouping is done by hand in a specialized program or in Excel. The person doing the grouping must understand the topic well, otherwise nothing meaningful will come of it;
  • automatic - grouping is done automatically based on search results. This method speeds up the grouping of a semantic core consisting of a huge number of key phrases. Accuracy is high (much higher than manual grouping done by someone who does not understand the topic). The main advantage of this method is that only queries of the same type are grouped together, i.e. commercial and informational ones will not end up in one group (the situation is well illustrated by the queries “smartphone” and “smartphones”: the first is informational and geo-independent, the second is commercial and geo-dependent, while “laptop” and “laptops” are both commercial and geo-dependent);
  • semi-automatic - clusters are first created automatically and then regrouped manually. This type of clustering combines both the pros and cons of the first two.
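Automatic clustering based on search results is usually implemented as grouping by SERP overlap. A greedy sketch of the “hard” variant, where the threshold of 3 shared URLs is an assumption (real services tune this value):

```python
def hard_clusters(serps: dict, min_shared: int = 3) -> list:
    """Greedy hard clustering: a query joins a group only if its SERP
    shares at least `min_shared` URLs with EVERY query already in the group.
    `serps` maps each query to the set of its top search-result URLs."""
    remaining = list(serps)
    clusters = []
    while remaining:
        seed = remaining.pop(0)
        group = {seed}
        for q in remaining[:]:
            if all(len(serps[q] & serps[m]) >= min_shared for m in group):
                group.add(q)
                remaining.remove(q)
        clusters.append(group)
    return clusters
```

Queries whose results barely intersect, such as "buy lathe" and "lathe repair", end up on different pages, which is exactly the behavior hard clustering is chosen for on commercial sites.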

By type, clustering of the semantic core can be:

  • soft — a query joins a group if its search results overlap with the results of the group's main (marker) query;
  • middle — queries in a group must overlap with each other pairwise;
  • hard — queries are grouped only if the search results of all queries in the group share common pages.

For commercial sites, hard clustering is used in most cases. In special cases you can use middle.

Relevance map

A relevance map is necessary for planning pages and working out the structure of the site. The main elements are:

  • name of the tree element (category, tag, page, etc.);
  • cluster name;
  • cluster keywords;
  • exact frequency (“!key !word”);
  • Title;
  • Description;
  • previous Title;
  • previous H1;
  • previous Description.
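A relevance map with these elements is typically kept as a spreadsheet. A sketch that renders such rows to CSV text; the column names are simply my translation of the list above:

```python
import csv
import io

COLUMNS = ["tree_element", "cluster", "keywords", "exact_frequency",
           "title", "description", "prev_title", "prev_h1", "prev_description"]

def relevance_map_csv(rows: list) -> str:
    """Render relevance-map rows (one dict per cluster/page) as CSV text,
    filling missing columns with empty strings."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=COLUMNS)
    writer.writeheader()
    for row in rows:
        writer.writerow({c: row.get(c, "") for c in COLUMNS})
    return buf.getvalue()
```

The resulting CSV opens directly in Excel or Google Sheets, where the map is usually maintained alongside a mind map of the site structure.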

To visualize the structure of a website, mind maps are often used.

Adaptation of the semantic core for information sites

Informational queries, viewed from the commercial side, mostly relate to the upper stages of the sales funnel: awareness, interest, study, comparison of analogues. That is, such keywords do not convert directly into sales, but through them we can inform the buyer and influence his decision-making.

If we are talking about creating websites to earn from advertising, you need to specialize in a certain topic and cover it completely. The site should answer every question on the topic through competent elaboration of the full semantics.

Algorithm for preparing CS for information sites:

  1. remove commercial queries from the core (keep only informational ones);
  2. carry out clustering of the remaining queries;
  3. prepare a relevance map based on the resulting clusters.

As you can see, the algorithm is fundamentally no different from the SEO adaptation. The main nuance is the type of clustering: for information sites, choose soft or middle clustering.

Semantic core to order

The cost of a semantic core is calculated at a rate of 3-7 rubles per keyword. Thus, a clustered semantic core of 10,000 keywords for SEO or an information site will cost on average 50,000 rubles. The price will increase if you also need to segment keywords for contextual advertising, and it strongly depends on the quality of the work. If you are offered a price below these rates, you should at least wonder why: properly working out just the masks can take up to 16 hours. If you skimp on collecting the semantic core (and fail to cover the topic's full scope and depth), you will then overpay for contextual advertising (bidding on the most competitive queries) and miss customers from organic search results.

Here is the simplest example of how the quality of semantic core elaboration matters: for the query “creasing machine” you will compete in the search results with 36 competitors, for “creasing machines” with 27 competitors, and for a rarer synonymous phrasing with only 8.

Request "Zigovochny machine"
