Natural Tech has just released AdGenerator https://natural.do/ad-generator-demo , an application that offers ready-to-use, Google Ads-style ads for thousands of commercial products and services. Through state-of-the-art NLG techniques, it generates both a catchy slogan and a brief, accurate description for any type of commercial product or service. AdGenerator is currently available only for Portuguese.
Each Google Ad has two main parts that attract the audience's attention, known as the "slogan" and the "description". Our system offers text to fill each one and thus produces complete ads that follow Google Ads guidelines.
In essence, the tool combines an algorithm based on GPT-2, trained for Brazilian Portuguese and adjusted for vertical advertising, with the capabilities of the SentiLecto PT NLG algorithm, also developed by Natural Tech. The process is simple: a series of inputs (company name, product characteristics, etc.) feeds the algorithms, which generate relevant texts that serve as the "slogan" and the "description" of an advertisement.
Specifically, the fine-tuned GPT-2 models are responsible for building the slogans of a new advertisement, while SentiLecto rules develop its description. Additionally, the application allows the user to make some manual adjustments to achieve the desired final result.
The Brazilian e-commerce market is segmented into large hyperdomains of products, such as Food and Beverages, Clothing and Accessories, etc. Of course, each of them comprises more specific domains. AdGenerator has a GPT-2 model optimized for each hyperdomain, driving convergence toward a more grammatical and pertinent NLG device that is still creative enough to come up with catchy original slogans.
1.1. GPT-2 as a slogan generator
As outlined above, AdGenerator has a GPT-2 model optimized for each hyperdomain of the Brazilian consumer market; that is, an independent training set was used for each group.
The training datasets cover different product domains of the Brazilian market (each corpus represents a hyperdomain) and contain phrases associated with them in Portuguese, collected from the internet and curated. Their semantics are suitable for the models to generate new creative texts that attract attention.
The language models were evaluated during training with perplexity and cross-entropy measures, which estimate how confident the model is about its predictions.
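Concretely, cross-entropy is the average negative log-probability a model assigns to the true next tokens, and perplexity is its exponential (roughly, the model's effective branching factor per token). A minimal sketch with made-up token probabilities:

```python
import math

def cross_entropy(token_probs):
    """Average negative log-likelihood (in nats) that the model
    assigned to the actual next tokens of a held-out sequence."""
    return -sum(math.log(p) for p in token_probs) / len(token_probs)

def perplexity(token_probs):
    """Exponential of the cross-entropy: lower means the model is
    more confident about (less surprised by) the true tokens."""
    return math.exp(cross_entropy(token_probs))

# Hypothetical probabilities a model assigned to each true next token.
probs = [0.5, 0.25, 0.125, 0.125]
print(round(perplexity(probs), 3))  # geometric mean of 1/p = 512**0.25 ≈ 4.757
```

A model that guessed uniformly over four tokens would score a perplexity of exactly 4; the skewed probabilities above do slightly worse on the rarer tokens.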
1.2. Ad Generator general operation
To build an ad, Ad Generator initially asks the user for information about their company and the product they need to promote. Suppose we have a wormwood-liquor sales venture with the following data (in Portuguese):
● Company name (required): Fada Verde.
● Product description (optional): -
● Product segment or domain (required): absinto.
● Company url (optional): www.fadaverde.com.br
● Company city (optional): Rio de Janeiro.
Commercial data requested by Ad Generator https://natural.do/ad-generator-demo
According to Wikipedia (https://es.wikipedia.org/wiki/Absenta), absinto (translated as "wormwood" or "absinthe") is an alcoholic beverage with a slight anise flavor and a bitter background of complex tones due to the herbs it contains, mainly Artemisia absinthium. Its alcohol concentration can range from 55% to 75%, so it is a drink that should be consumed in moderation. In the past it was thought to produce hallucinations, so it earned the nickname "Fada Verde" (translated as "Green Fairy") and was even banned for decades. — "Liquid alchemy that changes ideas," as the writer Ernest Hemingway said, it was already impressive for its aesthetics: as icy water was poured over its bright green surface, bohemian artists of the time watched in amazement as its color changed as if by magic. (https://latam.historyplay.tv/noticias/el-hada-verde-historia-de-la-bebida-mas-poderosa-del-mundo)
When the required fields are completed, the generator runs and the application recommends to the user 20 phrases that can serve as the slogan of the advertisement.
Slogans generated by the application with GPT-2
Phrases such as "Uma questão de cuidado" (translated to "A matter of care") and "Experiência rara no paladar" (translated to "Rare palate experience") are observed in the results. It is important to understand that transformer models must "translate" the texts they are fed into word embeddings; such representations carry some context (http://ai.stanford.edu/blog/contextual). That means slogan generation involves a process in which the text entered into the model ("absinto licor & bebidas espirituosas") is related to the information (web texts) that was used to pre-train and fine-tune it. For example, the phrase "Uma questão de cuidado" can be interpreted in the context that wormwood is a drink with a high alcohol level whose consumption has generated controversy in the past.
We select "Uma questão de cuidado" and move on. The tool then takes us to another window where new text is generated for the ad description. This time the SentiLecto PT NLG algorithm is responsible for producing the sentences of the description. Besides, the technology marks keywords in the generated text and gives the user the ability to replace them with others suited to the commercial context.
Application-generated descriptions with SentiLecto PT NLG
In our example, the phrase "…O absinto mais exclusivo de Rio de Janeiro…" (translated "…The most exclusive wormwood in Rio de Janeiro…") has been formed, where the word "exclusivo" (translated "exclusive") can be replaced with "divertido" ("fun"), "saboroso" ("tasty"), etc.
Adjusting the ad description
Finally, we preview the ad in the tool and, if necessary, we can make manual changes to the description and the slogan.
Previewing the ad created by AdGenerator
1.3. GPT-2 Fine-tuning as a slogan generator
GPT-2 is a language model trained on 40 GB of web text. When fed a word, a subpart of one, or multiple words (a phrase), it can predict the probability distribution of the next word given the terms that preceded it; we know those terms as tokens. The prediction can then be repeated multiple times until long spans of coherent text are generated.
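As a toy illustration of this autoregressive loop, the sketch below samples one token at a time from a hand-built next-token table. The table, vocabulary, and `<s>`/`<end>` markers are invented for the example; the real GPT-2 recomputes its distribution from the entire preceding context at every step, over a vocabulary of roughly 50,000 subword tokens.

```python
import random

# Invented next-token distributions standing in for GPT-2's softmax
# output; the real model conditions on the full context, not just
# the last token.
NEXT = {
    "<s>":       {"absinto": 0.6, "licor": 0.4},
    "absinto":   {"licor": 0.7, "artesanal": 0.3},
    "licor":     {"artesanal": 0.5, "premium": 0.5},
    "artesanal": {"<end>": 1.0},
    "premium":   {"<end>": 1.0},
}

def generate(max_tokens=10, seed=0):
    """Repeat the next-token prediction until an end token
    (or the length limit) is reached."""
    rng = random.Random(seed)
    tokens, current = [], "<s>"
    for _ in range(max_tokens):
        dist = NEXT[current]
        current = rng.choices(list(dist), weights=list(dist.values()))[0]
        if current == "<end>":
            break
        tokens.append(current)
    return tokens

print(generate())
```

Each iteration appends one sampled token and feeds it back in as the new conditioning state, which is exactly the loop the paragraph above describes.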
Let’s take the domain (“absinto”) used in our demonstration concatenated with its first parent domain (“licor & bebidas espirituosas”) to assemble the following sentence: “absinto licor & bebidas espirituosas”.
Parent categories for “absinthe” in AdGenerator UI
The phrase is tokenized as:
[absinto] [licor] [&] [bebidas] [espirituosas]
The example presents word-level tokens for simplicity, but in practice GPT-2 uses a subword tokenization scheme (byte pair encoding) as shown below:
[abs] [into] [lic] [or] [&] [bebidas] [espiri] [tuosas]
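For illustration, the sketch below applies byte-pair-encoding-style merges to rebuild the `[abs] [into]` split of "absinto" shown above. The merge table here is invented for the example; GPT-2's actual merge rules are learned from its training corpus.

```python
# Invented merge rules; rank = priority (lower merges first).
MERGES = {("a", "b"): 0, ("ab", "s"): 1, ("i", "n"): 2,
          ("in", "t"): 3, ("int", "o"): 4}

def bpe_tokenize(word, merges):
    """Start from single characters and greedily fuse the adjacent
    pair with the highest-priority learned merge rule."""
    tokens = list(word)
    while len(tokens) > 1:
        pairs = [(tokens[i], tokens[i + 1]) for i in range(len(tokens) - 1)]
        mergeable = [p for p in pairs if p in merges]
        if not mergeable:
            break                       # no rule applies: we are done
        best = min(mergeable, key=merges.get)
        i = pairs.index(best)           # merge the leftmost occurrence
        tokens[i:i + 2] = [tokens[i] + tokens[i + 1]]
    return tokens

print(bpe_tokenize("absinto", MERGES))  # ['abs', 'into']
```

Frequent fragments like "abs" end up as single tokens, while rare words simply decompose into smaller known pieces, so the model never encounters an out-of-vocabulary word.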
These tokens represent the context given to the model, from which new words are generated; that is, the following tokens, which will form the ad slogans, are predicted.
<context> [absinto] [licor] [&] [bebidas] [espirituosas] <slogan> PREDICTIONS
In this case, the model adjusted to the hyperdomain "Alimentos y Bebidas" (translated "Food and Beverages") does not receive a description of the product to be advertised as input, because that field is optional in the application. In general, the context is defined by the product description (if any), the product's domain (a required field), and its parent category:
<context> [<domain> <parent domain> <description>] <slogan> PREDICTIONS
Note that GPT-2 was previously trained with long texts (1024 tokens) and was not originally designed for short sentences such as slogans. Therefore, sequences are padded with special tokens so that we can work with variable-length text.
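A minimal sketch of how such a conditioning sequence might be assembled and padded to a fixed length. The marker and pad token names, the helper, and the length are all illustrative assumptions, not AdGenerator's actual implementation:

```python
PAD, CTX, SLOGAN = "<pad>", "<context>", "<slogan>"

def build_input(domain, parent, description=None, max_len=16):
    """Assemble <context> domain parent [description] <slogan> and
    pad with special tokens so every training example has the same
    length despite slogans being much shorter than 1024 tokens."""
    parts = [CTX, domain, parent] + ([description] if description else []) + [SLOGAN]
    tokens = " ".join(parts).split()          # crude whitespace tokenization
    tokens += [PAD] * (max_len - len(tokens))  # right-pad short sequences
    return tokens[:max_len]

print(build_input("absinto", "licor & bebidas espirituosas"))
```

During training, the loss on the `<pad>` positions would typically be masked out so the model only learns from real tokens.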
In practice, there are several methods for generating a sequence of tokens from a language model, such as:
● Greedy sampling.
● Beam search.
● Top-k sampling.
● Top-p sampling.
The "greedy sampling" or "greedy search" technique consists of always choosing the most likely token as the next word, which leads to very predictable and repetitive results. The main drawback of greedy search is that it misses high-probability words hidden behind a low-probability word. Instead, "beam search" generates multiple sequences at the same time and returns the sequence whose overall probability is highest. This means the algorithm may choose lower-probability tokens at some point in the text string but arrive at a final sequence more likely than the greedy one, although it is not guaranteed to find the best output, that is, the most likely one.
"Top-k" and "Top-p" sample tokens at random according to the probabilities given by the model, but the choice is made only from the K tokens with the highest probabilities, or from the top tokens that together represent at least probability P, i.e., the sum of whose probabilities reaches P. Each of these techniques, or a combination of the two, usually leads to better results for most applications. AdGenerator produces new text sequences (slogans) using both methods.
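A minimal sketch of both sampling schemes over a single hypothetical next-token distribution (the Portuguese tokens and probabilities are made up for the example):

```python
import random

def top_k(dist, k, rng):
    """Sample only among the k most probable tokens, weighted by
    their model probabilities."""
    top = sorted(dist, key=dist.get, reverse=True)[:k]
    return rng.choices(top, weights=[dist[t] for t in top])[0]

def top_p(dist, p, rng):
    """Nucleus sampling: sample among the smallest set of top tokens
    whose probabilities sum to at least p."""
    nucleus, total = [], 0.0
    for t in sorted(dist, key=dist.get, reverse=True):
        nucleus.append(t)
        total += dist[t]
        if total >= p:
            break
    return rng.choices(nucleus, weights=[dist[t] for t in nucleus])[0]

# Hypothetical distribution over the next slogan token.
dist = {"sabor": 0.5, "qualidade": 0.3, "tradição": 0.15, "magia": 0.05}
rng = random.Random(0)
print(top_k(dist, 2, rng))    # one of "sabor", "qualidade"
print(top_p(dist, 0.5, rng))  # always "sabor": the nucleus is just the top token
```

Both schemes cut off the long tail of unlikely tokens, so the output stays varied without degenerating into the incoherent words a fully random sample would occasionally pick.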