Scalability: is your problem WORM or RW?

I wanted to write an article about the secrets of scalability, but it appears that this subject is too complex for a single article. Instead, let’s just dissect some scalability problems as we go.

When you think about scalability, it is important to distinguish between two types of problems: those where data is read much more often than it is updated, and those where data is read about as often as, or even less often than, it is updated. The first type is called WORM (write once, read many); the second is called RW (read-write). It turns out that they are fundamentally different, and here is why. Continue reading

Posted in scalability | Comments Off

“Multi-armed bandit” A/B testing optimality proved?

Correct me if I’m wrong, but it seems that this paper proves the optimality of the “multi-armed bandit” approach to A/B testing. The latter was described in this post earlier this year.

For those who do not know what this is about: A/B testing requires an investment in the form of sample size (usually equal to the number of unique users), which means time and money. The “multi-armed bandit” approach is about optimising this investment.
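
To make the idea concrete, here is a minimal epsilon-greedy sketch in C. It illustrates the bandit principle, not the algorithm from the paper; the variant count, the value of EPSILON and the conversion rates in main() are made up:

#include <stdio.h>
#include <stdlib.h>

#define VARIANTS 2
#define EPSILON  0.1   /* fraction of traffic spent on exploration */

typedef struct {
    long trials;       /* how many times the variant was shown */
    long successes;    /* conversions observed for the variant */
} variant_t;

static double rate(const variant_t *v)
{
    return v->trials ? (double) v->successes / v->trials : 0.0;
}

/* Explore a random variant with probability EPSILON,
   otherwise exploit the variant with the best observed rate. */
static int pick_variant(const variant_t *v)
{
    int i, best = 0;

    if ((double) rand() / RAND_MAX < EPSILON)
        return rand() % VARIANTS;

    for (i = 1; i < VARIANTS; i++)
        if (rate(&v[i]) > rate(&v[best]))
            best = i;

    return best;
}

int main(void)
{
    variant_t v[VARIANTS] = { { 0, 0 }, { 0, 0 } };
    double true_rate[VARIANTS] = { 0.05, 0.07 };  /* hypothetical true conversion rates */
    int i, k;

    srand(42);

    for (i = 0; i < 100000; i++) {
        k = pick_variant(v);
        v[k].trials++;
        if ((double) rand() / RAND_MAX < true_rate[k])
            v[k].successes++;
    }

    for (i = 0; i < VARIANTS; i++)
        printf("variant %d: %ld trials, observed rate %.4f\n",
               i, v[i].trials, rate(&v[i]));

    return 0;
}

Over time the better variant receives most of the traffic, which is where the saved investment comes from.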

I wouldn’t say you’re ancient if you aren’t doing it already, but it’s interesting to see how abstract science creates new opportunities for business.

Posted in ab testing | Comments Off

Counting unique visitors

In version 1.0.2 of the redislog module I added a feature that allows you to do conditional logging. What can you do with it? For example, you can log only unique visitors:

userid         on;
userid_name    uid;
userid_expires 365d;
access_redislog test "$server_name$redislog_yyyymmdd" ifnot=$uid_got;

$uid_got is empty whenever a visitor doesn’t have a UID cookie yet. Therefore, this directive effectively logs only the first hit of each visitor, that is, one record per unique visitor. You can populate a list (one per virtual host and day, or hour) with unique-visitor records and count them with LLEN. For that, just use the LPUSH command instead of the default APPEND. Details can be found in the manual.
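
For example, assuming $server_name expands to example.com and $redislog_yyyymmdd to 20110515 (the exact key format depends on your configuration), the daily count is one command away:

redis-cli LLEN example.com20110515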

Posted in nginx | Comments Off

Better logging for nginx

Somehow the problem of logging was never completely addressed in nginx. I managed to implement two solutions for remote logging: the nginx socketlog module and the nginx redislog module. Each of these modules maintains one permanent connection per worker to the logging peer (a BSD syslog server or a redis database server). Messages are buffered in a 200k buffer even when the logging peer is offline, and are pushed to the peer as soon as possible.

If the logging connection gets interrupted, these modules try to re-establish it periodically and, if they succeed, the buffered messages get flushed to the remote peer. That is, if the logging peer closes the connection gracefully, you can restart it without restarting nginx.

In addition to that, the redis logging module is able to generate destination key names dynamically, so you can do some interesting tricks with it, e.g. keep one log per virtual host, or per IP, per day.
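
For example, reusing the directive syntax from the unique-visitors post above (test stands for a configured logging destination; $remote_addr is a standard nginx variable):

access_redislog test "$server_name$redislog_yyyymmdd";  # one key per virtual host per day
access_redislog test "$remote_addr$redislog_yyyymmdd";  # one key per client IP per day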

Take a look at the manual to get an idea of how it works.

Posted in nginx | 2 Comments

Configuration directives

In one of the previous articles I discussed the basics of HTTP modules. As the power of Nginx comes from its configuration files, you definitely need to know how to configure your module and make it ready for the variety of environments that its users may have. Here is how you can define configuration directives. Continue reading
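
As a preview, here is a minimal sketch of a directive definition; the module and field names are hypothetical, while ngx_command_t, ngx_conf_set_flag_slot and the flag constants are standard nginx API:

static ngx_command_t  ngx_http_mymodule_commands[] = {

    { ngx_string("mymodule"),                 /* directive name */
      NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_FLAG,
      ngx_conf_set_flag_slot,                 /* built-in on/off parser */
      NGX_HTTP_LOC_CONF_OFFSET,
      offsetof(ngx_http_mymodule_loc_conf_t, enable),
      NULL },

      ngx_null_command
};

This would let users write “mymodule on;” at http, server or location level.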

Posted in nginx | Comments Off

Book review finished!

Yesterday I received my copy of Nginx 1 Web Server Implementation Cookbook. This book is a joint effort of several Nginx enthusiasts. I am proud to be one of the reviewers.

Nginx 1 Web Server Implementation Cookbook is extremely useful for people who just want to know how to get going with Nginx. It contains a set of comprehensive recipes for various everyday situations. Enormous effort was invested in making the material in the book accessible to the reader, so that anyone who wants to use Nginx can easily jump in.

Enjoy reading!

Posted in site news | Comments Off

Measuring time spent on page

One of the challenges of A/B testing is insufficient observations due to low traffic. In other words, if you measured the conversion rate on a low-traffic web site like ours, it would take months or even years to get a conclusive result. What you can try to measure instead are microconversions and microobservations. That’s what I was up to recently. There are a couple of microobservation types I have identified so far: time spent and depth. Time spent is how much time, in seconds, a visitor has spent on the web site, and depth is how many clicks he made after seeing the landing page. As you might notice, every visit yields some time-spent and depth measurements, unless the visitor is a bot.

The other way you can enlarge your data set is by using visits instead of visitors. In the case of the time-spent and depth metrics, this makes much more sense.

I used the standard Nginx userid module to identify visitors. When a visitor requests a page, a special action in a C++ application is requested through a subrequest using the SSI module. This action registers the UID and the experiment in a memory table and assigns a variant (A or B). It then returns the variant in the response, and the variant gets stored in an Nginx variable. After that, I use the value of this variable to display the proper variant of the page.

In order to track time, I use a JavaScript snippet that sends periodic updates to the server. Nginx sends these requests to the C++ application via FastCGI, and the application updates the timestamps in the memory tables. The depth tracker works in the same way, but the tracking action gets invoked only when a page is loaded. Although periodic updates might produce an intensive load on the server even for medium-sized sites, as you might already know, for Nginx it’s a piece of cake.

A separate thread in the C++ application periodically saves the contents of the memory tables to a file, and that’s how the observations get stored permanently.

Of course, this application requires JavaScript working in the client’s browser, but who doesn’t have it nowadays? A positive side effect is that bots get filtered out automatically.

One of the interesting questions is: what statistical distribution do the time spent and the depth follow? My hypothesis was that they follow an exponential distribution, but this is still not completely clear to me. I spent some time implementing code for calculating the statistical properties of the exponential distribution; it is not trivial, and the results don’t look very trustworthy, so I haven’t had any success with the exponential distribution yet. Instead, I’m using normal distribution properties for the time spent and the depth at the moment. After removing outliers, these numbers look very trustworthy.
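
For the record, the point estimate for the exponential distribution itself is simple: the maximum-likelihood rate is the reciprocal of the sample mean. A minimal sketch in C, with made-up time-spent samples (the hard part mentioned above is the confidence calculations, not this estimate):

#include <stdio.h>

/* MLE of the rate parameter lambda for i.i.d. exponential data:
   lambda_hat = n / sum(x_i), i.e. the reciprocal of the sample mean. */
static double exp_rate_mle(const double *x, int n)
{
    double sum = 0.0;
    int i;

    for (i = 0; i < n; i++)
        sum += x[i];

    return n / sum;
}

int main(void)
{
    double seconds[] = { 12.0, 35.0, 7.0, 120.0, 44.0 };  /* hypothetical time-spent samples */
    int n = sizeof(seconds) / sizeof(seconds[0]);
    double lambda = exp_rate_mle(seconds, n);

    printf("lambda = %.4f per second, mean time spent = %.1f s\n",
           lambda, 1.0 / lambda);
    return 0;
}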

Posted in ab testing, nginx | 2 Comments

An HTTP module basics and configuration

In the previous article I explained how modules of all types link into Nginx. Now let’s look closer at the specifics of HTTP modules.

An HTTP module has the value NGX_HTTP_MODULE in its type field, and its ctx field points to a global instance of the structure ngx_http_module_t: Continue reading
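
For reference, this is how the structure is defined in the nginx 1.x source:

typedef struct {
    ngx_int_t   (*preconfiguration)(ngx_conf_t *cf);
    ngx_int_t   (*postconfiguration)(ngx_conf_t *cf);

    void       *(*create_main_conf)(ngx_conf_t *cf);
    char       *(*init_main_conf)(ngx_conf_t *cf, void *conf);

    void       *(*create_srv_conf)(ngx_conf_t *cf);
    char       *(*merge_srv_conf)(ngx_conf_t *cf, void *prev, void *conf);

    void       *(*create_loc_conf)(ngx_conf_t *cf);
    char       *(*merge_loc_conf)(ngx_conf_t *cf, void *prev, void *conf);
} ngx_http_module_t;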

Posted in nginx | 2 Comments

How your module plugs into Nginx

In previous articles I have deliberately omitted almost everything related to the question of linking your module with Nginx. It is important, however, that you know about it.

Let’s take a closer look at the metainformation that your module must contain, so that Nginx can initialise and configure it. Continue reading
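
As a preview, that metainformation is a global ngx_module_t; the ngx_http_mymodule_* names below are placeholders, everything else is the standard nginx declaration:

ngx_module_t  ngx_http_mymodule_module = {
    NGX_MODULE_V1,
    &ngx_http_mymodule_module_ctx,    /* module context */
    ngx_http_mymodule_commands,       /* module directives */
    NGX_HTTP_MODULE,                  /* module type */
    NULL,                             /* init master */
    NULL,                             /* init module */
    NULL,                             /* init process */
    NULL,                             /* init thread */
    NULL,                             /* exit thread */
    NULL,                             /* exit process */
    NULL,                             /* exit master */
    NGX_MODULE_V1_PADDING
};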

Posted in nginx | Comments Off

Working with cookies in an Nginx module

Imagine you run a PPC advertising campaign and you want to find out how many visitors coming from a search engine result in sales. We will create an Nginx module and use cookies for this purpose. Whenever a visitor clicks on your ad, a landing page is requested with a tracking argument in it. The tracking argument looks like this: ‘?source=whatever’. We will put the content of the tracking argument into a cookie, called the source cookie, and write it into a log file. Whenever a visitor makes a transaction (e.g. buys an article or makes a booking), the name of the source will be recorded, and we will be able to easily attribute every transaction to a source.

Let’s start by declaring a structure that will contain the configuration of our module: Continue reading
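
A sketch of what such a structure might look like; the field names are hypothetical, while ngx_flag_t and ngx_str_t are standard nginx types:

typedef struct {
    ngx_flag_t  enable;        /* hypothetical: turns source tracking on or off */
    ngx_str_t   cookie_name;   /* hypothetical: name of the source cookie */
} ngx_http_source_cookie_conf_t;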

Posted in nginx | 4 Comments