Author Archives: vkholodkov

Horizontal Partitioning

So how do you store a lot of data when it is already over your head? The simplest answer is: partition horizontally; in other words, divide and conquer. With horizontal partitioning, each data instance is stored on a single storage node, so you can increase your total capacity by adding more nodes.

For horizontal partitioning to work, data instances must have no dependencies on each other. If this condition is satisfied, there is no need to visit multiple storage nodes to reconcile a single data instance.
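
To make this concrete, here is a minimal sketch of the routing step under the simplest possible scheme, a modulo over a hypothetical list of nodes (the node names and the numeric user id are only placeholders for illustration):

#include <stdio.h>

/* A hypothetical list of storage nodes; in a real deployment this would
   come from configuration. */
static const char *nodes[] = { "db1", "db2", "db3", "db4" };
#define NODE_COUNT (sizeof(nodes) / sizeof(nodes[0]))

/* Everything that belongs to one user lives on exactly one node, so a
   request never has to visit more than one node. */
static const char *node_for_user(unsigned long user_id)
{
    return nodes[user_id % NODE_COUNT];
}

int main(void)
{
    printf("user 42 lives on %s\n", node_for_user(42));
    printf("user 43 lives on %s\n", node_for_user(43));
    return 0;
}

A real setup would typically use something smarter than a plain modulo, e.g. consistent hashing, so that adding a node does not reshuffle every key.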

Sounds easy, eh? But it is not as easy as it seems… Continue reading

Comments are back!

A long time ago I disabled comments because I was getting a lot of spam. Apparently it was man-made spam, because it got through the CAPTCHA. Today I installed Social and also added Twitter and Facebook accounts.

Please comment and follow!

Reading list: Foundations of Statistical Natural Language Processing

Recently I was looking into making my NLP knowledge more solid, and a reference led me to this book: Foundations of Statistical Natural Language Processing. It’s a classic, and it was certainly a good read.

Now, the topics it discusses might sound quite theoretical, so let me translate them into a few examples of how each of them could be applied in your work.
Continue reading

Scalability: is your problem WORM or RW?

I wanted to write an article about the secrets of scalability, but it appears that this subject is too complex for one article. Instead, let’s just dissect some scalability problems as we go.

When you think about scalability, it is important to distinguish two different types of problems: those that require reading much more often than updating, and those that require reading as often as or even less often than updating. The first type of problem is called WORM (write once, read many); the second is called RW (read-write). It turns out that they are fundamentally different, and here is why. Continue reading

“Multi-armed bandit” A/B testing optimality proved?

Correct me if I’m wrong, but it seems that this paper proves the optimality of the “multi-armed bandit” approach to A/B testing. The latter was described in this post earlier this year.

For those who do not know what this is about: A/B testing requires an investment in the form of sample size (usually equal to the number of unique users), which means time and money. The “multi-armed bandit” approach is about optimising this investment.
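
As a toy illustration of the idea (not the formulation from the paper), here is an epsilon-greedy sketch: most of the traffic goes to the variant that currently looks best, and only a small fraction is spent on exploration. The conversion rates and the epsilon value below are made up.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define VARIANTS 2
#define EPSILON  0.1    /* fraction of traffic still used for exploration */

static long shown[VARIANTS], converted[VARIANTS];

/* Show the currently best variant most of the time, a random one otherwise. */
static int pick_variant(void)
{
    if ((double) rand() / RAND_MAX < EPSILON)
        return rand() % VARIANTS;

    int best = 0;
    double best_rate = -1.0;
    for (int v = 0; v < VARIANTS; v++) {
        double rate = shown[v] ? (double) converted[v] / shown[v] : 1.0;
        if (rate > best_rate) { best_rate = rate; best = v; }
    }
    return best;
}

int main(void)
{
    /* Made-up "true" conversion rates, unknown to the algorithm. */
    const double true_rate[VARIANTS] = { 0.05, 0.07 };

    srand((unsigned) time(NULL));
    for (long visitor = 0; visitor < 100000; visitor++) {
        int v = pick_variant();
        shown[v]++;
        if ((double) rand() / RAND_MAX < true_rate[v])
            converted[v]++;
    }
    for (int v = 0; v < VARIANTS; v++)
        printf("variant %c: shown %ld, converted %ld\n",
               'A' + v, shown[v], converted[v]);
    return 0;
}

The point is that the losing variant stops receiving most of the traffic once enough evidence accumulates, which is where the saving comes from.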

I wouldn’t say you’re ancient if you aren’t doing it already, but it’s interesting to see how abstract science creates new opportunities for business.

Counting unique visitors

In version 1.0.2 of the redislog module I added a feature that allows you to do conditional logging. What can you do with it? You can, for example, log only unique visitors:

userid         on;
userid_name    uid;
userid_expires 365d;
access_redislog test "$server_name$redislog_yyyymmdd" ifnot=$uid_got;

$uid_got is empty whenever a visitor does not have a UID cookie yet, so this directive effectively logs each unique visitor once, on their first hit. You can populate a list (one per virtual host and day or hour) with unique visitor records and count them with LLEN. To do that, just use the LPUSH command instead of the default APPEND. Details can be found in the manual.
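
For example, assuming the list key follows the pattern configured above ($server_name followed by the date), the count can be read back with a few lines of hiredis; the host, port and key name here are placeholders:

#include <stdio.h>
#include <hiredis/hiredis.h>

int main(void)
{
    /* Placeholder host, port and key name -- adjust to your setup. */
    redisContext *c = redisConnect("127.0.0.1", 6379);
    if (c == NULL || c->err) {
        fprintf(stderr, "cannot connect to redis\n");
        return 1;
    }

    /* LLEN of the per-day list is the number of unique visitor records. */
    redisReply *reply = redisCommand(c, "LLEN %s", "www.example.com20130101");
    if (reply != NULL && reply->type == REDIS_REPLY_INTEGER)
        printf("unique visitors: %lld\n", reply->integer);

    if (reply != NULL)
        freeReplyObject(reply);
    redisFree(c);
    return 0;
}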

Better logging for nginx

Somehow the problem of logging was never completely addressed in nginx. I managed to implement two solutions for remote logging: the nginx socketlog module and the nginx redislog module. Each of these modules maintains one permanent connection per worker to the logging peer (a BSD syslog server or a Redis database server). Messages are buffered in a 200k buffer, even when the logging peer is offline, and pushed to the peer as soon as possible.

If the logging connection is interrupted, these modules try to reestablish it periodically, and once they succeed, the buffered messages are flushed to the remote peer. That means that if the logging peer closes the connection gracefully, you can restart it without restarting nginx.
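
Conceptually the buffering works roughly like the sketch below; this is a simplified illustration of the idea, not the modules’ actual code, and peer_send() is just a stub:

#include <stdio.h>
#include <string.h>

#define LOG_BUF_SIZE (200 * 1024)    /* the modules use a 200k buffer */

static char   log_buf[LOG_BUF_SIZE];
static size_t log_len;

/* Stub standing in for "write to the syslog/redis peer"; returns 0 on success. */
static int peer_send(const char *data, size_t len)
{
    (void) data; (void) len;
    return 0;
}

static void log_message(const char *line, size_t len, int peer_online)
{
    /* Append to the buffer while there is room (what happens on overflow is
       an implementation detail not covered here). */
    if (log_len + len <= LOG_BUF_SIZE) {
        memcpy(log_buf + log_len, line, len);
        log_len += len;
    }

    /* Flush everything buffered as soon as the peer is reachable again. */
    if (peer_online && log_len > 0 && peer_send(log_buf, log_len) == 0)
        log_len = 0;
}

int main(void)
{
    log_message("GET / 200\n", 10, 0);   /* peer offline: only buffered */
    log_message("GET /a 200\n", 11, 1);  /* peer back online: buffer flushed */
    printf("bytes still buffered: %zu\n", log_len);
    return 0;
}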

In addition to that, the Redis logging module is able to generate destination key names dynamically, so you can do some interesting tricks with it, e.g. keeping one log per server or per IP per day.

Take a look at the manual to get an idea of how it works.

Configuration directives

In one of the previous articles I discussed the basics of HTTP modules. As the power of Nginx comes from its configuration files, you definitely need to know how to configure your module and make it ready for the variety of environments that its users may have. Here is how you can define configuration directives. Continue reading
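
As a teaser, a directive is declared as an entry in the module’s ngx_command_t array, roughly like the fragment below (the module, directive and configuration structure names are made up for illustration):

#include <ngx_config.h>
#include <ngx_core.h>
#include <ngx_http.h>

/* Hypothetical per-location configuration of an example module. */
typedef struct {
    ngx_str_t  greeting;
} ngx_http_example_loc_conf_t;

static ngx_command_t  ngx_http_example_commands[] = {

    /* "example_greeting some_text;" -- takes exactly one argument and is
       allowed in http, server and location blocks. */
    { ngx_string("example_greeting"),
      NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
      ngx_conf_set_str_slot,
      NGX_HTTP_LOC_CONF_OFFSET,
      offsetof(ngx_http_example_loc_conf_t, greeting),
      NULL },

      ngx_null_command
};

The set handler (ngx_conf_set_str_slot here) is what actually parses the argument into the configuration structure; nginx ships a family of such ready-made slot handlers for common value types.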

Book review finished!

Yesterday I received my copy of Nginx 1 Web Server Implementation Cookbook. This book is a joint effort of several Nginx enthusiasts. I am proud to be one of the reviewers.

Nginx 1 Web Server Implementation Cookbook is extremely useful for people who just want to know how to get going with Nginx. It contains a set of comprehensive recipes for various everyday situations. Enormous effort was invested in making the material in the book accessible to the reader, so that anyone who wants to use Nginx can easily jump in.

Enjoy reading!

Measuring time spent on page

One of the challenges of A/B testing is insufficient observations due to low traffic. In other words, if you measured the conversion rate on our web site, it would take months or even years to get a conclusive result. What you can try to measure instead are microconversions and microobservations. That’s what I was up to recently. There are a couple of microobservation types I have identified so far: time spent and depth. Time spent is basically how much time a visitor has spent on the web site, in seconds, and depth is how many clicks they made after seeing the landing page. As you might notice, you always have some time spent and depth measurements, unless the visitor is a bot.

The other way you can enlarge your data set is to use visits instead of visitors. For the time spent and depth metrics this makes much more sense.

I used the standard Nginx userid module to identify visitors. When a visitor requests a page, a special action in the C++ application is invoked through a subrequest using the SSI module. This action registers the UID and the experiment in an in-memory table and assigns a variant (A or B). It then returns the variant in its response, and the variant is stored in an Nginx variable. After that I use the value of this variable to display the proper variant of the page. Continue reading
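
One simple way to make such an assignment deterministic, so that the same visitor always gets the same variant, is to hash the UID together with the experiment name; the sketch below is only an illustration of that idea, not the actual C++ application, and the UID and experiment name are placeholders:

#include <stdio.h>

/* One possible deterministic assignment: hash the visitor UID together with
   the experiment name, so the same visitor always gets the same variant.
   Illustration only -- not the actual application code. */
static char assign_variant(const char *uid, const char *experiment)
{
    unsigned long h = 2166136261UL;    /* FNV-1a */
    const char *parts[2] = { uid, experiment };

    for (int i = 0; i < 2; i++)
        for (const char *p = parts[i]; *p; p++) {
            h ^= (unsigned char) *p;
            h *= 16777619UL;
        }

    return (h % 2 == 0) ? 'A' : 'B';
}

int main(void)
{
    /* Placeholder UID and experiment name. */
    printf("%c\n", assign_variant("CFA8D5DE52350654D0D92A8B02030303", "landing_page"));
    return 0;
}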