
nginx patches for Igor Sysoev



Hi.

Tomash (Tomash Brechko <tomash.brechko@xxxxxxxxx>) has sent patches,
written by him with some contributions from Maxim Dounin, for more
correct operation with several memcached servers and with compressed
entries.  nginx with these patches has been running on my servers for
more than half a year.  The patches are compatible with the
Cache::Memcached::Fast Perl module.  If you set the configuration flag
that marks compression, compressed entries can be served even if they
were stored in memcached by other clients; different clients use
different bits for this flag.  The algorithm that distributes keys
across the memcached servers can also be changed: the namespace can
either be taken into account or ignored when the memcached backend is
chosen.  Ketama hashing is implemented as well.
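
To give a rough idea of how the pieces fit together, here is a
configuration sketch (the directives come from the patches below; the
flag value 2, the addresses and the "prefix" namespace are made-up
examples, and the exact flag bit depends on the client that stores the
data):

  upstream memcached_cluster {
      memcached_hash  ketama_points=150  weight_scale=10;

      server  10.0.0.1:11211  weight=10;
      server  10.0.0.2:11211  weight=10;
  }

  server {
      location / {
          set                  $memcached_key        "$uri$is_args$args";
          set                  $memcached_namespace  "prefix";
          memcached_gzip_flag  2;   # bit the storing client uses for compressed values
          gunzip               on;  # decompress for clients that do not accept gzip
          memcached_pass       memcached_cluster;
      }
  }
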
Here is what Tomash wrote:

----------------------------------------------------------------------

Attached are patches for Igor Sysoev.  All of them are against 0.7.6.
Each patch carries a comment describing what exactly it does.  Also give
him these links:

  http://openhack.ru/nginx-patched/wiki/MemcachedGzip
  http://openhack.ru/nginx-patched/wiki/MemcachedHash

They may help to understand what exactly we implemented.  I myself no
longer know this code well enough to answer questions like "why is it
done this way here?", so let him try to figure it out on his own where
possible.  In principle memcached_hash can be included in a release as
is, with any bugs (if there are any) delegated to me.  So the only
complex patch is the gunzip one, and even there things are not so
tragic: decompression has been added to the compression code, driven by
the r->gunzip flag.  It can be figured out.  There is also a small fix
with the following commit message (I have not split it out into a
separate patch for now):

    Move (ctx->zstream.avail_out == 0) test after (rc == Z_STREAM_END) test.
    
    (ctx->zstream.avail_out == 0) doesn't necessarily mean that zlib will
    output more data.  First we test (ctx->flush == Z_SYNC_FLUSH).  If true,
    we flush the current buffer, and there will be one more buffer with the
    trailer.  If false, we test (rc == Z_STREAM_END) and act accordingly.
    Only if this test is false does (ctx->zstream.avail_out == 0) mean that
    there will be more output from zlib.

    You may test the effect of this patch by setting the buffer size with
    gzip_buffers to some small value, say 1 byte.  Actually, the bug may be
    triggered with any buffer size when the last chunk of compressed data
    fills the whole buffer.  Note that during the 1-byte test a warning may
    be given that a zero-size buffer was in the write queue.  This happens
    when gzip flushes the last buffer, enqueues the next empty buffer
    (of 1 byte), but the 8-byte trailer doesn't fit there, so this empty buffer
    is passed down as-is.
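
For illustration, a minimal configuration for the test described in the
commit message (the buffer count of 4 is an arbitrary choice; only the
1-byte buffer size matters here):

  gzip          on;
  gzip_buffers  4 1;   # four output buffers of 1 byte each, purely to provoke the edge case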


Anyway, you can forward all of this to Igor.  And good luck to him ;).


-- 
   Tomash Brechko
 
From e049fed661bad9beb8a40db3153f9042cb5836e1 Mon Sep 17 00:00:00 2001
From: Tomash Brechko <tomash.brechko@xxxxxxxxx>
Date: Sat, 19 Jul 2008 11:20:50 +0400
Subject: [PATCH] Enhance next upstream logic.

Fix 503, process 502, 504, 507 HTTP status codes.

Most of the code is by Maxim Dounin.
---
 server/src/http/modules/ngx_http_fastcgi_module.c |    3 +
 server/src/http/modules/ngx_http_proxy_module.c   |    3 +
 server/src/http/ngx_http_upstream.c               |   52 +++++++++++++++++++++
 server/src/http/ngx_http_upstream.h               |    3 +
 4 files changed, 61 insertions(+), 0 deletions(-)

diff --git a/server/src/http/modules/ngx_http_fastcgi_module.c b/server/src/http/modules/ngx_http_fastcgi_module.c
index b975c06..525d535 100644
--- a/server/src/http/modules/ngx_http_fastcgi_module.c
+++ b/server/src/http/modules/ngx_http_fastcgi_module.c
@@ -144,7 +144,10 @@ static ngx_conf_bitmask_t  ngx_http_fastcgi_next_upstream_masks[] = {
     { ngx_string("timeout"), NGX_HTTP_UPSTREAM_FT_TIMEOUT },
     { ngx_string("invalid_header"), NGX_HTTP_UPSTREAM_FT_INVALID_HEADER },
     { ngx_string("http_500"), NGX_HTTP_UPSTREAM_FT_HTTP_500 },
+    { ngx_string("http_502"), NGX_HTTP_UPSTREAM_FT_HTTP_502 },
     { ngx_string("http_503"), NGX_HTTP_UPSTREAM_FT_HTTP_503 },
+    { ngx_string("http_504"), NGX_HTTP_UPSTREAM_FT_HTTP_504 },
+    { ngx_string("http_507"), NGX_HTTP_UPSTREAM_FT_HTTP_507 },
     { ngx_string("http_404"), NGX_HTTP_UPSTREAM_FT_HTTP_404 },
     { ngx_string("off"), NGX_HTTP_UPSTREAM_FT_OFF },
     { ngx_null_string, 0 }
diff --git a/server/src/http/modules/ngx_http_proxy_module.c b/server/src/http/modules/ngx_http_proxy_module.c
index 8a7f2ab..2b3f4a2 100644
--- a/server/src/http/modules/ngx_http_proxy_module.c
+++ b/server/src/http/modules/ngx_http_proxy_module.c
@@ -148,7 +148,10 @@ static ngx_conf_bitmask_t  ngx_http_proxy_next_upstream_masks[] = {
     { ngx_string("timeout"), NGX_HTTP_UPSTREAM_FT_TIMEOUT },
     { ngx_string("invalid_header"), NGX_HTTP_UPSTREAM_FT_INVALID_HEADER },
     { ngx_string("http_500"), NGX_HTTP_UPSTREAM_FT_HTTP_500 },
+    { ngx_string("http_502"), NGX_HTTP_UPSTREAM_FT_HTTP_502 },
     { ngx_string("http_503"), NGX_HTTP_UPSTREAM_FT_HTTP_503 },
+    { ngx_string("http_504"), NGX_HTTP_UPSTREAM_FT_HTTP_504 },
+    { ngx_string("http_507"), NGX_HTTP_UPSTREAM_FT_HTTP_507 },
     { ngx_string("http_404"), NGX_HTTP_UPSTREAM_FT_HTTP_404 },
     { ngx_string("off"), NGX_HTTP_UPSTREAM_FT_OFF },
     { ngx_null_string, 0 }
diff --git a/server/src/http/ngx_http_upstream.c b/server/src/http/ngx_http_upstream.c
index 27f27ae..7dd99ba 100644
--- a/server/src/http/ngx_http_upstream.c
+++ b/server/src/http/ngx_http_upstream.c
@@ -1213,6 +1213,45 @@ ngx_http_upstream_process_header(ngx_event_t *rev)
         }
     }
 
+    if (u->headers_in.status_n == NGX_HTTP_BAD_GATEWAY) {
+
+        if (u->peer.tries > 1
+            && u->conf->next_upstream & NGX_HTTP_UPSTREAM_FT_HTTP_502)
+        {
+            ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_HTTP_502);
+            return;
+        }
+    }
+
+    if (u->headers_in.status_n == NGX_HTTP_SERVICE_UNAVAILABLE) {
+
+        if (u->peer.tries > 1
+            && u->conf->next_upstream & NGX_HTTP_UPSTREAM_FT_HTTP_503)
+        {
+            ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_HTTP_503);
+            return;
+        }
+    }
+
+    if (u->headers_in.status_n == NGX_HTTP_GATEWAY_TIME_OUT) {
+
+        if (u->peer.tries > 1
+            && u->conf->next_upstream & NGX_HTTP_UPSTREAM_FT_HTTP_504)
+        {
+            ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_HTTP_504);
+            return;
+        }
+    }
+
+    if (u->headers_in.status_n == NGX_HTTP_INSUFFICIENT_STORAGE) {
+
+        if (u->peer.tries > 1
+            && u->conf->next_upstream & NGX_HTTP_UPSTREAM_FT_HTTP_507)
+        {
+            ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_HTTP_507);
+            return;
+        }
+    }
 
     if (u->headers_in.status_n >= NGX_HTTP_BAD_REQUEST
         && u->conf->intercept_errors)
@@ -2239,6 +2278,7 @@ ngx_http_upstream_next(ngx_http_request_t *r, ngx_http_upstream_t *u,
     } else {
         switch(ft_type) {
 
+        case NGX_HTTP_UPSTREAM_FT_HTTP_504:
         case NGX_HTTP_UPSTREAM_FT_TIMEOUT:
             status = NGX_HTTP_GATEWAY_TIME_OUT;
             break;
@@ -2251,6 +2291,18 @@ ngx_http_upstream_next(ngx_http_request_t *r, ngx_http_upstream_t *u,
             status = NGX_HTTP_NOT_FOUND;
             break;
 
+        case NGX_HTTP_UPSTREAM_FT_HTTP_502:
+            status = NGX_HTTP_BAD_GATEWAY;
+            break;
+
+        case NGX_HTTP_UPSTREAM_FT_HTTP_503:
+            status = NGX_HTTP_SERVICE_UNAVAILABLE;
+            break;
+
+        case NGX_HTTP_UPSTREAM_FT_HTTP_507:
+            status = NGX_HTTP_INSUFFICIENT_STORAGE;
+            break;
+
         /*
          * NGX_HTTP_UPSTREAM_FT_BUSY_LOCK and NGX_HTTP_UPSTREAM_FT_MAX_WAITING
          * never reach here
diff --git a/server/src/http/ngx_http_upstream.h b/server/src/http/ngx_http_upstream.h
index 2ed2797..6754d65 100644
--- a/server/src/http/ngx_http_upstream.h
+++ b/server/src/http/ngx_http_upstream.h
@@ -24,6 +24,9 @@
 #define NGX_HTTP_UPSTREAM_FT_HTTP_404        0x00000040
 #define NGX_HTTP_UPSTREAM_FT_BUSY_LOCK       0x00000080
 #define NGX_HTTP_UPSTREAM_FT_MAX_WAITING     0x00000100
+#define NGX_HTTP_UPSTREAM_FT_HTTP_502        0x00000200
+#define NGX_HTTP_UPSTREAM_FT_HTTP_504        0x00000400
+#define NGX_HTTP_UPSTREAM_FT_HTTP_507        0x00000800
 #define NGX_HTTP_UPSTREAM_FT_NOLIVE          0x40000000
 #define NGX_HTTP_UPSTREAM_FT_OFF             0x80000000
 
-- 
1.5.5.1.116.ge4b9c
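
For context, once this patch is applied the new failure conditions can
be listed in the proxy_next_upstream and fastcgi_next_upstream
directives; a sketch (the backend name is made up):

  location / {
      proxy_pass           http://backend;
      proxy_next_upstream  error timeout http_502 http_503 http_504 http_507;
  }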

From 9d7dec95ab584acedda805ac366926d04b339eff Mon Sep 17 00:00:00 2001
From: Tomash Brechko <tomash.brechko@xxxxxxxxx>
Date: Sat, 19 Jul 2008 11:26:33 +0400
Subject: [PATCH] Gunzip the reply if ngx_http_request_t::gunzip is set.

---
 .../src/http/modules/ngx_http_gzip_filter_module.c |  358 ++++++++++++--------
 server/src/http/ngx_http_core_module.c             |   11 +
 server/src/http/ngx_http_core_module.h             |    1 +
 server/src/http/ngx_http_request.h                 |    1 +
 4 files changed, 232 insertions(+), 139 deletions(-)

diff --git a/server/src/http/modules/ngx_http_gzip_filter_module.c b/server/src/http/modules/ngx_http_gzip_filter_module.c
index 1ff45ea..5f6b543 100644
--- a/server/src/http/modules/ngx_http_gzip_filter_module.c
+++ b/server/src/http/modules/ngx_http_gzip_filter_module.c
@@ -212,32 +212,38 @@ ngx_http_gzip_header_filter(ngx_http_request_t *r)
 
     conf = ngx_http_get_module_loc_conf(r, ngx_http_gzip_filter_module);
 
-    if (!conf->enable
-        || (r->headers_out.status != NGX_HTTP_OK
-            && r->headers_out.status != NGX_HTTP_FORBIDDEN
-            && r->headers_out.status != NGX_HTTP_NOT_FOUND)
+    if ((r->headers_out.status != NGX_HTTP_OK
+         && r->headers_out.status != NGX_HTTP_FORBIDDEN
+         && r->headers_out.status != NGX_HTTP_NOT_FOUND)
         || r->header_only
-        || r->headers_out.content_type.len == 0
-        || (r->headers_out.content_encoding
-            && r->headers_out.content_encoding->value.len)
-        || (r->headers_out.content_length_n != -1
-            && r->headers_out.content_length_n < conf->min_length)
-        || ngx_http_gzip_ok(r) != NGX_OK)
+        || r->headers_out.content_type.len == 0)
     {
         return ngx_http_next_header_filter(r);
     }
 
-    type = conf->types->elts;
-    for (i = 0; i < conf->types->nelts; i++) {
-        if (r->headers_out.content_type.len >= type[i].len
-            && ngx_strncasecmp(r->headers_out.content_type.data,
-                               type[i].data, type[i].len) == 0)
+    if (!r->gunzip) {
+        if (!conf->enable
+            || (r->headers_out.content_encoding
+                && r->headers_out.content_encoding->value.len)
+            || (r->headers_out.content_length_n != -1
+                && r->headers_out.content_length_n < conf->min_length)
+            || ngx_http_gzip_ok(r) != NGX_OK)
         {
-            goto found;
+            return ngx_http_next_header_filter(r);
         }
-    }
 
-    return ngx_http_next_header_filter(r);
+        type = conf->types->elts;
+        for (i = 0; i < conf->types->nelts; i++) {
+            if (r->headers_out.content_type.len >= type[i].len
+                && ngx_strncasecmp(r->headers_out.content_type.data,
+                                   type[i].data, type[i].len) == 0)
+            {
+                goto found;
+            }
+        }
+
+        return ngx_http_next_header_filter(r);
+    }
 
 found:
 
@@ -250,18 +256,20 @@ found:
 
     ctx->request = r;
 
-    h = ngx_list_push(&r->headers_out.headers);
-    if (h == NULL) {
-        return NGX_ERROR;
-    }
+    if (!r->gunzip) {
+        h = ngx_list_push(&r->headers_out.headers);
+        if (h == NULL) {
+            return NGX_ERROR;
+        }
 
-    h->hash = 1;
-    h->key.len = sizeof("Content-Encoding") - 1;
-    h->key.data = (u_char *) "Content-Encoding";
-    h->value.len = sizeof("gzip") - 1;
-    h->value.data = (u_char *) "gzip";
+        h->hash = 1;
+        h->key.len = sizeof("Content-Encoding") - 1;
+        h->key.data = (u_char *) "Content-Encoding";
+        h->value.len = sizeof("gzip") - 1;
+        h->value.data = (u_char *) "gzip";
 
-    r->headers_out.content_encoding = h;
+        r->headers_out.content_encoding = h;
+    }
 
     ctx->length = r->headers_out.content_length_n;
 
@@ -284,6 +292,7 @@ ngx_http_gzip_body_filter(ngx_http_request_t *r, ngx_chain_t *in)
     ngx_chain_t           *cl, out;
     ngx_http_gzip_ctx_t   *ctx;
     ngx_http_gzip_conf_t  *conf;
+    const char            *method = (r->gunzip ? "inflate" : "deflate");
 
     ctx = ngx_http_get_module_ctx(r, ngx_http_gzip_filter_module);
 
@@ -294,10 +303,10 @@ ngx_http_gzip_body_filter(ngx_http_request_t *r, ngx_chain_t *in)
     conf = ngx_http_get_module_loc_conf(r, ngx_http_gzip_filter_module);
 
     if (ctx->preallocated == NULL) {
-        wbits = conf->wbits;
+        wbits = (!r->gunzip ? conf->wbits : 15);
         memlevel = conf->memlevel;
 
-        if (ctx->length > 0) {
+        if (!r->gunzip && ctx->length > 0) {
 
             /* the actual zlib window size is smaller by 262 bytes */
 
@@ -332,46 +341,55 @@ ngx_http_gzip_body_filter(ngx_http_request_t *r, ngx_chain_t *in)
         ctx->zstream.zfree = ngx_http_gzip_filter_free;
         ctx->zstream.opaque = ctx;
 
-        rc = deflateInit2(&ctx->zstream, (int) conf->level, Z_DEFLATED,
-                          -wbits, memlevel, Z_DEFAULT_STRATEGY);
+        if (!r->gunzip) {
+            rc = deflateInit2(&ctx->zstream, (int) conf->level, Z_DEFLATED,
+                              -wbits, memlevel, Z_DEFAULT_STRATEGY);
+        } else {
+            rc = inflateInit2(&ctx->zstream, 15 + 16);
+        }
 
         if (rc != Z_OK) {
             ngx_log_error(NGX_LOG_ALERT, r->connection->log, 0,
-                          "deflateInit2() failed: %d", rc);
+                          "%sInit2() failed: %d", method, rc);
             ngx_http_gzip_error(ctx);
             return NGX_ERROR;
         }
 
-        b = ngx_pcalloc(r->pool, sizeof(ngx_buf_t));
-        if (b == NULL) {
-            ngx_http_gzip_error(ctx);
-            return NGX_ERROR;
-        }
+        if (!r->gunzip) {
+            b = ngx_pcalloc(r->pool, sizeof(ngx_buf_t));
+            if (b == NULL) {
+                ngx_http_gzip_error(ctx);
+                return NGX_ERROR;
+            }
 
-        b->memory = 1;
-        b->pos = gzheader;
-        b->last = b->pos + 10;
+            b->memory = 1;
+            b->pos = gzheader;
+            b->last = b->pos + 10;
 
-        out.buf = b;
-        out.next = NULL;
+            out.buf = b;
+            out.next = NULL;
 
-        /*
-         * We pass the gzheader to the next filter now to avoid its linking
-         * to the ctx->busy chain.  zlib does not usually output the compressed
-         * data in the initial iterations, so the gzheader that was linked
-         * to the ctx->busy chain would be flushed by ngx_http_write_filter().
-         */
+            /*
+             * We pass the gzheader to the next filter now to avoid
+             * its linking to the ctx->busy chain.  zlib does not
+             * usually output the compressed data in the initial
+             * iterations, so the gzheader that was linked to the
+             * ctx->busy chain would be flushed by
+             * ngx_http_write_filter().
+             */
 
-        if (ngx_http_next_body_filter(r, &out) == NGX_ERROR) {
-            ngx_http_gzip_error(ctx);
-            return NGX_ERROR;
+            if (ngx_http_next_body_filter(r, &out) == NGX_ERROR) {
+                ngx_http_gzip_error(ctx);
+                return NGX_ERROR;
+            }
+
+            ctx->crc32 = crc32(0L, Z_NULL, 0);
         }
 
         r->connection->buffered |= NGX_HTTP_GZIP_BUFFERED;
 
         ctx->last_out = &ctx->out;
 
-        ctx->crc32 = crc32(0L, Z_NULL, 0);
         ctx->flush = Z_NO_FLUSH;
     }
 
@@ -422,7 +440,29 @@ ngx_http_gzip_body_filter(ngx_http_request_t *r, ngx_chain_t *in)
                 /**/
 
                 if (ctx->in_buf->last_buf) {
-                    ctx->flush = Z_FINISH;
+                    /*
+                     * We use Z_BLOCK below for the following reason:
+                     * if we would use Z_FINISH, then decompression
+                     * may return Z_BUF_ERROR, meaning there wasn't
+                     * enough room for decompressed data.  This error
+                     * is not fatal according to zlib.h, however
+                     * ignoring it is dangerous and may mask real
+                     * bugs.  For instance, sometimes completely empty
+                     * last buffer is passed to this filter, and
+                     * decompression would enter the infinite loop: no
+                     * progress is possible because input is void and
+                     * Z_BUF_ERROR is ignored.  We can't use
+                     * Z_NO_FLUSH, because it will never return
+                     * Z_STREAM_END.  We can't use Z_SYNC_FLUSH,
+                     * because it has a special meaning.  So we use
+                     * Z_BLOCK, which eventually would return
+                     * Z_STREAM_END.
+                     *
+                     * Perhaps the above is also true for compression,
+                     * but right now we won't try to change the old
+                     * behaviour.
+                     */
+                    ctx->flush = (!r->gunzip ? Z_FINISH : Z_BLOCK);
 
                 } else if (ctx->in_buf->flush) {
                     ctx->flush = Z_SYNC_FLUSH;
@@ -433,14 +473,14 @@ ngx_http_gzip_body_filter(ngx_http_request_t *r, ngx_chain_t *in)
                         continue;
                     }
 
-                } else {
+                } else if (!r->gunzip) {
                     ctx->crc32 = crc32(ctx->crc32, ctx->zstream.next_in,
                                        ctx->zstream.avail_in);
                 }
             }
 
 
-            /* is there a space for the gzipped data ? */
+            /* is there a space for the output data ? */
 
             if (ctx->zstream.avail_out == 0) {
 
@@ -469,30 +509,36 @@ ngx_http_gzip_body_filter(ngx_http_request_t *r, ngx_chain_t *in)
                 ctx->zstream.avail_out = conf->bufs.size;
             }
 
-            ngx_log_debug6(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
-                         "deflate in: ni:%p no:%p ai:%ud ao:%ud fl:%d redo:%d",
+            ngx_log_debug7(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
+                         "%s in: ni:%p no:%p ai:%ud ao:%ud fl:%d redo:%d",
+                         method,
                          ctx->zstream.next_in, ctx->zstream.next_out,
                          ctx->zstream.avail_in, ctx->zstream.avail_out,
                          ctx->flush, ctx->redo);
 
-            rc = deflate(&ctx->zstream, ctx->flush);
+            if (!r->gunzip) {
+                rc = deflate(&ctx->zstream, ctx->flush);
+            } else {
+                rc = inflate(&ctx->zstream, ctx->flush);
+            }
 
             if (rc != Z_OK && rc != Z_STREAM_END) {
                 ngx_log_error(NGX_LOG_ALERT, r->connection->log, 0,
-                              "deflate() failed: %d, %d", ctx->flush, rc);
+                              "%s() failed: %d, %d", method, ctx->flush, rc);
                 ngx_http_gzip_error(ctx);
                 return NGX_ERROR;
             }
 
-            ngx_log_debug5(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
-                           "deflate out: ni:%p no:%p ai:%ud ao:%ud rc:%d",
+            ngx_log_debug6(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
+                           "%s out: ni:%p no:%p ai:%ud ao:%ud rc:%d",
+                           method,
                            ctx->zstream.next_in, ctx->zstream.next_out,
                            ctx->zstream.avail_in, ctx->zstream.avail_out,
                            rc);
 
-            ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
-                           "gzip in_buf:%p pos:%p",
-                           ctx->in_buf, ctx->in_buf->pos);
+            ngx_log_debug3(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
+                           "%s in_buf:%p pos:%p",
+                           method, ctx->in_buf, ctx->in_buf->pos);
 
 
             if (ctx->zstream.next_in) {
@@ -505,58 +551,52 @@ ngx_http_gzip_body_filter(ngx_http_request_t *r, ngx_chain_t *in)
 
             ctx->out_buf->last = ctx->zstream.next_out;
 
-            if (ctx->zstream.avail_out == 0) {
-
-                /* zlib wants to output some more gzipped data */
-
-                cl = ngx_alloc_chain_link(r->pool);
-                if (cl == NULL) {
-                    ngx_http_gzip_error(ctx);
-                    return NGX_ERROR;
-                }
-
-                cl->buf = ctx->out_buf;
-                cl->next = NULL;
-                *ctx->last_out = cl;
-                ctx->last_out = &cl->next;
-
-                ctx->redo = 1;
-
-                continue;
-            }
-
             ctx->redo = 0;
 
             if (ctx->flush == Z_SYNC_FLUSH) {
 
-                ctx->zstream.avail_out = 0;
                 ctx->out_buf->flush = 1;
                 ctx->flush = Z_NO_FLUSH;
 
-                cl = ngx_alloc_chain_link(r->pool);
-                if (cl == NULL) {
-                    ngx_http_gzip_error(ctx);
-                    return NGX_ERROR;
-                }
+                /*
+                 * On decompression there might be not enough input
+                 * data to produce any output data.
+                 */
+                if (ctx->out_buf->last > ctx->out_buf->pos) {
 
-                cl->buf = ctx->out_buf;
-                cl->next = NULL;
-                *ctx->last_out = cl;
-                ctx->last_out = &cl->next;
+                    ctx->zstream.avail_out = 0;
 
-                break;
+                    cl = ngx_alloc_chain_link(r->pool);
+                    if (cl == NULL) {
+                        ngx_http_gzip_error(ctx);
+                        return NGX_ERROR;
+                    }
+
+                    cl->buf = ctx->out_buf;
+                    cl->next = NULL;
+                    *ctx->last_out = cl;
+                    ctx->last_out = &cl->next;
+
+                    break;
+                }
             }
 
             if (rc == Z_STREAM_END) {
 
                 ctx->zin = ctx->zstream.total_in;
-                ctx->zout = 10 + ctx->zstream.total_out + 8;
+                if (!r->gunzip) {
+                    ctx->zout = 10 + ctx->zstream.total_out + 8;
 
-                rc = deflateEnd(&ctx->zstream);
+                    rc = deflateEnd(&ctx->zstream);
+                } else {
+                    ctx->zout = ctx->zstream.total_out;
+
+                    rc = inflateEnd(&ctx->zstream);
+                }
 
                 if (rc != Z_OK) {
                     ngx_log_error(NGX_LOG_ALERT, r->connection->log, 0,
-                                  "deflateEnd() failed: %d", rc);
+                                  "%sEnd() failed: %d", method, rc);
                     ngx_http_gzip_error(ctx);
                     return NGX_ERROR;
                 }
@@ -569,55 +609,70 @@ ngx_http_gzip_body_filter(ngx_http_request_t *r, ngx_chain_t *in)
                     return NGX_ERROR;
                 }
 
-                cl->buf = ctx->out_buf;
-                cl->next = NULL;
-                *ctx->last_out = cl;
-                ctx->last_out = &cl->next;
-
-                if (ctx->zstream.avail_out >= 8) {
-                    trailer = (struct gztrailer *) ctx->out_buf->last;
-                    ctx->out_buf->last += 8;
-                    ctx->out_buf->last_buf = 1;
-
-                } else {
-                    b = ngx_create_temp_buf(r->pool, 8);
-                    if (b == NULL) {
-                        ngx_http_gzip_error(ctx);
-                        return NGX_ERROR;
-                    }
-
-                    b->last_buf = 1;
-
-                    cl = ngx_alloc_chain_link(r->pool);
-                    if (cl == NULL) {
-                        ngx_http_gzip_error(ctx);
-                        return NGX_ERROR;
-                    }
-
-                    cl->buf = b;
+                /*
+                 * On decompression we could already output everything
+                 * under (ctx->flush == Z_SYNC_FLUSH), so we test here
+                 * that the buffer is not empty, or Transfer-Encoding:
+                 * is chunked.  In the latter case we output empty
+                 * buffer to get last zero length chunk.
+                 */
+                if (!r->gunzip || ctx->out_buf->last > ctx->out_buf->pos
+                    || r->chunked) {
+                    cl->buf = ctx->out_buf;
                     cl->next = NULL;
                     *ctx->last_out = cl;
                     ctx->last_out = &cl->next;
-                    trailer = (struct gztrailer *) b->pos;
-                    b->last += 8;
                 }
 
+                if (!r->gunzip) {
+                    if (ctx->zstream.avail_out >= 8) {
+                        trailer = (struct gztrailer *) ctx->out_buf->last;
+                        ctx->out_buf->last += 8;
+                        ctx->out_buf->last_buf = 1;
+
+                    } else {
+                        b = ngx_create_temp_buf(r->pool, 8);
+                        if (b == NULL) {
+                            ngx_http_gzip_error(ctx);
+                            return NGX_ERROR;
+                        }
+
+                        b->last_buf = 1;
+
+                        cl = ngx_alloc_chain_link(r->pool);
+                        if (cl == NULL) {
+                            ngx_http_gzip_error(ctx);
+                            return NGX_ERROR;
+                        }
+
+                        cl->buf = b;
+                        cl->next = NULL;
+                        *ctx->last_out = cl;
+                        ctx->last_out = &cl->next;
+                        trailer = (struct gztrailer *) b->pos;
+                        b->last += 8;
+                    }
+
 #if (NGX_HAVE_LITTLE_ENDIAN && NGX_HAVE_NONALIGNED)
 
-                trailer->crc32 = ctx->crc32;
-                trailer->zlen = ctx->zin;
+                    trailer->crc32 = ctx->crc32;
+                    trailer->zlen = ctx->zin;
 
 #else
-                trailer->crc32[0] = (u_char) (ctx->crc32 & 0xff);
-                trailer->crc32[1] = (u_char) ((ctx->crc32 >> 8) & 0xff);
-                trailer->crc32[2] = (u_char) ((ctx->crc32 >> 16) & 0xff);
-                trailer->crc32[3] = (u_char) ((ctx->crc32 >> 24) & 0xff);
-
-                trailer->zlen[0] = (u_char) (ctx->zin & 0xff);
-                trailer->zlen[1] = (u_char) ((ctx->zin >> 8) & 0xff);
-                trailer->zlen[2] = (u_char) ((ctx->zin >> 16) & 0xff);
-                trailer->zlen[3] = (u_char) ((ctx->zin >> 24) & 0xff);
+                    trailer->crc32[0] = (u_char) (ctx->crc32 & 0xff);
+                    trailer->crc32[1] = (u_char) ((ctx->crc32 >> 8) & 0xff);
+                    trailer->crc32[2] = (u_char) ((ctx->crc32 >> 16) & 0xff);
+                    trailer->crc32[3] = (u_char) ((ctx->crc32 >> 24) & 0xff);
+
+                    trailer->zlen[0] = (u_char) (ctx->zin & 0xff);
+                    trailer->zlen[1] = (u_char) ((ctx->zin >> 8) & 0xff);
+                    trailer->zlen[2] = (u_char) ((ctx->zin >> 16) & 0xff);
+                    trailer->zlen[3] = (u_char) ((ctx->zin >> 24) & 0xff);
 #endif
+                } else {
+                    ctx->out_buf->last_buf = 1;
+                    r->gunzip = 0;
+                }
 
                 ctx->zstream.avail_in = 0;
                 ctx->zstream.avail_out = 0;
@@ -629,6 +684,26 @@ ngx_http_gzip_body_filter(ngx_http_request_t *r, ngx_chain_t *in)
                 break;
             }
 
+            if (ctx->zstream.avail_out == 0) {
+
+                /* zlib wants to output some more gzipped data */
+
+                cl = ngx_alloc_chain_link(r->pool);
+                if (cl == NULL) {
+                    ngx_http_gzip_error(ctx);
+                    return NGX_ERROR;
+                }
+
+                cl->buf = ctx->out_buf;
+                cl->next = NULL;
+                *ctx->last_out = cl;
+                ctx->last_out = &cl->next;
+
+                ctx->redo = 1;
+
+                continue;
+            }
+
             if (conf->no_buffer && ctx->in == NULL) {
 
                 cl = ngx_alloc_chain_link(r->pool);
@@ -737,7 +812,12 @@ ngx_http_gzip_filter_free(void *opaque, void *address)
 static void
 ngx_http_gzip_error(ngx_http_gzip_ctx_t *ctx)
 {
-    deflateEnd(&ctx->zstream);
+    if (! ctx->request->gunzip) {
+        deflateEnd(&ctx->zstream);
+    } else {
+        ctx->request->gunzip = 0;
+        inflateEnd(&ctx->zstream);
+    }
 
     if (ctx->preallocated) {
         ngx_pfree(ctx->request->pool, ctx->preallocated);
diff --git a/server/src/http/ngx_http_core_module.c b/server/src/http/ngx_http_core_module.c
index 20aba86..a7d8186 100644
--- a/server/src/http/ngx_http_core_module.c
+++ b/server/src/http/ngx_http_core_module.c
@@ -603,6 +603,14 @@ static ngx_command_t  ngx_http_core_commands[] = {
       0,
       NULL },
 
+    { ngx_string("gunzip"),
+      NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_HTTP_LIF_CONF
+                        |NGX_CONF_FLAG,
+      ngx_conf_set_flag_slot,
+      NGX_HTTP_LOC_CONF_OFFSET,
+      offsetof(ngx_http_core_loc_conf_t, gunzip),
+      NULL },
+
 #endif
 
       ngx_null_command
@@ -701,6 +709,7 @@ ngx_http_handler(ngx_http_request_t *r)
 
     r->valid_location = 1;
     r->gzip = 0;
+    r->gunzip = 0;
 
     r->write_event_handler = ngx_http_core_run_phases;
     ngx_http_core_run_phases(r);
@@ -2617,6 +2626,7 @@ ngx_http_core_create_loc_conf(ngx_conf_t *cf)
 #if (NGX_HTTP_GZIP)
     lcf->gzip_vary = NGX_CONF_UNSET;
     lcf->gzip_http_version = NGX_CONF_UNSET_UINT;
+    lcf->gunzip = NGX_CONF_UNSET;
 #if (NGX_PCRE)
     lcf->gzip_disable = NGX_CONF_UNSET_PTR;
 #endif
@@ -2845,6 +2855,7 @@ ngx_http_core_merge_loc_conf(ngx_conf_t *cf, void *parent, void *child)
 #if (NGX_HTTP_GZIP)
 
     ngx_conf_merge_value(conf->gzip_vary, prev->gzip_vary, 0);
+    ngx_conf_merge_value(conf->gunzip, prev->gunzip, 0);
     ngx_conf_merge_uint_value(conf->gzip_http_version, prev->gzip_http_version,
                               NGX_HTTP_VERSION_11);
     ngx_conf_merge_bitmask_value(conf->gzip_proxied, prev->gzip_proxied,
diff --git a/server/src/http/ngx_http_core_module.h b/server/src/http/ngx_http_core_module.h
index 85d96b0..eea7eb7 100644
--- a/server/src/http/ngx_http_core_module.h
+++ b/server/src/http/ngx_http_core_module.h
@@ -304,6 +304,7 @@ struct ngx_http_core_loc_conf_s {
 
     ngx_uint_t    gzip_http_version;       /* gzip_http_version */
     ngx_uint_t    gzip_proxied;            /* gzip_proxied */
+    ngx_flag_t    gunzip;
 
 #if (NGX_PCRE)
     ngx_array_t  *gzip_disable;            /* gzip_disable */
diff --git a/server/src/http/ngx_http_request.h b/server/src/http/ngx_http_request.h
index be22db6..50a4bf3 100644
--- a/server/src/http/ngx_http_request.h
+++ b/server/src/http/ngx_http_request.h
@@ -429,6 +429,7 @@ struct ngx_http_request_s {
     unsigned                          subrequest_in_memory:1;
 
     unsigned                          gzip:2;
+    unsigned                          gunzip:1;
 
     unsigned                          proxy:1;
     unsigned                          bypass_cache:1;
-- 
1.5.5.1.116.ge4b9c

From e8cb3e0af8cd9e9621d3307aaab000cf5d7ee879 Mon Sep 17 00:00:00 2001
From: Tomash Brechko <tomash.brechko@xxxxxxxxx>
Date: Sat, 19 Jul 2008 11:30:58 +0400
Subject: [PATCH] Add memcached_gzip_flag config parameter and memcached_namespace config variable.

Note that the 'if (clcf->gzip_vary)' part may be obsolete; it seems
the corresponding code has been removed from the gzip module.
---
 .../src/http/modules/ngx_http_memcached_module.c   |  128 +++++++++++++++++--
 server/src/http/ngx_http_variables.c               |   15 +++
 2 files changed, 129 insertions(+), 14 deletions(-)

diff --git a/server/src/http/modules/ngx_http_memcached_module.c b/server/src/http/modules/ngx_http_memcached_module.c
index 64592f3..3e3289e 100644
--- a/server/src/http/modules/ngx_http_memcached_module.c
+++ b/server/src/http/modules/ngx_http_memcached_module.c
@@ -13,6 +13,8 @@
 typedef struct {
     ngx_http_upstream_conf_t   upstream;
     ngx_int_t                  index;
+    ngx_uint_t                 gzip_flag;
+    ngx_int_t                  ns_index;
 } ngx_http_memcached_loc_conf_t;
 
 
@@ -99,6 +101,14 @@ static ngx_command_t  ngx_http_memcached_commands[] = {
       offsetof(ngx_http_memcached_loc_conf_t, upstream.next_upstream),
       &ngx_http_memcached_next_upstream_masks },
 
+    { ngx_string("memcached_gzip_flag"),
+      NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_HTTP_LIF_CONF
+                        |NGX_CONF_TAKE1,
+      ngx_conf_set_num_slot,
+      NGX_HTTP_LOC_CONF_OFFSET,
+      offsetof(ngx_http_memcached_loc_conf_t, gzip_flag),
+      NULL },
+
     { ngx_string("memcached_upstream_max_fails"),
       NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
       ngx_http_memcached_upstream_max_fails_unsupported,
@@ -149,6 +159,7 @@ ngx_module_t  ngx_http_memcached_module = {
 
 
 static ngx_str_t  ngx_http_memcached_key = ngx_string("memcached_key");
+static ngx_str_t  ngx_http_memcached_ns = ngx_string("memcached_namespace");
 
 
 #define NGX_HTTP_MEMCACHED_END   (sizeof(ngx_http_memcached_end) - 1)
@@ -228,11 +239,11 @@ static ngx_int_t
 ngx_http_memcached_create_request(ngx_http_request_t *r)
 {
     size_t                          len;
-    uintptr_t                       escape;
+    uintptr_t                       escape, ns_escape = 0;
     ngx_buf_t                      *b;
     ngx_chain_t                    *cl;
     ngx_http_memcached_ctx_t       *ctx;
-    ngx_http_variable_value_t      *vv;
+    ngx_http_variable_value_t      *vv, *ns_vv;
     ngx_http_memcached_loc_conf_t  *mlcf;
 
     mlcf = ngx_http_get_module_loc_conf(r, ngx_http_memcached_module);
@@ -247,7 +258,15 @@ ngx_http_memcached_create_request(ngx_http_request_t *r)
 
     escape = 2 * ngx_escape_uri(NULL, vv->data, vv->len, NGX_ESCAPE_MEMCACHED);
 
-    len = sizeof("get ") - 1 + vv->len + escape + sizeof(CRLF) - 1;
+    ns_vv = ngx_http_get_indexed_variable(r, mlcf->ns_index);
+
+    if (ns_vv != NULL && !ns_vv->not_found && ns_vv->len != 0) {
+        ns_escape = 2 * ngx_escape_uri(NULL, ns_vv->data,
+                                       ns_vv->len, NGX_ESCAPE_MEMCACHED);
+    }
+
+    len = sizeof("get ") - 1 + ns_vv->len + ns_escape
+                             + vv->len + escape + sizeof(CRLF) - 1;
 
     b = ngx_create_temp_buf(r->pool, len);
     if (b == NULL) {
@@ -270,6 +289,16 @@ ngx_http_memcached_create_request(ngx_http_request_t *r)
 
     ctx->key.data = b->last;
 
+    if (ns_vv != NULL && !ns_vv->not_found && ns_vv->len != 0) {
+        if (ns_escape == 0) {
+            b->last = ngx_copy(b->last, ns_vv->data, ns_vv->len);
+        } else {
+            b->last = (u_char *) ngx_escape_uri(b->last, ns_vv->data,
+                                                ns_vv->len,
+                                                NGX_ESCAPE_MEMCACHED);
+        }
+    }
+
     if (escape == 0) {
         b->last = ngx_copy(b->last, vv->data, vv->len);
 
@@ -299,10 +328,14 @@ ngx_http_memcached_reinit_request(ngx_http_request_t *r)
 static ngx_int_t
 ngx_http_memcached_process_header(ngx_http_request_t *r)
 {
-    u_char                    *p, *len;
+    u_char                    *p, *beg;
     ngx_str_t                  line;
     ngx_http_upstream_t       *u;
     ngx_http_memcached_ctx_t  *ctx;
+    ngx_http_memcached_loc_conf_t  *mlcf;
+    uint32_t                   flags;
+    ngx_table_elt_t           *h;
+    ngx_http_core_loc_conf_t  *clcf;
 
     u = r->upstream;
 
@@ -347,23 +380,21 @@ found:
             goto no_valid;
         }
 
-        /* skip flags */
+        beg = p;
 
-        while (*p) {
-            if (*p++ == ' ') {
-                goto length;
-            }
-        }
+        while (*p && *p++ != ' ') { /* void */ }
 
-        goto no_valid;
+        if (! *p) {
+            goto no_valid;
+        }
 
-    length:
+        flags = ngx_atoof(beg, p - beg - 1);
 
-        len = p;
+        beg = p;
 
         while (*p && *p++ != CR) { /* void */ }
 
-        r->headers_out.content_length_n = ngx_atoof(len, p - len - 1);
+        r->headers_out.content_length_n = ngx_atoof(beg, p - beg - 1);
         if (r->headers_out.content_length_n == -1) {
             ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,
                           "memcached sent invalid length in response \"%V\" "
@@ -372,6 +403,59 @@ found:
             return NGX_HTTP_UPSTREAM_INVALID_HEADER;
         }
 
+        mlcf = ngx_http_get_module_loc_conf(r, ngx_http_memcached_module);
+
+        if (flags & mlcf->gzip_flag) {
+#if (NGX_HTTP_GZIP)
+            clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module);
+
+            if (ngx_http_gzip_ok(r) == NGX_OK) {
+                h = ngx_list_push(&r->headers_out.headers);
+                if (h == NULL) {
+                    return NGX_ERROR;
+                }
+
+                h->hash = 1;
+                h->key.len = sizeof("Content-Encoding") - 1;
+                h->key.data = (u_char *) "Content-Encoding";
+                h->value.len = sizeof("gzip") - 1;
+                h->value.data = (u_char *) "gzip";
+
+                r->headers_out.content_encoding = h;
+
+                if (clcf->gzip_vary) {
+                    h = ngx_list_push(&r->headers_out.headers);
+                    if (h == NULL) {
+                        return NGX_ERROR;
+                    }
+
+                    h->hash = 1;
+                    h->key.len = sizeof("Vary") - 1;
+                    h->key.data = (u_char *) "Vary";
+                    h->value.len = sizeof("Accept-Encoding") - 1;
+                    h->value.data = (u_char *) "Accept-Encoding";
+                }
+            } else {
+                if (clcf->gunzip) {
+                    r->gunzip = 1;
+                } else {
+#endif
+                    /*
+                     * If the client can't accept compressed data, and
+                     * automatic decompression is not enabled, we
+                     * return 404 in the hope that the next upstream
+                     * will return uncompressed data.
+                     */
+                    u->headers_in.status_n = 404;
+                    u->state->status = 404;
+
+                    return NGX_OK;
+#if (NGX_HTTP_GZIP)
+                }
+            }
+#endif
+        }
+
         u->headers_in.status_n = 200;
         u->state->status = 200;
         u->buffer.pos = p + 1;
@@ -533,6 +617,8 @@ ngx_http_memcached_create_loc_conf(ngx_conf_t *cf)
 
     conf->upstream.buffer_size = NGX_CONF_UNSET_SIZE;
 
+    conf->gzip_flag = NGX_CONF_UNSET_UINT;
+
     /* the hardcoded values */
     conf->upstream.cyclic_temp_file = 0;
     conf->upstream.buffering = 0;
@@ -548,6 +634,7 @@ ngx_http_memcached_create_loc_conf(ngx_conf_t *cf)
     conf->upstream.pass_request_body = 0;
 
     conf->index = NGX_CONF_UNSET;
+    conf->ns_index = NGX_CONF_UNSET;
 
     return conf;
 }
@@ -578,6 +665,9 @@ ngx_http_memcached_merge_loc_conf(ngx_conf_t *cf, void *parent, void *child)
                                |NGX_HTTP_UPSTREAM_FT_ERROR
                                |NGX_HTTP_UPSTREAM_FT_TIMEOUT));
 
+    ngx_conf_merge_uint_value(conf->gzip_flag,
+                              prev->gzip_flag, 0);
+
     if (conf->upstream.next_upstream & NGX_HTTP_UPSTREAM_FT_OFF) {
         conf->upstream.next_upstream = NGX_CONF_BITMASK_SET
                                        |NGX_HTTP_UPSTREAM_FT_OFF;
@@ -592,6 +682,10 @@ ngx_http_memcached_merge_loc_conf(ngx_conf_t *cf, void *parent, void *child)
         conf->index = prev->index;
     }
 
+    if (conf->ns_index == NGX_CONF_UNSET) {
+        conf->ns_index = prev->ns_index;
+    }
+
     return NGX_CONF_OK;
 }
 
@@ -638,6 +732,12 @@ ngx_http_memcached_pass(ngx_conf_t *cf, ngx_command_t *cmd, void *conf)
         return NGX_CONF_ERROR;
     }
 
+    lcf->ns_index = ngx_http_get_variable_index(cf, &ngx_http_memcached_ns);
+
+    if (lcf->ns_index == NGX_ERROR) {
+        return NGX_CONF_ERROR;
+    }
+
     return NGX_CONF_OK;
 }
 
diff --git a/server/src/http/ngx_http_variables.c b/server/src/http/ngx_http_variables.c
index e014170..f561a27 100644
--- a/server/src/http/ngx_http_variables.c
+++ b/server/src/http/ngx_http_variables.c
@@ -1335,6 +1335,15 @@ ngx_http_variables_add_core_vars(ngx_conf_t *cf)
 }
 
 
+static ngx_int_t
+ngx_http_optional_variable(ngx_http_request_t *r,
+    ngx_http_variable_value_t *v, uintptr_t data)
+{
+    *v = ngx_http_variable_null_value;
+    return NGX_OK;
+}
+
+
 ngx_int_t
 ngx_http_variables_init_vars(ngx_conf_t *cf)
 {
@@ -1396,6 +1405,12 @@ ngx_http_variables_init_vars(ngx_conf_t *cf)
             continue;
         }
 
+        if (ngx_strncmp(v[i].name.data, "memcached_namespace", 19) == 0) {
+            v[i].get_handler = ngx_http_optional_variable;
+
+            continue;
+        }
+
         ngx_log_error(NGX_LOG_EMERG, cf->log, 0,
                       "unknown \"%V\" variable", &v[i].name);
 
-- 
1.5.5.1.116.ge4b9c
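
A minimal sketch of the two knobs this patch adds, assuming the storing
client marks compressed values with flag bit 2 and uses the namespace
"prefix" (both values are examples and must match the client's settings):

  location / {
      set                  $memcached_key        "$uri";
      set                  $memcached_namespace  "prefix";
      memcached_gzip_flag  2;   # must match the compression flag bit of the storing client
      gunzip               on;  # optional, from the previous patch: decompress for non-gzip clients
      memcached_pass       127.0.0.1:11211;
  }

Without gunzip, requests from clients that do not accept gzip fall
through to a 404, as described in the code comment above.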

From 2beea92c9d87c4ff1f9189470d3338d0173f21b0 Mon Sep 17 00:00:00 2001
From: Tomash Brechko <tomash.brechko@xxxxxxxxx>
Date: Sat, 19 Jul 2008 11:32:16 +0400
Subject: [PATCH] Add memcached_hash module.

---
 memcached_hash/Changes                             |   35 ++
 memcached_hash/README                              |  216 ++++++++
 memcached_hash/config                              |    6 +
 .../ngx_http_upstream_memcached_hash_module.c      |  548 ++++++++++++++++++++
 server/src/core/ngx_string.h                       |    1 +
 server/src/http/ngx_http_upstream.c                |    1 +
 server/src/http/ngx_http_upstream.h                |    1 +
 7 files changed, 808 insertions(+), 0 deletions(-)
 create mode 100644 memcached_hash/Changes
 create mode 100644 memcached_hash/README
 create mode 100644 memcached_hash/config
 create mode 100644 memcached_hash/ngx_http_upstream_memcached_hash_module.c

diff --git a/memcached_hash/Changes b/memcached_hash/Changes
new file mode 100644
index 0000000..579cda4
--- /dev/null
+++ b/memcached_hash/Changes
@@ -0,0 +1,35 @@
+Revision history of ngx_http_upstream_memcached_hash_module.
+
+0.03  2008-05-01
+        - bugfix release.
+
+        Fix key distribution bug in compatible mode.  Because of
+        accumulated rounding error some keys were mapped to a
+        different server than with Cache::Memcached.
+
+
+0.02  2008-02-19
+        - add support for $memcached_namespace variable.
+
+        If Cache::Memcached::Fast uses
+
+           namespace => 'prefix',
+
+        then nginx configuration file should have
+
+          set $memcached_namespace "prefix";
+
+        This is not the same as prepending "prefix" to $memcached_key:
+        namespace prefix should not be hashed.
+
+
+0.01  2008-01-27
+        - first official release.
+
+        The hashing is fully compatible with Cache::Memcached::Fast
+        and Cache::Memcached, and thus with any other client that is
+        compatible with C::M.
+
+
+0.00  2007-12-24
+        - development started.
diff --git a/memcached_hash/README b/memcached_hash/README
new file mode 100644
index 0000000..13266dc
--- /dev/null
+++ b/memcached_hash/README
@@ -0,0 +1,216 @@
+ngx_http_upstream_memcached_hash_module 0.03
+============================================
+
+This module is a load balancer for nginx server that is meant to be
+used together with Cache::Memcached::Fast Perl module (or its ancestor
+Cache::Memcached).  It distributes requests among several memcached
+servers in a way consistent with the named Perl modules.  I.e. unlike
+other load balancers that try the servers one after another, this
+module calculates a certain hash function of the request URI, and then
+delegates the request to the "right" memcached server, the same that
+would be used by the Perl module.  This enables the setup where the
+data is uploaded to memcached servers by the Perl script (possibly a
+CGI script), and then served by nginx from there.
+
+
+INSTALLATION
+
+The latest release and Git repository are available from
+
+  http://openhack.ru/nginx-patched
+
+If you install both this module and nginx 0.5.x from the
+memcached_hash branch of the Git repo, you don't have to apply any
+patches as described below.
+
+
+All nginx modules are meant to be statically compiled into nginx
+server, so first you have to obtain nginx server source code:
+
+  http://nginx.net/
+
+Unpack the server and module archives into some temporary location, and cd
+to _server_ directory.  From there do
+
+  cat /path/to/ngx_http_upstream_memcached_hash_module/nginx-patches/* \
+      | patch -N -p2
+
+This will apply minor patches to the server code.  The patches do not
+change nginx functionality, they only add some utility functions that
+are used in the module.  The patches should apply cleanly against
+nginx 0.5 stable branch (fuzz shift is OK).  One of the patches is
+actually a backport of some functionality from 0.6, so when applied to
+0.6 development branch the following warning will be given:
+
+  Reversed (or previously applied) patch detected!  Skipping patch.
+
+This is OK for files src/core/nginx.c, src/core/ngx_crc32.c,
+src/core/ngx_crc32.h, and this is what -N argument above is for.
+
+After applying the patches do
+
+  ./configure \
+      --add-module=/path/to/ngx_http_upstream_memcached_hash_module/
+
+If you already have nginx installed, you'd probably want to also
+specify the configure parameters you have used before.  To see what
+arguments have been given to configure for currently installed nginx
+server, run 'nginx -V'.  At last, do
+
+  make
+  make install
+
+This will install nginx server.  Do not forget to restart the server
+if it was running, but before starting the new one you will have to
+update the configuration file.  See the next section.
+
+
+USAGE
+
+As a quick start, you'll add something like
+
+  upstream memcached_cluster {
+      memcached_hash  ketama_points=150  weight_scale=10;
+
+      server  backend1.example.com:11211  weight=15;
+      server  127.0.0.1:11211  weight=10  max_fails=3  fail_timeout=30s;
+
+      server  unix:/tmp/memcached.sock  weight=10  down;
+  }
+
+into the nginx server configuration file, and then you'll use
+memcached_cluster in memcached_pass directive:
+
+  server {
+      location / {
+          set             $memcached_key   "$uri$is_args$args";
+          set             $memcached_namespace   "prefix";
+          memcached_pass  memcached_cluster;
+          error_page      404 502 504 = @fallback;
+      }
+
+      location @fallback {
+          proxy_pass      http://backend;
+      }
+  }
+
+
+In the upstream block the essential directive is memcached_hash.
+There are two different hashing modes: basic and Ketama.  Basic mode
+is compatible with both Cache::Memcached::Fast and Cache::Memcached.
+It is enabled by specifying memcached_hash without any parameters,
+i.e.
+
+  upstream memcached_cluster {
+      memcached_hash;
+      ...
+  }
+
+In this mode you specify the same servers and their weights as you did
+in the Perl script, _in the same order_.  For instance, if the script
+has
+
+  use Cache::Memcached::Fast;
+
+  my $memd = new Cache::Memcached::Fast({
+      servers => [ { address => 'localhost:11211', weight => 2 },
+                   '192.168.254.2:11211',
+                   { address => '/path/to/unix.sock', weight => 4 } ],
+      ...
+  });
+
+in nginx configuration file you'd write
+
+  upstream memcached_cluster {
+      memcached_hash;
+
+      server localhost:11211 weight=2;
+      server 192.168.254.2:11211;
+      server unix:/path/to/unix.sock weight=4;
+  }
+
+Note that the server order is the same, and weight=1 is the default in
+both configurations.
+
+Ketama mode uses the Ketama consistent hashing algorithm that is
+compatible with Cache::Memcached::Fast (see Perl module documentation
+for further explanation and references).  It is enabled by specifying
+positive ketama_points argument to memcached_hash, and possibly
+weight_scale, since nginx's weights are always integer.  For instance,
+if you have this in your Perl script
+
+  use Cache::Memcached::Fast;
+
+  my $memd = new Cache::Memcached::Fast({
+      servers => [ { address => 'localhost:11211', weight => 2.5 },
+                   '192.168.254.2:11211',
+                   { address => '/path/to/unix.sock', weight => 4 } ],
+      ketama_points => 150,
+      ...
+  });
+
+(note the rational server weight of 2.5, and ketama_points), your
+nginx configuration will have
+
+  upstream memcached_cluster {
+      memcached_hash ketama_points=150 weight_scale=10;
+
+      server localhost:11211 weight=25;
+      server 192.168.254.2:11211 weight=10;
+      server unix:/path/to/unix.sock weight=40;
+  }
+
+Note that 192.168.254.2:11211 has the default weight of 1, but since
+we scale all weights to 10, we have to explicitly specify weight=10.
+
+You may actually use rational server weights without enabling the
+Ketama algorithm by using weight_scale while omitting ketama_points
+(or setting it to zero) in both Cache::Memcached::Fast constructor and
+nginx configuration file.  As of this writing Cache::Memcached
+supports only integer weights and does not support consistent hashing.
+
+If the client uses a namespace, i.e. constructor has
+
+  namespace => 'prefix',
+
+then you have to set $memcached_namespace variable in nginx
+configuration file:
+
+  set  $memcached_namespace  "prefix";
+
+Note that this is not the same as prepending prefix to $memcached_key:
+namespace prefix is not hashed when the key is hashed to decide which
+memcached server to talk to.
+
+Also note that nginx escapes a URI key before sending the request to
+memcached.  As of this writing the transformation is equivalent to the
+Perl code
+
+  use bytes;
+  $uri =~ s/[\x00-\x1f %]/ sprintf "%%%02x", ord $& /ge;
+
+I.e. percent sign ('%'), space (' '), and control characters with the
+codes in the range 0x00--0x1f are replaced with percent sign and two
+digit hexadecimal character code (with a-f in lowercase).  You have to
+escape URI keys the same way before uploading the data to memcached
+server, otherwise nginx won't find them.
+
+
+SUPPORT
+
+http://openhack.ru/nginx-patched - project home.
+
+Send bug reports and feature requests to <tomash.brechko@xxxxxxxxx>.
+
+
+ACKNOWLEDGEMENTS
+
+Development of this module is sponsored by Monashev Co. Ltd.
+
+
+COPYRIGHT AND LICENCE
+
+Copyright (C) 2007-2008 Tomash Brechko.  All rights reserved.
+
+This module is distributed on the same terms as the rest of nginx
+source code.
diff --git a/memcached_hash/config b/memcached_hash/config
new file mode 100644
index 0000000..b823f17
--- /dev/null
+++ b/memcached_hash/config
@@ -0,0 +1,6 @@
+# ngx_http_upstream_memcached_hash_module config.
+
+ngx_addon_name=ngx_http_upstream_memcached_hash_module
+HTTP_MODULES="$HTTP_MODULES ngx_http_upstream_memcached_hash_module"
+NGX_ADDON_SRCS="$NGX_ADDON_SRCS \
+  $ngx_addon_dir/ngx_http_upstream_memcached_hash_module.c"
diff --git a/memcached_hash/ngx_http_upstream_memcached_hash_module.c b/memcached_hash/ngx_http_upstream_memcached_hash_module.c
new file mode 100644
index 0000000..e0d5a5f
--- /dev/null
+++ b/memcached_hash/ngx_http_upstream_memcached_hash_module.c
@@ -0,0 +1,548 @@
+/*
+  Copyright (C) 2007-2008 Tomash Brechko.  All rights reserved.
+
+  Development of this module was sponsored by Monashev Co. Ltd.
+
+  This file is distributed on the same terms as the rest of nginx
+  source code.
+
+  Version 0.03.
+*/
+
+#include <ngx_config.h>
+#include <ngx_core.h>
+#include <ngx_http.h>
+
+
+#define CONTINUUM_MAX_POINT  0xffffffffU
+
+
+static ngx_str_t memcached_ns = ngx_string("memcached_namespace");
+
+
+struct memcached_hash_continuum
+{
+  unsigned int point;
+  unsigned int index;
+};
+
+
+struct memcached_hash_peer
+{
+  ngx_http_upstream_server_t *server;
+  unsigned int addr_index;
+  time_t accessed;
+  unsigned int fails;
+};
+
+
+struct memcached_hash
+{
+  struct memcached_hash_continuum *buckets;
+  struct memcached_hash_peer *peers;
+  unsigned int buckets_count;
+  unsigned int peer_count;
+  unsigned int total_weight;
+  unsigned int ketama_points;
+  unsigned int scale;
+  ngx_int_t ns_index;
+};
+
+
+struct memcached_hash_find_ctx
+{
+  struct memcached_hash *memd;
+  ngx_http_upstream_server_t *server;
+  ngx_http_request_t *request;
+};
+
+
+static
+unsigned int
+memcached_hash_find_bucket(struct memcached_hash *memd, unsigned int point)
+{
+  struct memcached_hash_continuum *left, *right;
+
+  left = memd->buckets;
+  right = memd->buckets + memd->buckets_count;
+
+  while (left < right)
+    {
+      struct memcached_hash_continuum *middle = left + (right - left) / 2;
+      if (middle->point < point)
+        {
+          left = middle + 1;
+        }
+      else if (middle->point > point)
+        {
+          right = middle;
+        }
+      else
+        {
+          /* Find the first point for this value.  */
+          while (middle != memd->buckets && (middle - 1)->point == point)
+            --middle;
+
+          return (middle - memd->buckets);
+        }
+    }
+
+  /* Wrap around.  */
+  if (left == memd->buckets + memd->buckets_count)
+    left = memd->buckets;
+
+  return (left - memd->buckets);
+}
+
+
+static
+ngx_int_t
+memcached_hash_get_peer(ngx_peer_connection_t *pc, void *data)
+{
+  struct memcached_hash_peer *peer = data;
+  ngx_peer_addr_t *addr;
+
+  if (peer->server->down)
+    goto fail;
+
+  if (peer->server->max_fails > 0 && peer->fails >= peer->server->max_fails)
+    {
+      time_t now = ngx_time();
+      if (now - peer->accessed <= peer->server->fail_timeout)
+        goto fail;
+      else
+        peer->fails = 0;
+    }
+
+  addr = &peer->server->addrs[peer->addr_index];
+
+  pc->sockaddr = addr->sockaddr;
+  pc->socklen = addr->socklen;
+  pc->name = &addr->name;
+
+  return NGX_OK;
+
+fail:
+  /* This is the last try.  */
+  pc->tries = 1;
+
+  return NGX_BUSY;
+}
+
+
+static
+ngx_int_t
+memcached_hash_find_peer(ngx_peer_connection_t *pc, void *data)
+{
+  struct memcached_hash_find_ctx *find_ctx = data;
+  struct memcached_hash *memd = find_ctx->memd;
+  u_char *key;
+  size_t len;
+  unsigned int point, bucket, index;
+
+  if (memd->peer_count == 1)
+    {
+      index = 0;
+    }
+  else
+    {
+      ngx_chain_t *request_bufs = find_ctx->request->upstream->request_bufs;
+      ngx_http_variable_value_t *ns_vv =
+        ngx_http_get_indexed_variable(find_ctx->request, memd->ns_index);
+
+      /*
+        We take the key directly from request_buf, because there it is
+        in the escaped form that will be seen by memcached server.
+      */
+      key = request_bufs->buf->start + (sizeof("get ") - 1);
+      if (ns_vv && ! ns_vv->not_found && ns_vv->len != 0)
+        {
+          key += ns_vv->len + 2 * ngx_escape_uri(NULL, ns_vv->data, ns_vv->len,
+                                                 NGX_ESCAPE_MEMCACHED);
+        }
+        
+      len = request_bufs->buf->last - key - (sizeof("\r\n") - 1);
+
+      point = ngx_crc32_long(key, len);
+
+      if (memd->ketama_points == 0)
+        {
+          unsigned int scaled_total_weight =
+            (memd->total_weight + memd->scale / 2) / memd->scale;
+          point = ((point >> 16) & 0x00007fffU);
+          point = point % scaled_total_weight;
+          point = ((uint64_t) point * CONTINUUM_MAX_POINT
+                   + scaled_total_weight / 2) / scaled_total_weight;
+          /*
+            Shift point one step forward to possibly get from the
+            border point which belongs to the previous bucket.
+          */
+          point += 1;
+        }
+
+      bucket = memcached_hash_find_bucket(memd, point);
+      index = memd->buckets[bucket].index;
+    }
+
+  pc->data = &memd->peers[index];
+  pc->get = memcached_hash_get_peer;
+  pc->tries = find_ctx->server[index].naddrs;
+
+  return memcached_hash_get_peer(pc, pc->data);
+}
+
+
+static
+void
+memcached_hash_free_peer(ngx_peer_connection_t *pc, void *data,
+                         ngx_uint_t state)
+{
+  struct memcached_hash_peer *peer = data;
+
+  if (state & NGX_PEER_FAILED)
+    {
+      if (peer->server->max_fails > 0)
+        {
+          time_t now = ngx_time();
+          if (now - peer->accessed > peer->server->fail_timeout)
+            peer->fails = 0;
+          ++peer->fails;
+          if (peer->fails == 1 || peer->fails == peer->server->max_fails)
+            peer->accessed = ngx_time();
+        }
+
+      if (--pc->tries > 0)
+        {
+          if (++peer->addr_index == peer->server->naddrs)
+            peer->addr_index = 0;
+        }
+    }
+  else if (state & NGX_PEER_NEXT)
+    {
+      /*
+        If memcached gave a negative (NOT_FOUND) reply, there's no need
+        to try the same cache through a different address.
+      */
+      pc->tries = 0;
+    }
+}
+
+
+static
+ngx_int_t
+memcached_hash_init_peer(ngx_http_request_t *r,
+                         ngx_http_upstream_srv_conf_t *us)
+{
+  struct memcached_hash *memd = us->peer.data;
+  struct memcached_hash_find_ctx *find_ctx;
+
+  find_ctx = ngx_palloc(r->pool, sizeof(*find_ctx));
+  if (! find_ctx)
+    return NGX_ERROR;
+  find_ctx->memd = memd;
+  find_ctx->request = r;
+  find_ctx->server = us->servers->elts;
+
+  r->upstream->peer.free = memcached_hash_free_peer;
+
+  /*
+    The following values will be replaced by
+    memcached_hash_find_peer().
+  */
+  r->upstream->peer.get = memcached_hash_find_peer;
+  r->upstream->peer.data = find_ctx;
+  r->upstream->peer.tries = 1;
+
+  return NGX_OK;
+}
+
+
+static
+ngx_int_t
+memcached_init_hash(ngx_conf_t *cf, ngx_http_upstream_srv_conf_t *us)
+{
+  struct memcached_hash *memd = us->peer.data;
+  ngx_http_upstream_server_t *server;
+  unsigned int buckets_count, i;
+
+  if (! us->servers)
+    return NGX_ERROR;
+
+  server = us->servers->elts;
+
+  us->peer.init = memcached_hash_init_peer;
+
+  memd->peers = ngx_palloc(cf->pool,
+                           sizeof(*memd->peers) * us->servers->nelts);
+  if (! memd->peers)
+    return NGX_ERROR;
+
+  memd->total_weight = 0;
+
+  for (i = 0; i < us->servers->nelts; ++i)
+    {
+      memd->total_weight += server[i].weight;
+      ngx_memzero(&memd->peers[i], sizeof(memd->peers[i]));
+      memd->peers[i].server = &server[i];
+    }
+  memd->peer_count = us->servers->nelts;
+
+  if (memd->ketama_points == 0)
+    {
+      buckets_count = us->servers->nelts;
+    }
+  else
+    {
+      buckets_count = 0;
+      for (i = 0; i < us->servers->nelts; ++i)
+        buckets_count += (memd->ketama_points * server[i].weight
+                          + memd->scale / 2) / memd->scale;
+    }
+
+  memd->buckets = ngx_palloc(cf->pool, sizeof(*memd->buckets) * buckets_count);
+  if (! memd->buckets)
+    return NGX_ERROR;
+
+  if (memd->ketama_points == 0)
+    {
+      unsigned int total_weight = 0;
+      for (i = 0; i < us->servers->nelts; ++i)
+        {
+          unsigned int j;
+
+          total_weight += server[i].weight;
+          for (j = 0; j < i; ++j)
+            {
+              memd->buckets[j].point -=
+                (uint64_t) memd->buckets[j].point * server[i].weight
+                / total_weight;
+            }
+
+          memd->buckets[i].point = CONTINUUM_MAX_POINT;
+          memd->buckets[i].index = i;
+        }
+      memd->buckets_count = buckets_count;
+    }
+  else
+    {
+      memd->buckets_count = 0;
+      for (i = 0; i < us->servers->nelts; ++i)
+        {
+          static const char delim = '\0';
+          u_char *host, *port;
+          size_t len, port_len = 0;
+          unsigned int crc32, count, j;
+
+          host = server[i].name.data;
+          len = server[i].name.len;
+
+#if NGX_HAVE_UNIX_DOMAIN
+          if (ngx_strncasecmp(host, (u_char *) "unix:", 5) == 0)
+            {
+              host += 5;
+              len -= 5;
+            }
+#endif /* NGX_HAVE_UNIX_DOMAIN */
+
+          port = host;
+          while (*port)
+            {
+              if (*port++ == ':')
+                {
+                  port_len = len - (port - host);
+                  len = (port - host) - 1;
+                  break;
+                }
+            }
+
+          ngx_crc32_init(crc32);
+          ngx_crc32_update(&crc32, host, len);
+          ngx_crc32_update(&crc32, (u_char *) &delim, 1);
+          ngx_crc32_update(&crc32, port, port_len);
+
+          count = (memd->ketama_points * server[i].weight
+                   + memd->scale / 2) / memd->scale;
+          for (j = 0; j < count; ++j)
+            {
+              u_char buf[4];
+              unsigned int point = crc32, bucket;
+
+              /*
+                We want the same result on all platforms, so we
+                hardcode the size of int as four 8-bit bytes.
+              */
+              buf[0] = j & 0xff;
+              buf[1] = (j >> 8) & 0xff;
+              buf[2] = (j >> 16) & 0xff;
+              buf[3] = (j >> 24) & 0xff;
+
+              ngx_crc32_update(&point, buf, 4);
+              ngx_crc32_final(point);
+
+              if (memd->buckets_count > 0)
+                {
+                  bucket = memcached_hash_find_bucket(memd, point);
+
+                  /*
+                    Check if we wrapped around but actually have a
+                    new max point.
+                  */
+                  if (bucket == 0 && point > memd->buckets[0].point)
+                    {
+                      bucket = memd->buckets_count;
+                    }
+                  else
+                    {
+                      /*
+                        Even if there's a server for the same point
+                        already, we have to add ours, because the
+                        first one may be removed later.  But we add
+                        ours after the first server so as not to
+                        change the key distribution.
+                      */
+                      while (bucket != memd->buckets_count
+                             && memd->buckets[bucket].point == point)
+                        ++bucket;
+
+                      /* Move the tail one position forward.  */
+                      if (bucket != memd->buckets_count)
+                        {
+                          ngx_memmove(memd->buckets + bucket + 1,
+                                      memd->buckets + bucket,
+                                      (memd->buckets_count - bucket)
+                                      * sizeof(*memd->buckets));
+                        }
+                    }
+                }
+              else
+                {
+                  bucket = 0;
+                }
+
+              memd->buckets[bucket].point = point;
+              memd->buckets[bucket].index = i;
+
+              ++memd->buckets_count;
+            }
+        }
+    }
+
+  return NGX_OK;
+}
+
+
+static
+char *
+memcached_hash(ngx_conf_t *cf, ngx_command_t *cmd, void *conf)
+{
+  ngx_str_t *value = cf->args->elts;
+  ngx_http_upstream_srv_conf_t *uscf;
+  struct memcached_hash *memd;
+  int ketama_points, scale;
+  unsigned int i;
+
+  ketama_points = 0;
+  scale = 1;
+
+  uscf = ngx_http_conf_get_module_srv_conf(cf, ngx_http_upstream_module);
+
+  for (i = 1; i < cf->args->nelts; ++i)
+    {
+      if (ngx_strncmp(value[i].data, "ketama_points=", 14) == 0)
+        {
+          ketama_points = ngx_atoi(&value[i].data[14], value[i].len - 14);
+
+          if (ketama_points == NGX_ERROR || ketama_points < 0)
+            goto invalid;
+
+          continue;
+        }
+
+      if (ngx_strncmp(value[i].data, "weight_scale=", 13) == 0)
+        {
+          scale = ngx_atoi(&value[i].data[13], value[i].len - 13);
+
+          if (scale == NGX_ERROR || scale <= 0)
+            goto invalid;
+
+          continue;
+        }
+
+      goto invalid;
+    }
+
+  memd = ngx_palloc(cf->pool, sizeof(*memd));
+  if (! memd)
+    return "not enough memory";
+
+  memd->ketama_points = ketama_points;
+  memd->scale = scale;
+  memd->ns_index = ngx_http_get_variable_index(cf, &memcached_ns);
+
+  if (memd->ns_index == NGX_ERROR) {
+      return NGX_CONF_ERROR;
+  }
+
+  uscf->peer.data = memd;
+
+  uscf->peer.init_upstream = memcached_init_hash;
+
+  uscf->flags = (NGX_HTTP_UPSTREAM_CREATE
+                 | NGX_HTTP_UPSTREAM_WEIGHT
+                 | NGX_HTTP_UPSTREAM_MAX_FAILS
+                 | NGX_HTTP_UPSTREAM_FAIL_TIMEOUT
+                 | NGX_HTTP_UPSTREAM_DOWN);
+
+  return NGX_CONF_OK;
+
+invalid:
+  ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
+                     "invalid parameter \"%V\"", &value[i]);
+
+  return NGX_CONF_ERROR;
+}
+
+
+static ngx_command_t memcached_hash_commands[] = {
+  {
+    ngx_string("memcached_hash"),
+    NGX_HTTP_UPS_CONF | NGX_CONF_ANY, /* Should be 0|1|2 params.  */
+    memcached_hash,
+    0,
+    0,
+    NULL
+  },
+
+  ngx_null_command
+};
+
+
+static ngx_http_module_t memcached_hash_module_ctx = {
+  NULL,                         /* preconfiguration */
+  NULL,                         /* postconfiguration */
+
+  NULL,                         /* create main configuration */
+  NULL,                         /* init main configuration */
+
+  NULL,                         /* create server configuration */
+  NULL,                         /* merge server configuration */
+
+  NULL,                         /* create location configuration */
+  NULL                          /* merge location configuration */
+};
+
+
+ngx_module_t  ngx_http_upstream_memcached_hash_module = {
+  NGX_MODULE_V1,
+  &memcached_hash_module_ctx,   /* module context */
+  memcached_hash_commands,      /* module directives */
+  NGX_HTTP_MODULE,              /* module type */
+  NULL,                         /* init master */
+  NULL,                         /* init module */
+  NULL,                         /* init process */
+  NULL,                         /* init thread */
+  NULL,                         /* exit thread */
+  NULL,                         /* exit process */
+  NULL,                         /* exit master */
+  NGX_MODULE_V1_PADDING
+};
diff --git a/server/src/core/ngx_string.h b/server/src/core/ngx_string.h
index 3514e52..b78d5e7 100644
--- a/server/src/core/ngx_string.h
+++ b/server/src/core/ngx_string.h
@@ -64,6 +64,7 @@ typedef struct {
 #define ngx_memzero(buf, n)       (void) memset(buf, 0, n)
 #define ngx_memset(buf, c, n)     (void) memset(buf, c, n)
 
+#define ngx_memmove(dst, src, n)  (void) memmove(dst, src, n)
 
 #if (NGX_MEMCPY_LIMIT)
 
diff --git a/server/src/http/ngx_http_upstream.c b/server/src/http/ngx_http_upstream.c
index 7dd99ba..a71b3c7 100644
--- a/server/src/http/ngx_http_upstream.c
+++ b/server/src/http/ngx_http_upstream.c
@@ -3289,6 +3289,7 @@ ngx_http_upstream_server(ngx_conf_t *cf, ngx_command_t *cmd, void *conf)
         goto invalid;
     }
 
+    us->name = u.url;
     us->addrs = u.addrs;
     us->naddrs = u.naddrs;
     us->weight = weight;
diff --git a/server/src/http/ngx_http_upstream.h b/server/src/http/ngx_http_upstream.h
index 6754d65..96554b1 100644
--- a/server/src/http/ngx_http_upstream.h
+++ b/server/src/http/ngx_http_upstream.h
@@ -68,6 +68,7 @@ typedef struct {
 
 
 typedef struct {
+    ngx_str_t                       name;
     ngx_peer_addr_t                *addrs;
     ngx_uint_t                      naddrs;
     ngx_uint_t                      weight;
-- 
1.5.5.1.116.ge4b9c
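
A minimal sketch of how the memcached_hash directive added by this patch
could be used, assuming an upstream of two memcached servers.  The
directive name and the ketama_points=/weight_scale= parameters are taken
from the memcached_hash() configuration handler in the patch; the
upstream name, addresses and numeric values below are hypothetical
examples only.

    upstream memcached_backends {
        # weight, max_fails, fail_timeout and down are accepted on the
        # server lines because the handler sets the corresponding
        # NGX_HTTP_UPSTREAM_* flags on the upstream.
        server 10.0.0.1:11211 weight=2;
        server 10.0.0.2:11211 max_fails=3 fail_timeout=30s;

        # ketama_points > 0 enables the ketama continuum built in
        # memcached_init_hash(); omitting it (default 0) selects the
        # non-ketama distribution.  weight_scale (default 1) divides the
        # server weights when the number of continuum points is computed.
        memcached_hash ketama_points=150 weight_scale=1;
    }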