
incorrect response from cache #23

Closed
greenx opened this issue Aug 7, 2018 · 4 comments
greenx commented Aug 7, 2018

I'm trying to use nuster to cache OpenStack Swift. Swift has a REST API for requesting the listing of a container.
When I use the standard python-swift client, the listing goes into an infinite loop starting from the second attempt.
Example request:

swift -A http://10.10.10.10:8080/auth/v1.0 -U testuser:admin -K password list test

As a result, we get two HTTP requests:

1. GET /v1/AUTH_testuser/test?format=json
2. GET /v1/AUTH_testuser/test?format=json&marker=main2.conf

The first request returns a JSON list of files.
The second request checks that there are no more files by requesting the listing starting after the last file from the previous response; in a normal situation an empty JSON array is returned.
In a tcpdump capture I saw that the second request gets back the JSON from the first request, so python-swiftclient goes in circles.

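To make the failure mode concrete, here is a minimal, hypothetical sketch (not python-swiftclient's actual code) of Swift's marker-based pagination, showing why a cache that drops the `marker` parameter from its key makes the client loop forever:

```python
# Sketch of marker-based container listing, as used by Swift clients.
# `fetch(marker)` stands in for GET /v1/AUTH_.../test?format=json[&marker=...]
# and returns a list of object names.

def list_container(fetch):
    """Collect all object names by following the 'marker' protocol.

    The client keeps requesting with marker=<last name seen> until it
    receives an empty page. A bounded loop guards against the bug.
    """
    names, marker = [], None
    for _ in range(10):          # guard; a real client loops unconditionally
        page = fetch(marker)
        if not page:             # empty JSON '[]' -> listing complete
            return names
        names.extend(page)
        marker = page[-1]        # next request: ...&marker=<last name>
    raise RuntimeError("same page returned again and again")

# Correct server: the marker request returns [] and the loop terminates.
assert list_container(lambda m: [] if m else ["main2.conf"]) == ["main2.conf"]

# Buggy cache: the marker request is served the cached first page, so the
# marker never advances and the client would never terminate.
try:
    list_container(lambda m: ["main2.conf"])
except RuntimeError:
    print("loops forever without the guard")
```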
If I interrupt the connection and re-issue the second request with curl, the correct answer is returned: '[]'.
Looking at the nuster logs, there is an endless stream of repeated entries, and the second parameter (marker) is lost from the cache key:

...
0000000c:swift-cluster.accept(0004)=000c from [10.76.180.236:41450] ALPN=<none>
0000000c:swift-cluster.clireq[000c:ffffffff]: GET /v1/AUTH_testuser/test?format=json HTTP/1.1
0000000c:swift-cluster.clihdr[000c:ffffffff]: Host: 10.76.163.121:8080
0000000c:swift-cluster.clihdr[000c:ffffffff]: user-agent: python-swiftclient-3.4.0
0000000c:swift-cluster.clihdr[000c:ffffffff]: accept-encoding: gzip
0000000c:swift-cluster.clihdr[000c:ffffffff]: x-auth-token: AUTH_tk9112b306a60241168c8621b9d066b781
[CACHE] Checking rule: r1
[CACHE] Calculate key: method.scheme.host.header_range.path.delimiter.query.param_marker.
GET.HTTP.10.76.163.121:8080./v1/AUTH_testuser/test.?.format=json.
[CACHE] Got key: GET.HTTP.10.76.163.121:8080./v1/AUTH_testuser/test.?.format=json.
[CACHE] Checking key existence: EXIST
[CACHE] Hit
0000000c:swift-cluster.srvrep[000c:ffffffff]: HTTP/1.1 200 OK
0000000c:swift-cluster.srvhdr[000c:ffffffff]: Content-Length: 670
0000000c:swift-cluster.srvhdr[000c:ffffffff]: X-Container-Object-Count: 4
0000000c:swift-cluster.srvhdr[000c:ffffffff]: Accept-Ranges: bytes
0000000c:swift-cluster.srvhdr[000c:ffffffff]: X-Storage-Policy: gold
0000000c:swift-cluster.srvhdr[000c:ffffffff]: Last-Modified: Tue, 07 Aug 2018 05:42:38 GMT
0000000c:swift-cluster.srvhdr[000c:ffffffff]: X-Container-Bytes-Used: 12269314
0000000c:swift-cluster.srvhdr[000c:ffffffff]: X-Timestamp: 1533551033.27059
0000000c:swift-cluster.srvhdr[000c:ffffffff]: Content-Type: application/json; charset=utf-8
0000000c:swift-cluster.srvhdr[000c:ffffffff]: X-Trans-Id: txd8c490debd2f4bc0a627d-005b69a2f1
0000000c:swift-cluster.srvhdr[000c:ffffffff]: X-Openstack-Request-Id: txd8c490debd2f4bc0a627d-005b69a2f1
0000000c:swift-cluster.srvhdr[000c:ffffffff]: Date: Tue, 07 Aug 2018 13:47:29 GMT

...
00000057:swift-cluster.clireq[000c:ffffffff]: GET /v1/AUTH_testuser/test?format=json&marker=main2.conf HTTP/1.1
00000057:swift-cluster.clihdr[000c:ffffffff]: Host: 10.76.163.121:8080
00000057:swift-cluster.clihdr[000c:ffffffff]: user-agent: python-swiftclient-3.4.0
00000057:swift-cluster.clihdr[000c:ffffffff]: accept-encoding: gzip
00000057:swift-cluster.clihdr[000c:ffffffff]: x-auth-token: AUTH_tk9112b306a60241168c8621b9d066b781
00000057:swift-cluster.srvrep[000c:ffffffff]: HTTP/1.1 200 OK
00000057:swift-cluster.srvhdr[000c:ffffffff]: Content-Length: 670
00000057:swift-cluster.srvhdr[000c:ffffffff]: X-Container-Object-Count: 4
00000057:swift-cluster.srvhdr[000c:ffffffff]: Accept-Ranges: bytes
00000057:swift-cluster.srvhdr[000c:ffffffff]: X-Storage-Policy: gold
00000057:swift-cluster.srvhdr[000c:ffffffff]: Last-Modified: Tue, 07 Aug 2018 05:42:38 GMT
00000057:swift-cluster.srvhdr[000c:ffffffff]: X-Container-Bytes-Used: 12269314
00000057:swift-cluster.srvhdr[000c:ffffffff]: X-Timestamp: 1533551033.27059
00000057:swift-cluster.srvhdr[000c:ffffffff]: Content-Type: application/json; charset=utf-8
00000057:swift-cluster.srvhdr[000c:ffffffff]: X-Trans-Id: txd8c490debd2f4bc0a627d-005b69a2f1
00000057:swift-cluster.srvhdr[000c:ffffffff]: X-Openstack-Request-Id: txd8c490debd2f4bc0a627d-005b69a2f1
00000057:swift-cluster.srvhdr[000c:ffffffff]: Date: Tue, 07 Aug 2018 13:47:29 GMT
00000058:swift-cluster.clireq[000c:ffffffff]: GET /v1/AUTH_testuser/test?format=json&marker=main2.conf HTTP/1.1
00000058:swift-cluster.clihdr[000c:ffffffff]: Host: 10.76.163.121:8080
00000058:swift-cluster.clihdr[000c:ffffffff]: user-agent: python-swiftclient-3.4.0
00000058:swift-cluster.clihdr[000c:ffffffff]: accept-encoding: gzip
00000058:swift-cluster.clihdr[000c:ffffffff]: x-auth-token: AUTH_tk9112b306a60241168c8621b9d066b781
00000058:swift-cluster.srvrep[000c:ffffffff]: HTTP/1.1 200 OK
00000058:swift-cluster.srvhdr[000c:ffffffff]: Content-Length: 670
00000058:swift-cluster.srvhdr[000c:ffffffff]: X-Container-Object-Count: 4
00000058:swift-cluster.srvhdr[000c:ffffffff]: Accept-Ranges: bytes
00000058:swift-cluster.srvhdr[000c:ffffffff]: X-Storage-Policy: gold
00000058:swift-cluster.srvhdr[000c:ffffffff]: Last-Modified: Tue, 07 Aug 2018 05:42:38 GMT
00000058:swift-cluster.srvhdr[000c:ffffffff]: X-Container-Bytes-Used: 12269314
00000058:swift-cluster.srvhdr[000c:ffffffff]: X-Timestamp: 1533551033.27059
00000058:swift-cluster.srvhdr[000c:ffffffff]: Content-Type: application/json; charset=utf-8
00000058:swift-cluster.srvhdr[000c:ffffffff]: X-Trans-Id: txd8c490debd2f4bc0a627d-005b69a2f1
00000058:swift-cluster.srvhdr[000c:ffffffff]: X-Openstack-Request-Id: txd8c490debd2f4bc0a627d-005b69a2f1
00000058:swift-cluster.srvhdr[000c:ffffffff]: Date: Tue, 07 Aug 2018 13:47:29 GMT
...

And so on; for these requests there are no [CACHE] entries in the log.
In the normal situation, the response to the second request is logged like this:

0000056b:swift-cluster.accept(0004)=000a from [10.76.180.236:41862] ALPN=<none>
0000056b:swift-cluster.clireq[000a:ffffffff]: GET /v1/AUTH_testuser/test?format=json&marker=main2.conf HTTP/1.1
0000056b:swift-cluster.clihdr[000a:ffffffff]: Host: 10.76.163.121:8080
0000056b:swift-cluster.clihdr[000a:ffffffff]: User-Agent: curl/7.61.0
0000056b:swift-cluster.clihdr[000a:ffffffff]: Accept: */*
0000056b:swift-cluster.clihdr[000a:ffffffff]: Accept-Encoding: gzip
0000056b:swift-cluster.clihdr[000a:ffffffff]: X-Auth-Token: AUTH_tk9112b306a60241168c8621b9d066b781
[CACHE] Checking rule: r1
[CACHE] Calculate key: method.scheme.host.header_range.path.delimiter.query.param_marker.
[CACHE] Got key: GET.HTTP.10.76.163.121:8080./v1/AUTH_testuser/test.?.format=json&marker=main2.conf.main2.conf.
[CACHE] Checking key existence: EXIST
[CACHE] Hit
0000056b:swift-cluster.srvrep[000a:ffffffff]: HTTP/1.1 200 OK
0000056b:swift-cluster.srvhdr[000a:ffffffff]: Content-Length: 2
0000056b:swift-cluster.srvhdr[000a:ffffffff]: X-Container-Object-Count: 4
0000056b:swift-cluster.srvhdr[000a:ffffffff]: Accept-Ranges: bytes
0000056b:swift-cluster.srvhdr[000a:ffffffff]: X-Storage-Policy: gold
0000056b:swift-cluster.srvhdr[000a:ffffffff]: Last-Modified: Tue, 07 Aug 2018 05:42:38 GMT
0000056b:swift-cluster.srvhdr[000a:ffffffff]: X-Container-Bytes-Used: 12269314
0000056b:swift-cluster.srvhdr[000a:ffffffff]: X-Timestamp: 1533551033.27059
0000056b:swift-cluster.srvhdr[000a:ffffffff]: Content-Type: application/json; charset=utf-8
0000056b:swift-cluster.srvhdr[000a:ffffffff]: X-Trans-Id: txb64e1f0088aa44739aa2c-005b69a467
0000056b:swift-cluster.srvhdr[000a:ffffffff]: X-Openstack-Request-Id: txb64e1f0088aa44739aa2c-005b69a467
0000056b:swift-cluster.srvhdr[000a:ffffffff]: Date: Tue, 07 Aug 2018 13:53:43 GMT
0000056c:swift-cluster.clicls[000a:ffffffff]
0000056c:swift-cluster.closed[000a:ffffffff]


jiangwenyuan commented Aug 8, 2018

Hi @greenx Thank you very much for this report.

I've read the log but couldn't get any clue from it. Could you please help me reproduce it?

I've created a swift test server according to https://docs.openstack.org/swift/latest/development_saio.html

Then I created a test container:

swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing post test

and uploaded a file:

swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing upload test test01

then listing gives me:

swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing list test
test01

so far so good, then I put nuster in front of swift using this conf:

global
    nuster cache on data-size 100m
defaults
    mode http
frontend fe
    bind *:8081
    default_backend be1
backend be1
    nuster cache on
    nuster rule all key method.scheme.host.header_range.path.delimiter.query.param_marker
    server s1 127.0.0.1:8080

and test against 8081 twice:

swift -A http://127.0.0.1:8081/auth/v1.0 -U test:tester -K testing list test
test01

There is no infinite loop. Could you please share your nuster conf?


greenx commented Aug 8, 2018

Caught it! A funny bug :)
With this config (your config) it works correctly:

global
    nuster cache on data-size 100m
defaults
    mode http
frontend fe
    bind *:8081
    default_backend be1
backend be1
    nuster cache on
    nuster rule all key method.scheme.host.header_range.path.delimiter.query.param_marker
    server s1 127.0.0.1:8080

But with this one it goes into the loop, even though in theory it is the same:

global
    nuster cache on data-size 100m
defaults
    mode http

listen li1
    bind *:8081
    nuster cache on
    nuster rule all key method.scheme.host.header_range.path.delimiter.query.param_marker
    server s1 127.0.0.1:8080


jiangwenyuan commented Aug 8, 2018

Aha! Confirmed. Interesting... it should basically be the same.

I'll look into this; pull requests are also welcome :)

Meanwhile, please avoid using listen, as per the HAProxy configuration documentation:

A "listen" section defines a complete proxy with its frontend and backend
parts combined in one section. It is generally useful for TCP-only traffic.

jiangwenyuan commented

The reason is that in a listen section, nuster cache|rule is applied to the frontend, while it has to be in the backend. I'm going to disallow using nuster in listen sections.
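Until that change lands, the workaround (a sketch based on the working config earlier in this thread) is to replace the listen section with an explicit frontend/backend pair so the nuster directives sit in the backend:

```
frontend fe
    bind *:8081
    default_backend be1
backend be1
    nuster cache on
    nuster rule all key method.scheme.host.header_range.path.delimiter.query.param_marker
    server s1 127.0.0.1:8080
```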

jiangwenyuan added a commit that referenced this issue Aug 15, 2018
jiangwenyuan added a commit that referenced this issue Feb 13, 2020