maybe 3 bugs #28
Hi, thanks for the report :) I did a little investigation and confirmed bug 1, but I cannot confirm bugs 2/3. Bug 2: if you didn't set the key first, a 404 is returned:
If you have set the key already:
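A rough sketch of the two cases being compared (the /nosql/ path, POST-to-create semantics, and payload here are assumptions, not taken from this thread):

```sh
# Hypothetical example: read a key that was never set -> expect 404
curl -i http://127.0.0.1/nosql/key1

# Set the key first (request body becomes the stored value), then read it back
curl -i -X POST -d 'hello' http://127.0.0.1/nosql/key1
curl -i http://127.0.0.1/nosql/key1
```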
Bug 3: I used this script:
I cannot reproduce it. Could you please help me reproduce it?
Bugs 2 and 3 are really tricky and only occur in a certain configuration. Today I sketched out code to check them and could not trigger them; everything was fine. If you tested with the configuration from bug 1, where caching is disabled on one backend and enabled on the other, bugs 2 and 3 do not show up in that configuration. Try enabling caching on both backends; even then, though, I cannot yet get the 100% reproducible effect I saw yesterday. Bug 2 shows up very rarely: nuster breaks the connection and returns Internal Server Error. I have not managed to reproduce the CPU leak yet. The sample code is deliberately simple, just so you can see what I mean.
The bugs surfaced under the code I posted, which I was driving with wrk. If I find a configuration in which the bug reproduces 100% of the time, I will write back.
I was able to find out why there is a CPU leak.
Thanks, but I still cannot reproduce bugs 2/3. Your app only gives me this error, which I think has nothing to do with nuster; I tried with nginx (only the GET part) and can get this error too.
BTW, bug 1 has been fixed in 2d94461.
Yes, this is what I was talking about. In bug 2, nuster terminates the connection and the aiohttp client cannot write to it; the error shows up sometimes as a CancelledError and sometimes as a ServerDisconnected error. To rule out problems in aiohttp itself, I checked with another web framework and the situation was similar. After these errors, did you notice nuster consuming 100% of the CPU (bug 3)?
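A minimal sketch of the kind of client that produces these errors (the URL, payload, and concurrency level are assumptions; this is not the original test app):

```python
import asyncio
import aiohttp

URL = "http://127.0.0.1/nosql/key1"  # assumed nuster NoSQL endpoint

async def hit(session):
    # Write then read the same key; under load the server may drop the
    # connection, which surfaces as ServerDisconnectedError or a cancelled task.
    try:
        async with session.post(URL, data=b"value") as resp:
            await resp.read()
        async with session.get(URL) as resp:
            await resp.read()
    except (aiohttp.ServerDisconnectedError, asyncio.CancelledError) as exc:
        print("connection dropped:", type(exc).__name__)

async def main():
    async with aiohttp.ClientSession() as session:
        await asyncio.gather(*(hit(session) for _ in range(200)))

asyncio.run(main())
```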
Hi, if you try this code:
and run
Do you mean this? No, nuster did not consume 100% of the CPU.
Hi, I've been trying to reproduce bugs 2/3 but still cannot. I'm closing this now; feel free to reopen it. BTW, bug 1 has been fixed and merged into master. Thanks.
Is this a BUG report or FEATURE request?:
BUG report
Environment:
Thank you very much for your work. Uniting HAProxy with a cache and NoSQL is a very interesting solution.
BUG 1
A bug where a request handled through a backend with caching disabled freezes while nuster consults the cache.
- Create two backends in the config: the first for static content with caching, the second serving HTML without caching.
- The page should have several static elements.
- Start nuster in debug mode.
- Open the browser with the inspector open. Refresh the page several times and watch how long the page takes to load.
Approximate config:
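(The actual config from the report is not shown here; the following is only a hypothetical sketch of the two-backend layout described above. Directive names follow current nuster documentation, and all names, addresses, sizes, and TTLs are made up:)

```
global
    nuster cache on data-size 100m

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend web
    bind *:80
    # static assets go to the cached backend, everything else to the plain one
    acl is_static path_beg /static /css /js /img
    use_backend static_cached if is_static
    default_backend html_nocache

backend static_cached
    nuster cache on
    nuster rule assets ttl 60
    server app1 127.0.0.1:8080

backend html_nocache
    # caching intentionally not enabled on this backend
    server app1 127.0.0.1:8080
```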
Nuster debug output at the moment of the freeze:
Effects:
The HTML response from the backend to the client froze for anywhere from 10 seconds to forever.
Judging by the nuster log, it was trying to look up the cache for the backend on which caching was not enabled.
![wait_req](https://user-images.githubusercontent.com/16289977/44980870-b8bb8200-af79-11e8-8dab-e9f7dca44e87.png)
BUG 2
I wanted to move sessions from Redis to nuster NoSQL, but I ran into 2 bugs.
- GET it, for example, using the wrk utility or any other fast asynchronous method:
./wrk -t2 -c200 -d30s http://127.0.0.1/nosql/key1
Look at the number of errors.
Effect: after a few requests, nuster breaks connections with the client.
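A rough end-to-end reproduction sketch (the key path, payload, and POST-to-create semantics are assumptions; adjust to the actual setup):

```sh
# Hypothetical: create the key once, then hammer it with concurrent GETs
curl -X POST -d 'session-data' http://127.0.0.1/nosql/key1

./wrk -t2 -c200 -d30s http://127.0.0.1/nosql/key1
# wrk's summary includes a "Socket errors" line; non-zero read/connect counts
# indicate the server dropped client connections
```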
BUG 3
Bug with NoSQL + CPU utilization
Similar to the bug above, but you need to create and read the same key simultaneously, for example as sketched below.
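A hypothetical sketch of what "create and read the same key simultaneously" could look like (endpoint and payload are assumptions):

```sh
# Keep writing the key in the background...
while true; do
  curl -s -X POST -d 'session-data' http://127.0.0.1/nosql/key1 > /dev/null
done &
WRITER=$!

# ...while reading the same key with many concurrent connections
./wrk -t2 -c200 -d30s http://127.0.0.1/nosql/key1

kill $WRITER
```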
Effect: on a single-CPU machine, the CPU becomes fully utilized at 100% and only restarting nuster helps.
On a multi-CPU machine, if you repeat the procedure several times, every core except one ends up pinned at 100% and a restart is required.