
Improve code safety and error handling + performance improvements #85

Merged (19 commits into Lightbug-HQ:main) on Jan 7, 2025

Conversation

@thatstoasty (Collaborator) commented Jan 6, 2025:

  • Switch to Pointer where possible.
  • Improve libc error handling with errno checking and Error raising (see the sketch after this list).
  • Free allocated UnsafePointers that were not owned by any structs.
  • Speed up parsing by using Span instead of List[Byte] for ByteReader.
  • Reduce allocations by removing lines where we copied strings and lists just to have a shorter variable name.
  • Clean up some subtle bugs where we "mutate" objects but in reality consume them.
  • Clean up other potential subtle bugs where we return an empty String or 0 instead of raising and exiting properly; most of those failure cases cannot be recovered from once hit.
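
For illustration, here is a minimal sketch of the errno-checking pattern mentioned above. The names get_errno and close_or_raise are hypothetical, not the actual lightbug_http API, and the sketch assumes a glibc-style __errno_location():

from sys.ffi import external_call
from memory import UnsafePointer

fn get_errno() -> Int32:
    # Hypothetical helper: read errno via __errno_location() (glibc/Linux).
    return external_call["__errno_location", UnsafePointer[Int32]]()[]

fn close_or_raise(fd: Int32) raises:
    # Hypothetical wrapper: check the libc return code and raise a Mojo Error
    # carrying the errno value instead of silently returning a default.
    if external_call["close", Int32](fd) == -1:
        raise Error("close() failed, errno: " + str(get_errno()))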

@thatstoasty (Collaborator, Author) commented:

@saviorand @bgreni Please review the changes and give feedback! A lot of the code was touched.

One thing I noticed is that the connection is always shut down by the client before the server can do so. Each time the server tries to shut down the socket, it fails, saying the socket isn't connected.

@thatstoasty (Collaborator, Author) commented Jan 6, 2025:

Maybe we should run the benchmarks in the CI pipeline?

Old benchmarks

--------------------------------------------------------------------------------
Benchmark results
--------------------------------------------------------------------------------
name          , met (ms)    , iters   , min (ms)    , mean (ms)   , max (ms)    , duration (ms)
HeaderEncode  ,     0.002075,   571763,     0.002075,     0.002075,     0.002075,  1186.646000
HeaderParse   ,     0.013825,    87064,     0.013825,     0.013825,     0.013825,  1203.660000
RequestEncode ,     0.014535,    88400,     0.014535,     0.014535,     0.014535,  1284.910000
RequestParse  ,     0.059278,    34772,     0.059278,     0.059278,     0.059278,  2061.219000
ResponseEncode,     0.071990,    59699,     0.071990,     0.071990,     0.071990,  4297.749000
ResponseParse ,     0.241491,    10000,     0.241491,     0.241491,     0.241491,  2414.913000

New benchmarks (on my machine)

--------------------------------------------------------------------------------
Benchmark results
--------------------------------------------------------------------------------
name          , met (ms)    , iters   , min (ms)    , mean (ms)   , max (ms)    , duration (ms)
HeaderEncode  ,     0.002138,   558984,     0.002138,     0.002138,     0.002138,  1194.991000
HeaderParse   ,     0.010977,   100000,     0.010977,     0.010977,     0.010977,  1097.661000
RequestEncode ,     0.012886,    92187,     0.012886,     0.012886,     0.012886,  1187.919000
RequestParse  ,     0.028500,    42256,     0.028500,     0.028500,     0.028500,  1204.309000
ResponseEncode,     0.011634,   100000,     0.011634,     0.011634,     0.011634,  1163.411000
ResponseParse ,     0.023872,    53206,     0.023872,     0.023872,     0.023872,  1270.141000

@bgreni (Collaborator) commented Jan 6, 2025:

Took a quick look and things seem good. Out of curiosity, what are the results for magic run bench_server?

@thatstoasty (Collaborator, Author) commented Jan 6, 2025:

@bgreni Hmm, the number of read errors is concerning. It seems like all of the requests are erroring out 😂. I guess I have some digging to do to figure out why the benchmark fails but the integration test doesn't.

Running benchmark
Running 10s test @ http://0.0.0.0:8080/
  1 threads and 1 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.52ms   12.85ms 171.86ms   98.45%
    Req/Sec     8.65k     3.19k   10.22k    84.21%
  16348 requests in 10.00s, 0.00B read
  Socket errors: connect 1, read 16348, write 0, timeout 0
Requests/sec:   1634.33
Transfer/sec:       0.00B

Is there a way to check which errors wrk is reporting? I don't see any errors being thrown on the server side, and the final response looks OK.

HTTP/1.1 200 OK
server: lightbug_http
content-type: text/plain
connection: keep-alive
content-length: 12
date: 2025-01-06T23:36:39.590700+00:00

Hello world!

@thatstoasty (Collaborator, Author) commented:

It seems to be related to the earlier issue I noted, where the connection is already closed before the server can shut it down.

The wrk read errors occur when the connection is closed before wrk can read from it. I'll poke around to see which of my changes caused this behavior.

@saviorand (Collaborator) commented:

@thatstoasty Yeah, there's not really a good way to see errors in wrk; the author responds to GitHub issues about this by suggesting tcpdump 😄
Last time I had trouble like this, it was due to a missing User-Agent header in client requests: some hosts require it to be set, and since it wasn't, the requests were failing. It might not be the same problem here, though.

@thatstoasty (Collaborator, Author) commented:

@saviorand I was hoping one of you would have a magic solution for checking, haha. Yeah, I dug through the open issues and saw a bunch of requests asking for error information to be captured. I'll try tcpdump and see if I can glean anything from it.

@thatstoasty (Collaborator, Author) commented:

@saviorand @bgreni Fixed it: I was incorrectly adding a null terminator to the buffer being sent via send. It's the other way around; the buffer must NOT be null terminated. So I added an error raise to Connection.write that checks for a null terminator before calling send (sketched below).
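
A minimal sketch of that guard, written as an illustrative free function rather than the actual Connection.write signature (Byte is the standard alias for UInt8):

fn validate_not_null_terminated(buffer: List[Byte]) raises:
    # Bytes handed to send() go out on the wire verbatim, so a trailing NUL
    # would be appended to the HTTP response, and clients such as wrk report
    # it as a read error. Fail loudly instead of sending it.
    if len(buffer) > 0 and buffer[len(buffer) - 1] == 0:
        raise Error("buffer passed to send() must not be null terminated")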

🔥🐝 Lightbug is listening on http://0.0.0.0:8080
Ready to accept connections...
Running benchmark
Running 10s test @ http://0.0.0.0:8080/
  1 threads and 1 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   115.88us  418.65us   8.17ms   97.41%
    Req/Sec    16.34k     1.27k   17.94k    66.34%
  164060 requests in 10.10s, 25.66MB read
Requests/sec:  16241.21
Transfer/sec:      2.54MB

Review comment on lightbug_http/client.mojo: outdated, resolved.
saviorand merged commit 273568a into Lightbug-HQ:main on Jan 7, 2025. 2 checks passed.