
fix: issue#4346 #4348

Closed · wants to merge 3 commits

Conversation

@mqk (Contributor) commented Dec 19, 2023

What does this PR address?

This PR fixes issue #4346.

Before submitting:

  • Does the Pull Request follow the Conventional Commits specification for naming? Here is GitHub's guide on how to create a pull request.

  • Does the code follow BentoML's code style, and has the pre-commit run -a script passed (instructions)?
    The exception is that pdm-lock-check failed, for reasons I don't understand. This turned out to be the result of using a newer version of pdm (2.11.1 rather than 2.10.4, which your CI/CD uses); downgrading pdm resolved it.

  • Did you write tests to cover your changes?

No, but I think it would be a good idea to write a test with a sufficiently large payload that triggers this error without the fix. I don't know where or how to add such a test in this repo, though, so I would need some guidance; a rough sketch of the idea is below.
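Roughly, what I have in mind is something like the following. This is only an illustration using aiohttp directly rather than BentoML's test harness (the server, port, and header name are made up): a local server returns a response header larger than aiohttp's default 8190-byte limit, and the client only succeeds when max_field_size is raised, which requires aiohttp>=3.9.1.

import asyncio

import aiohttp
from aiohttp import web


async def main():
    # A handler whose response carries a header value well over the
    # 8190-byte default limit of aiohttp's client-side HTTP parser.
    async def handler(request):
        return web.Response(text="ok", headers={"X-Big-Header": "x" * 16000})

    app = web.Application()
    app.router.add_get("/", handler)
    runner = web.AppRunner(app)
    await runner.setup()
    site = web.TCPSite(runner, "127.0.0.1", 8080)
    await site.start()

    try:
        # Without max_field_size this raises ClientResponseError
        # ("Header value is too long"); with it the request succeeds.
        async with aiohttp.ClientSession(max_field_size=32 * 1024) as session:
            async with session.get("http://127.0.0.1:8080/") as resp:
                print(resp.status, len(resp.headers["X-Big-Header"]))
    finally:
        await runner.cleanup()


asyncio.run(main())

A real test would presumably go through the runner client in src/bentoml/_internal/runner/runner_handle/remote.py with a payload large enough to push the runner's response headers over the limit.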

@mqk requested a review from a team as a code owner December 19, 2023 19:54
@mqk requested review from ssheng and removed request for a team December 19, 2023 19:54
pyproject.toml:

@@ -25,7 +25,7 @@ classifiers = [
 dependencies = [
     "Jinja2>=3.0.1",
     "PyYAML>=5.0",
-    "aiohttp",
+    "aiohttp>=3.9.1",
@mqk (Contributor, Author) commented:

Older versions of aiohttp (e.g. 3.8.6) don't expose the ClientSession.max_field_size kwarg.
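For illustration only (the 64 KiB value is arbitrary), this is the kwarg the pinned version makes available:

import asyncio

import aiohttp


async def main():
    # On aiohttp>=3.9.1 the client session accepts max_field_size;
    # on older releases such as 3.8.6 this keyword argument is rejected.
    async with aiohttp.ClientSession(max_field_size=64 * 1024) as session:
        ...


asyncio.run(main())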

@jianshen92 requested review from sauyon and aarnphm December 21, 2023 17:14
@mqk (Contributor, Author) commented Dec 22, 2023

I've attached an example to reproduce the issue on main. Usage:

> tar xzf picklable_model.tar.gz
> cd picklable_model/
> python ./save_model.py
> bentoml build
> bentoml serve picklable_model_imputation:latest --port 5001

Then navigate to localhost:5001 and click on "Try it out" > "Execute" under /impute. You'll get this error on main:

2023-12-22T11:09:41-0800 [ERROR] [api_server:picklable_model_imputation:9] Exception on /impute [POST] (trace=238a2c870205053cf22121983fc3cb55,span=a2da156f8a15101d,sampled=0,service.name=picklable_model_imputation)
Traceback (most recent call last):
  File "/Users/mikekuhlen/Code/BentoML/.venv/lib/python3.11/site-packages/aiohttp/client_reqrep.py", line 905, in start
    message, payload = await protocol.read()  # type: ignore[union-attr]
                       ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/mikekuhlen/Code/BentoML/.venv/lib/python3.11/site-packages/aiohttp/streams.py", line 616, in read
    await self._waiter
  File "/Users/mikekuhlen/Code/BentoML/.venv/lib/python3.11/site-packages/aiohttp/client_proto.py", line 213, in data_received
    messages, upgraded, tail = self._parser.feed_data(data)
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "aiohttp/_http_parser.pyx", line 557, in aiohttp._http_parser.HttpParser.feed_data
  File "aiohttp/_http_parser.pyx", line 732, in aiohttp._http_parser.cb_on_header_value
aiohttp.http_exceptions.LineTooLong: 400, message:
  Got more than 8190 bytes (16293) when reading Header value is too long.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/mikekuhlen/Code/BentoML/src/bentoml/_internal/server/http_app.py", line 343, in api_func
    output = await run_in_threadpool(api.func, *args)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/mikekuhlen/Code/BentoML/.venv/lib/python3.11/site-packages/starlette/concurrency.py", line 41, in run_in_threadpool
    return await anyio.to_thread.run_sync(func, *args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/mikekuhlen/Code/BentoML/.venv/lib/python3.11/site-packages/anyio/to_thread.py", line 33, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/mikekuhlen/Code/BentoML/.venv/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 2106, in run_sync_in_worker_thread
    return await future
           ^^^^^^^^^^^^
  File "/Users/mikekuhlen/Code/BentoML/.venv/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 833, in run
    result = context.run(func, *args)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/mikekuhlen/bentoml/bentos/picklable_model_imputation/uasnkvfa7w5ctaty/src/service.py", line 16, in impute
    return imputation_runner.run(df)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/mikekuhlen/Code/BentoML/src/bentoml/_internal/runner/runner.py", line 52, in run
    return self.runner._runner_handle.run_method(self, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/mikekuhlen/Code/BentoML/src/bentoml/_internal/runner/runner_handle/remote.py", line 346, in run_method
    anyio.from_thread.run(
  File "/Users/mikekuhlen/Code/BentoML/.venv/lib/python3.11/site-packages/anyio/from_thread.py", line 45, in run
    return async_backend.run_async_from_thread(func, args, token=token)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/mikekuhlen/Code/BentoML/.venv/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 2121, in run_async_from_thread
    return f.result()
           ^^^^^^^^^^
  File "/opt/homebrew/Cellar/[email protected]/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/concurrent/futures/_base.py", line 456, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/[email protected]/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
  File "/Users/mikekuhlen/Code/BentoML/src/bentoml/_internal/runner/runner_handle/remote.py", line 216, in async_run_method
    async with self._client.post(
  File "/Users/mikekuhlen/Code/BentoML/.venv/lib/python3.11/site-packages/aiohttp/client.py", line 1167, in __aenter__
    self._resp = await self._coro
                 ^^^^^^^^^^^^^^^^
  File "/Users/mikekuhlen/Code/BentoML/.venv/lib/python3.11/site-packages/aiohttp/client.py", line 586, in _request
    await resp.start(conn)
  File "/Users/mikekuhlen/Code/BentoML/.venv/lib/python3.11/site-packages/aiohttp/client_reqrep.py", line 907, in start
    raise ClientResponseError(
aiohttp.client_exceptions.ClientResponseError: 400, message='Got more than 8190 bytes (16293) when reading Header value is too long.', url=URL('http://127.0.0.1:8000/')
2023-12-22T11:09:41-0800 [INFO] [api_server:picklable_model_imputation:9] 127.0.0.1:49864 (scheme=http,method=POST,path=/impute,type=application/json,length=83358) (status=500,type=application/json,length=110) 860.815ms (trace=238a2c870205053cf22121983fc3cb55,span=a2da156f8a15101d,sampled=0,service.name=picklable_model_imputation)

picklable_model.tar.gz
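For reference, a minimal sketch of a service along these lines, reconstructed from the traceback above (the model tag here is a placeholder; the actual save_model.py and service.py are in the attached archive):

import bentoml
from bentoml.io import PandasDataFrame

# Placeholder tag: save_model.py in the archive saves a picklable model first.
imputation_runner = bentoml.picklable_model.get(
    "picklable_model_imputation:latest"
).to_runner()

svc = bentoml.Service("picklable_model_imputation", runners=[imputation_runner])


@svc.api(input=PandasDataFrame(), output=PandasDataFrame())
def impute(df):
    # With a large enough DataFrame, the remote runner's response headers
    # exceed the client's default 8190-byte limit, producing the error above.
    return imputation_runner.run(df)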

@jianshen92 (Contributor) commented:
Thanks for opening this PR. I believe this is resolved at the data container level in this PR. I will close this one.

@jianshen92 closed this Jan 2, 2024
Successfully merging this pull request may close these issues:

bug: aiohttp.client_exceptions.ClientResponseError "Header value is too long"