ROOT PATH is broken for all use cases #1262
Seems like a deployment issue to me. Are you sure Chainlit is running on the host and port nginx proxies to? Note that if Chainlit can't open a socket yet doesn't give an error message, that would be a bug on our side and we'd like to know about it. :)
@dokterbob I've confirmed that CL is running on http://127.0.0.1:8000 and doesn't have any port binding conflicts. More specifically, when CL is started without a root path (thereby bypassing the nginx location block), the UI opens in a browser normally and establishes a connection:
However, when the root path is specified (thereby allowing nginx to reverse-proxy requests), the UI opens in a browser but a "Could not reach..." error occurs:
The identical tests work fine with 1.1.402. However, today in re-testing I noticed errors in the debug log that seem informative:
It looks like some requests don't include the root path, and some use the localhost IP while others use my computer's LAN IP. Hope this helps, and if you see anything in my nginx config that could explain this, please let me know (it seems pretty vanilla/basic, though). In case it might be relevant, I'll also note that my local server uses SSL only, but I have not configured SSL explicitly in CL. I don't actually intend to, which is one reason for the reverse-proxy setup.
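For comparison, a minimal nginx location block for reverse-proxying Chainlit under a subpath typically needs the websocket upgrade headers for socket.io to work. This is an illustrative sketch only, not the poster's actual config; the port and the /chainlit prefix are assumptions:

```nginx
# Illustrative only: proxy /chainlit/ to a local Chainlit started with
# `--root-path /chainlit`. Port and prefix are assumed.
location /chainlit/ {
    proxy_pass http://127.0.0.1:8000;
    proxy_http_version 1.1;
    # Required for the socket.io websocket upgrade:
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```

With no URI on `proxy_pass`, nginx forwards the full request path (including the /chainlit prefix) to the upstream, which matches the upstream URLs visible in the error log later in this thread.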
#1252 could be the same problem.
@stephenrs Thanks for the additional feedback. If I understand correctly: with Chainlit mounted on a subpath, under some conditions the frontend sometimes uses the wrong URL to connect to the websocket. Does that sound correct? Could you perhaps supply a minimal test project (Git repo, ZIP, or whatever) so we can replicate and work on the issue? I'm not sure whether #1252 is the same issue; it's hard to assess from the information provided. But I do think #1260 tries to address the same issue. I would love to see #1223 implemented, to get rid of …
@dokterbob I've attached a zip file containing 3 files:
I used the OAI Assistants cookbook example as my starting point, so you might recognize most of the code. I also recently added the CORS stuff to the nginx config to make sure CORS wasn't the problem, but it didn't make a difference, and 1.1.402 doesn't need it. Seems like the recent changes to server.py might be to blame. Here's the log output from the startup of both the default and nginx-proxy approaches: chainlit run path/to/test_connect.py --port 8000 -d (avoids nginx, connects fine)
chainlit run path/to/test_connect.py --port 8000 -d --root-path /chainlit (nginx proxies requests, fails to connect)
Hope this helps. Let me know if you need anything else.
Thanks @stephenrs for providing the feedback and the test project. However, I am unable to run it, as it seems to require credentials and setup I don't have. I will not ask you to send me your OpenAI API key, and setting up an assistant for an issue unrelated to assistants is really not in scope. Hope you understand, although it would really help to solve this issue, and I definitely think that adding an example nginx config to the deployment docs is a Good Thing(tm). I am also not aware what … is.

Ideally, I would love a unit or e2e test failing on the specific issue you're having. Second to that: very specific instructions to replicate your issue (clear steps), including a test project with minimal dependencies, plus some guidance on how to recognise the issue. For example, while your initial logs showed 404s and incorrect pathnames, the logs you posted above only have 200s and correct pathnames.

Thanks for helping us move this forward!
I guess I assumed you would have test accounts with the LLM providers to facilitate investigating related problems (folks don't seem likely to publicize their OAI/Assistant credentials when problems arise). Also, since the logs give no indication of what is failing (what causes the app to decide it can't reach the server, or whether it has anything to do with connecting to an assistant), I thought a test app a little closer to a real-world scenario might be helpful.
As above, I wasn't sure either, but since it's CL's code I guess I didn't think it would be problematic :) The EventHandler handles streaming responses, and I got it from the example project here: https://github.com/Chainlit/openai-assistant In any case, I've discovered that you don't need a test app to reproduce the issue. You can observe it with the included hello.py target by specifying a root path when you run it. There are steps to reproduce in the original report, but it's actually simpler, as below (from the project root):
Instead of "What is your name?", when my browser opens to http://localhost:8000/bob it says "Could not reach the server." Running "poetry run chainlit run chainlit/hello.py" works as expected, as does specifying a root path in 1.1.402. So, it doesn't appear to be related to the nginx proxy.
I realized that the first time I posted the logs, I had another tab (or tabs) open that was also trying to contact the CL server. The most recent logs represent a "clean" test. Sorry for the confusion.
Quick comment (haven't yet had time to read your response above). Could the (undocumented) … be related?
I'm having the same issue but using …
Sweet, this is the information I need. We get lots of feedback and PRs, and due to the huge number of use cases and integrations supported (and the lack of test coverage thus far), our challenge is keeping track of and switching between use cases. That's why clear reproduction steps and/or a demo project really help. @Tug I found your description a bit hard to understand. Please, if you can, explain the issue you're facing a bit more clearly. Thanks!
The error is the same but the test case is different; I'll open another issue 👍 Edit: opened #1317
@dokterbob I noticed that you changed the title of this issue, but as I've commented above, this is not related to the proxy, because the proxy is not involved in the latest reproduction steps I noted. The root path seems to be broken with or without a proxy, as @Tug is experiencing. I initially provided a more complex test case involving a proxy because I assumed it wouldn't be possible for something so fundamental and globally affecting to be released without testing. I trusted "--root-path", but it appears to be broken in every case.
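As background on what a root path is supposed to do (a generic sketch, not Chainlit's actual implementation): the frontend should prefix the configured root path onto every HTTP and websocket request path, and the symptom reported above is that some requests get the prefix while others don't. The helper name below is hypothetical:

```python
def with_root_path(root_path: str, route: str) -> str:
    """Prefix a deployment root path onto an app route.

    Hypothetical helper illustrating the expected behaviour of a
    --root-path setting; not Chainlit's actual code.
    """
    root = root_path.rstrip("/")
    if not route.startswith("/"):
        route = "/" + route
    return (root + route) if root else route
```

Every request, including the socket.io polling and websocket URLs, should pass through the same prefixing; any request built without it will miss the proxy's location block and fail, which would explain the mix of correct and incorrect pathnames in the logs.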
Looks like this is the same thing: #1313. So it seems anyone who uses --root-path should freeze on 1.1.402 until this project stabilizes.
I am assuming this is the same as #1313, and thus addressed by #1337, so I'm closing this one. If the issue persists, please provide as minimal a replication case as possible, as I have not been able to replicate the issue based on the instructions here.
Describe the bug
The hello.py script works fine for me with the latest build, but my test app never connects to the server. The test app works fine with v1.1.402.
Nothing appears in the debug log that indicates a problem, as follows:
However, I'm using nginx as a reverse proxy, and the nginx error log shows this:
[error] 72918#0: *5798 kevent() reported that connect() failed (61: Connection refused) while connecting to upstream, client: 1.1.1.1, server: server.name, request: "GET /chainlit/ws/socket.io/?EIO=4&transport=polling&t=P6CRLYY HTTP/1.1", upstream: "http://127.0.0.1:8000/chainlit/ws/socket.io/?EIO=4&transport=polling&t=P6CRLYY", host: "server.name", referrer: "https://server.name/chainlit"
To Reproduce
Expected behavior
The connection to the server should succeed and the input box should be enabled.
Desktop (please complete the following information):