flow control and discard watermark #2295
Conversation
Did the state tests for ENQ and, well, we cannot rely on a C0 byte for this, as C0 bytes are typically ignored or treated as valid payload in the subparser states OSC, DCS, APC, PM and SOS. To keep going with a simple one-byte in-band variant we would have to use an unused C1 byte. The hottest candidates for this are PU1 or PU2 (meant for private usage). Alternatively we could go with a totally custom sequence (more bytes allowed), but we would have to patch the parser to carry forward the previous state so it can be re-applied afterwards.

Is it already time to dump the idea of handling this at the VT parser level and go with the transport level instead? Well, it kind of belongs there and not in the VT parser, but that would require us to introduce a tiny "protocol" to wrap the payload and to implement a backpropagation from the internal write buffer states. By doing it right at the parser level we can skip this complexity and perf penalty.

Hmm. To circumvent the perf penalty of wrapping every payload chunk in some custom protocol, we could go back to our original idea of a one-sided report: whenever xterm.js finishes processing some number of bytes, it just announces that to the backend. The backend can then approximate whether xterm.js is too far behind and pause/resume the pty.

Last but not least, we should also take into account that other things might want to "abuse" the backend-frontend connection for their stuff (like zmodem, or embedding ideas like #2300) - this has to work for those too, without complicating things too much on the xterm.js and backend side.
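The one-sided report idea above could be sketched roughly like this on the backend: track bytes sent versus bytes the frontend reports as processed, and pause the pty once the gap exceeds a high watermark. The class and watermark names here are illustrative assumptions, not xterm.js or node-pty API:

```typescript
// Illustrative sketch only: the backend compares bytes written to the
// websocket against the byte count the frontend reports back, pausing
// the pty when the frontend falls too far behind.
class FlowMonitor {
  private sent = 0;
  private processed = 0;
  public paused = false;

  constructor(
    private highWater: number,          // pause above this lag
    private lowWater: number,           // resume below this lag
    private pausePty: () => void,       // hypothetical pty.pause() hook
    private resumePty: () => void,      // hypothetical pty.resume() hook
  ) {}

  // called for every chunk written towards the frontend
  onSend(bytes: number): void {
    this.sent += bytes;
    if (!this.paused && this.sent - this.processed > this.highWater) {
      this.paused = true;
      this.pausePty();
    }
  }

  // called when the frontend announces its total processed byte count
  onReport(totalProcessed: number): void {
    this.processed = totalProcessed;
    if (this.paused && this.sent - this.processed < this.lowWater) {
      this.paused = false;
      this.resumePty();
    }
  }
}
```

This keeps all the accounting on the backend; the frontend only has to emit a monotonically growing counter.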
@Tyriar Done with the basic rewrite. What we have now:
With these changes the inner write states can be tracked. Now it's easy to decide whether the terminal is under high data pressure and falling behind, so flow control can be implemented. The demo implements flow control as follows:
To avoid problems on implementors' side, the flow control setting in the attach addon is disabled by default. New Protocol: About the ENQ: Well, slowly getting there, up for a review. Edit: Btw, foreign things like zmodem transported over the same websocket should be easy now. The data sink (zmodem) would just have to announce data handling back the way xterm.js does, thus triggering an ACK reply every n-th byte, so the server part can do the ACK accounting as with a normal terminal stream.
@Tyriar Looked further into the UTF-8 problem with arbitrary data (#2326) and came to the conclusion that we should give up on string-based transport for data from browser to server.js, to avoid the (faulty) UTF-8 conversions. Thus I created a binary version of the protocol.
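The gist of a binary transport is to encode once on the browser side and ship raw bytes, so no lossy string round-trip can happen in between. The framing below (a one-byte message-type prefix) is an assumption for illustration, not the actual protocol from this PR:

```typescript
// Illustrative sketch: encode terminal input to UTF-8 bytes exactly once
// and prepend a hypothetical message-type byte, sending the result as a
// binary websocket frame instead of a string.
const MSG_DATA = 0x00; // hypothetical message type for terminal data

function encodeInput(data: string): Uint8Array {
  const payload = new TextEncoder().encode(data);
  const frame = new Uint8Array(payload.length + 1);
  frame[0] = MSG_DATA;
  frame.set(payload, 1);
  return frame;
}

// usage sketch: socket.binaryType = 'arraybuffer'; socket.send(encodeInput(data));
```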
Yes, let's close this. We also need to remove some stuff in node-pty since our thinking changed, right?
Second attempt to solve the flow control issues. Unlike the first attempt, this one relies on ENQ --> ACK cycles:
xterm.js handles the ENQ request in-band with the escape sequence parser, so a response guarantees that the preceding data has already been processed. With this, the server side can implement strategies to measure data throughput and pause the pty if needed.
The demo server implements a simple pending-ACK strategy: every n data bytes it sends an ENQ and watches for ACK responses. If the number of pending requests hits a limit, the pty gets paused.
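The pending-ACK strategy could be sketched like this. It is not the demo server's actual code; the class name, parameters, and pause/resume hooks are illustrative assumptions:

```typescript
// Illustrative sketch of the pending-ACK strategy: interleave an ENQ
// into the output every `enqEvery` bytes, count outstanding ENQs, and
// pause the pty once too many ACKs are pending.
const ENQ = '\x05';

class PendingAckFlow {
  private sinceEnq = 0;
  private pending = 0;
  public paused = false;

  constructor(
    private enqEvery: number,           // bytes between ENQ probes
    private maxPending: number,         // pause threshold
    private write: (data: string) => void,  // write towards the frontend
    private pausePty: () => void,
    private resumePty: () => void,
  ) {}

  // forward pty output, injecting ENQ probes along the way
  onPtyData(data: string): void {
    this.write(data);
    this.sinceEnq += data.length;
    while (this.sinceEnq >= this.enqEvery) {
      this.sinceEnq -= this.enqEvery;
      this.pending++;
      this.write(ENQ);
    }
    if (!this.paused && this.pending > this.maxPending) {
      this.paused = true;
      this.pausePty();
    }
  }

  // called whenever an ACK (the answerback) arrives from the frontend
  onAck(): void {
    if (this.pending > 0) this.pending--;
    if (this.paused && this.pending <= this.maxPending) {
      this.paused = false;
      this.resumePty();
    }
  }
}
```

Because the ACK is produced by the parser in-band, each decrement proves that all bytes before the corresponding ENQ were actually processed.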
Note: The request message is hardcoded to ENQ in xterm.js; the response message can be customized via the option answerbackString (as suggested by the xterm docs). Again this PR comes with a discard parachute to avoid crashing the browser. It is hardcoded to ~50MB.