Ability to wait for a response from a server-generated emit() #194

Closed
bodgit opened this issue Jan 4, 2016 · 11 comments


bodgit commented Jan 4, 2016

Hi,

Is it at all possible to wait for a response from a server-generated emit()? I'm using the example from the documentation to emit an event based on a regular HTTP request, but I can't see an obvious way to wait for any sort of response/callback before returning the regular HTTP response:

@app.route('/ping')
def ping():
    socketio.emit('ping event', {'data': 42}, namespace='/chat')
    # wait a defined period for 'pongs', either x seconds or y responses, etc.
    return 'ok' # or maybe 'failed' if nothing answered

Granted, it's a contrived example, but is something like this possible?

Whilst typing this it occurred to me that I could do something like this:

@socketio.on('pong event')
def pong_handler(json):
    # Pop response into some sort of store
    pass

@app.route('/ping')
def ping():
    socketio.emit('ping event', {'data': 42})
    # Poll store for x seconds or until y responses have been received
    return response

But I'm not sure how I'd do that and I'd rather not require an external dependency if at all possible.
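
Roughly, I imagine something like this, where the request_id field, the 10-second window and the poll interval are all made up, and socketio.sleep() is used so the polling loop yields back to the server:

import time
import uuid
from collections import defaultdict

from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)

pending = defaultdict(list)  # request id -> list of pong payloads

@socketio.on('pong event', namespace='/chat')
def pong_handler(json):
    # Pop the response into the store, keyed by the id that was emitted
    pending[json['request_id']].append(json)

@app.route('/ping')
def ping():
    request_id = str(uuid.uuid4())
    socketio.emit('ping event', {'data': 42, 'request_id': request_id},
                  namespace='/chat')

    # Poll the store for up to 10 seconds, or until at least one pong arrives
    deadline = time.time() + 10
    while time.time() < deadline:
        if pending[request_id]:
            pending.pop(request_id, None)
            return 'ok'
        socketio.sleep(0.1)

    pending.pop(request_id, None)
    return 'failed'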

miguelgrinberg (Owner) commented

The idea of waiting isn't compatible with asynchronous frameworks, where everything is event based. Have you tried using the callback feature for this? That is part of the SocketIO protocol, it allows you to receive an acknowledgement that a message was received by the other side. This acknowledgement can include one or more arguments.

On the server side:

def ack(value):
    if value != 'pong':
        raise ValueError('unexpected return value')

@app.route('/ping')
def ping():
    socketio.emit('ping event', {'data': 42}, namespace='/chat', callback=ack)
    return 'ok'

Then on the client:

socket.on('ping event', function (data, fn) {
    fn('pong');
});

The callback function fn in the client is connected to the callback function ack in the server, so when the client calls its callback, the server automagically gets an equivalent call.


bodgit commented Jan 5, 2016

I'm using the Python socketIO-client library as a client, and it took me a while to work out how to make it respond to server-emitted callbacks, but I managed to get it responding with:

from socketIO_client import SocketIO, BaseNamespace, find_callback

class Namespace(BaseNamespace):

    def on_ping(self, *args):
        callback, args = find_callback(args)
        # Do stuff with remaining args
        callback('pong')

with SocketIO('localhost', 8082, Namespace) as socketio:
    socketio.wait()

So now the callback is received by the server, but by that point the server has already returned the regular HTTP response back to the client. (I also don't see much value in the exception; if I raise one in the callback handler it doesn't appear to do anything.)

I guess because I'm using this particular client, I saw its socketio.wait() and socketio.wait_for_callbacks() methods, assumed they might be part of the spec, and was looking for something similar in your extension.

miguelgrinberg (Owner) commented

> but by that point the server has already returned the regular HTTP response back to the client

Right. The whole point of using Flask-SocketIO is that you are not constrained by the HTTP request-response dynamics. This is actually a feature, not a limitation. Normally applications that use Socket.IO do not need to send HTTP requests anymore once the socket connection is established.

You haven't provided enough information for me to understand what you are trying to do, but to me, mixing HTTP requests and Socket.IO events seems like the wrong approach.


bodgit commented Jan 5, 2016

Ultimately, I have some remote HTTP APIs that I'd like to access directly, but I can't because the firewall doesn't accept inbound connections and the APIs are in a different administrative domain.

However, I can place an agent next to the APIs which can connect outbound over HTTP(S), so I have written a small Flask app using your Flask-SocketIO extension that accepts connections from this agent (which I've also written), giving me a two-way connection punched through the firewall. The Flask app has a catch-all regular HTTP route that serialises each request and emit()'s it down the WebSocket connection to the agent, which makes the actual HTTP request to the remote API, serialises the response, and emit()'s it back for the Flask app to return to the original user, with a bit of massaging of the response headers where necessary.

So it's basically an HTTP reverse-ish proxy. It's entirely possible I didn't specifically need SocketIO; however, the only other Flask extension I could find that integrates WebSockets was Flask-Sockets, and yours is by far the nicer and better-maintained of the two 😄

The only bit I was stuck on was getting the response back to the original user. I've kludged it for now by using a Redis instance as a temporary response store: after the Flask app emit()'s the request, it polls Redis for the response, which gets placed there by the SocketIO handler in the app. If a response doesn't appear within a reasonable time, the Flask app returns a 504 error just like a normal proxy would.
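
In case it helps anyone, the Redis kludge boils down to roughly this, where the proxy: key prefix, the 10-second window and the body/status shape of the serialised response are just placeholders:

import json
import time
import uuid

import redis
from flask import Flask, abort, request
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)
store = redis.Redis()

@app.route('/', defaults={'path': '/'})
@app.route('/<path:path>')
def proxy(path):
    u = str(uuid.uuid4())

    # Serialise the incoming request and hand it to the agent over the socket
    socketio.emit('request', {'uuid': u, 'method': request.method, 'path': path})

    # Poll Redis for the agent's response for up to 10 seconds
    deadline = time.time() + 10
    while time.time() < deadline:
        raw = store.get('proxy:' + u)
        if raw is not None:
            store.delete('proxy:' + u)
            resp = json.loads(raw)
            return resp['body'], resp['status']
        socketio.sleep(0.1)

    abort(504)  # no response in time, just like a normal proxy would return

@socketio.on('response')
def response_handler(data):
    # The agent echoes the request UUID back along with the serialised response
    store.set('proxy:' + data['uuid'], json.dumps(data), ex=30)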

miguelgrinberg (Owner) commented

Nice, it's a very interesting and novel use of the extension that I haven't seen before. I would have probably attempted to do this with SSH tunnels, but I have to take back my assessment that you are doing it wrong; I think your solution can be made to work.

When your Flask app receives an HTTP request to be sent over the firewall, it needs to block until the response is available, as you correctly described above. The problem is that the emit Socket.IO call is non-blocking.

A possible solution that you can investigate is to use events to handle the blocking part. When a request arrives, you can create an event, somehow associate it with the request (maybe by assigning the request a UUID), and then do the emit(). Since the emit is non-blocking, once you make the call you will wait on the event created specifically for this request. Eventually, a socket handler in the Flask app will be called from the other side of the firewall, with the response to the request. This handler needs to also receive the UUID from the originating request. With the UUID you can locate the event object, and at that point you can signal it. The main request thread will then wake up and be able to collect the response and send it back via HTTP.

I hope this makes sense. I'm pretty sure this will work well, but maybe you should investigate whether you can open up this API using tunnels; that is probably going to be much simpler.


bodgit commented Jan 6, 2016

Thanks Miguel, events seem to be the way to go. I had already poked about with the eventlet library to monkey-patch the standard library, as my original timeout code using time.sleep() wasn't cooperating properly without it. Here's a much stripped-down example of what I've ended up with:

from eventlet import event
from eventlet.timeout import Timeout
from flask import Flask, Response, abort, request
from flask_socketio import SocketIO
import uuid

app = Flask(__name__)
socketio = SocketIO(app)

# Outstanding requests, keyed by UUID. No locking needed under eventlet's
# green threads, as context switches only happen at blocking calls.
events = {}

@app.route('/', defaults={'path': '/'})
@app.route('/<path:path>')
def proxy(path):

    u = str(uuid.uuid4())

    req = {
        'uuid': u,
        'method': request.method,
        # etc.
    }

    # Register the event before emitting so the response can't arrive
    # before anyone is waiting for it
    e = events[u] = event.Event()

    socketio.emit('request', req, callback=response, ...)

    timeout = Timeout(10)
    try:
        resp = e.wait()
    except Timeout:
        abort(504)
    finally:
        events.pop(u, None)
        timeout.cancel()

    return Response(resp[...])

def response(data):
    # Wake up whichever green thread is waiting on this request's event
    try:
        e = events[data['uuid']]
        e.send(data)
    except KeyError:
        pass

@socketio.on('response')
def response_handler(data):
    response(data)

if __name__ == '__main__':
    socketio.run(app, debug=True)

I was a bit nervous about just using a global dict for tracking the requests without some sort of locking around it, but from what I can tell from reading the docs, it should be safe since eventlet uses green threads. I've pushed concurrent requests at it and it doesn't seem to deadlock or get the responses jumbled up, so it seems to work.

The only thing that didn't work reliably was using a callback to return the response rather than a separate event; more often than not the callback handler never seems to fire, but then it does work on occasion. Using a separate event always seems to work, and the code above handles both ways of returning the response.

miguelgrinberg (Owner) commented

> I was a bit nervous about just using a global dict for tracking the requests without some sort of locking around it

Yeah, for this type of environment it is safe because you have full control of when and where the context switches happen.

> The only thing that didn't work reliably was using a callback to return the response rather than a separate event

This is interesting, because callbacks are really nothing more than events that use a different label in the packet that transports them. I'll build something similar to your example to test them; maybe there is something going on there that I need to correct.


bodgit commented Jan 6, 2016

> This is interesting, because callbacks are really nothing more than events that use a different label in the packet that transports them. I'll build something similar to your example to test them; maybe there is something going on there that I need to correct.

I think I managed to narrow it down: a given instance of the server might or might not work properly, but whichever behaviour it had wouldn't change for the lifetime of that instance. So occasionally the server would respond to callbacks every time, but if I killed it and started a fresh copy it was random which behaviour the new instance had. The not-working behaviour was by far the more common case, though.


bartkl commented Nov 9, 2017

I'm trying to use (more or less) the pattern described by @bodgit, but for some reason the eventlet event wait() call seems to block even incoming Flask-SocketIO events. That prevents the handler from triggering, so no result is ever sent to the eventlet event, the timeout exception fires, and all is not well.

Could someone please help me out? Let me know what kind of information is needed and I'll post it.
Thanks!

miguelgrinberg (Owner) commented Nov 9, 2017

@bartkl Yes, in its default configuration, events coming from a client are delivered in serialized form, which means that one event handler needs to return before the next event can be dispatched. This is to protect against race conditions and other potential synchronization problems when two or more events for the same client run in parallel.

You have two options. You can disable the serialized dispatching of events by passing async_handlers=True in the SocketIO constructor. This will allow dispatching of events while a previous event is stuck on a wait() call.

A preferable option, in my opinion, is to start a background task that performs the wait instead of doing it in the event handler, allowing the handler to return.
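
As a rough sketch of that second option, with the event names, the payload and the socketio.sleep() stand-in for the real wait all just placeholders:

from flask import Flask, request
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)  # or SocketIO(app, async_handlers=True) for the first option

@socketio.on('start work')
def start_work(data):
    # Grab the client's session id while still inside the handler, then hand
    # the waiting off to a background task so this handler can return
    socketio.start_background_task(do_the_wait, request.sid, data)

def do_the_wait(sid, data):
    # Do the blocking wait here instead of in the event handler
    socketio.sleep(5)  # stand-in for waiting on an event, polling a store, etc.
    socketio.emit('work done', {'result': 42}, room=sid)

if __name__ == '__main__':
    socketio.run(app)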


bartkl commented Nov 9, 2017

Thanks, that's great info!
