Session not closed after disconnect #447
I can't really comment on the state of your Redis set since that is controlled by your application, but if you think the connect/disconnect handlers are invoked more times than expected, please provide a complete example app I can use to verify the problem.
Thanks for the quick response! I will provide the demo we use. We tried to reproduce the problem, but it failed. It only appears after running for a long time (24 hours or more).
Pipfile:

[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true

[dev-packages]

[packages]
uvicorn = "*"
python-socketio = "*"
aioredis = "*"

[requires]
python_version = "3.6"

server.py:

import json
import socketio
import uvicorn
SOCKET_IO_PORT = 5006
# Debug port; the service port is set in docker-compose.yml
sio = socketio.AsyncServer(async_mode='asgi', cors_allowed_origins="*")
def get_jwt_claims_from_environ(environ):
    bearer_token = environ.get('HTTP_AUTHORIZATION', '')  # type: str
    return bearer_token
connect_count = 0

@sio.event
async def connect(sid, environ):
    global connect_count
    connect_count += 1
    print('connect count:', connect_count)
    token = get_jwt_claims_from_environ(environ)
    async with sio.session(sid) as session:
        if token:
            session['token'] = token
    # raise ConnectionRefusedError(f'Duplicate connect {sid}')
    return False
disconnect_count = 0

@sio.event
async def disconnect(sid):
    global connect_count
    connect_count -= 1
    print('disconnect count:', connect_count)
    async with sio.session(sid) as session:
        print(session)
def get_online_user(sio):
    online_user = []
    for s in sio.eio.sockets.values():
        session = s.session.get('/', {})
        if session.get('token') is not None:
            if session['token']:
                online_user.append(session['token'])
    return online_user
async def other_asgi_app(scope, receive, send):
    assert scope['type'] == 'http'
    status = 200
    online_user_from_session = get_online_user(sio)
    body = {
        'online_user_from_session_count': len(online_user_from_session),
        'online_user_from_session': online_user_from_session,
        'online_user_from_session_set_count': len(
            set(online_user_from_session)),
        'online_user_from_session_set': list(set(online_user_from_session)),
    }
    await send({
        'type': 'http.response.start',
        'status': status,
        'headers': [
            [b'content-type', b'application/json'],
        ]
    })
    await send({
        'type': 'http.response.body',
        'body': json.dumps(body).encode(),
    })
app = socketio.ASGIApp(sio, other_asgi_app=other_asgi_app)
if __name__ == "__main__":
    uvicorn.run(
        app,
        host="0.0.0.0",
        port=SOCKET_IO_PORT,
        log_level="info",
        access_log=True,
        use_colors=True,
    )

client.html:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Document</title>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/socket.io/2.3.0/socket.io.slim.dev.js"></script>
</head>
<body>
    <script>
        const token = new Date().toISOString()
        const socket = io('http://localhost:5006', {
            transportOptions: {
                polling: {
                    extraHeaders: {
                        'Authorization': token
                    }
                }
            },
        });
        console.log('token', token)
    </script>
</body>
</html>

visit when open
It still has a session when
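For reference, the JSON summary served by other_asgi_app can be fetched like this; a minimal sketch, assuming the demo server above is running on localhost:5006:

import json
from urllib.request import urlopen

# Any non-Socket.IO path is routed to other_asgi_app by socketio.ASGIApp,
# so a plain GET returns the online-user summary as JSON.
with urlopen('http://localhost:5006/') as resp:
    print(json.dumps(json.load(resp), indent=2))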
Did you wait a minute after closing the browser? You are using polling, where disconnections cannot be detected immediately; it can take up to a minute.
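For anyone tuning this: the detection window is governed by the server's ping settings, which can be shortened when the server is created. A minimal sketch (the values are only illustrative, not recommendations):

import socketio

# Shorter ping cycle so dropped polling clients are noticed sooner.
sio = socketio.AsyncServer(
    async_mode='asgi',
    cors_allowed_origins='*',
    ping_interval=10,  # seconds between pings
    ping_timeout=5,    # seconds to wait for a reply before closing the socket
)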
In the server,
I think, when
Okay, I think I understand now. I'll look into it.
Some error messages in the log may be useful for locating the bug.
Hello!

f.write("Opened sockets: {}".format(len(socketio.server.eio.sockets)))
f.write("\n")
for sid, socket in socketio.server.eio.sockets.iteritems():
    f.write("SID: {}\n".format(sid))
    f.write("State: connected {} closing {}, closed {}\n".format(socket.connected, socket.closing, socket.closed))
    f.write("Socket Queue: size {} \n\n".format(socket.queue.qsize()))

And the result looked quite sad, like so (don't mind the queue size, I'm broadcasting quite a lot of data, so no surprises here):

Opened sockets: 30
SID: 68737ede0359448e8084c0e3c7b25b2e
State: connected True, upgrading False, upgraded False, closing True, closed True
Socket Queue: full False, empty False, size 169
SID: dc47bce25482424aac9a3a521fff30cf
State: connected True, upgrading False, upgraded False, closing True, closed True
Socket Queue: full False, empty False, size 135
SID: 3abd0a4f07dc491ea7f836d1dd4d65e1
State: connected True, upgrading False, upgraded False, closing True, closed True
Socket Queue: full False, empty False, size 137
SID: 24289a8c680449599600b9da6512b7e1
State: connected True, upgrading False, upgraded False, closing True, closed True
Socket Queue: full False, empty False, size 154
SID: 95c1437242cd485a9d9a922310a2a60d
State: connected True, upgrading False, upgraded False, closing True, closed True
Socket Queue: full False, empty False, size 139
...

The worst part is that even when the Selenium script is stopped and all the browsers are closed, those sockets are not going anywhere. I can, of course, clean them up manually via this code:

sessions = socketio.server.eio.sockets.keys()
for sid in sessions:
    try:
        # _get_socket() drops a socket from the table (and raises KeyError)
        # when it is already closed, so this loop purges the stale entries.
        socketio.server.eio._get_socket(sid)
    except KeyError:
        pass

But this just doesn't seem to be the right thing to do. My setup is straightforward
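One way to avoid sprinkling that cleanup around is to run it periodically from a background task. A minimal sketch, written against the AsyncServer demo shown earlier (an assumption; this commenter's setup looks like Flask-SocketIO, where `sio.eio` is spelled `socketio.server.eio`), and relying on the same private `_get_socket()` behaviour the workaround above uses:

async def purge_closed_sockets():
    while True:
        # Copy the keys because _get_socket() may delete entries as we go.
        for sid in list(sio.eio.sockets.keys()):
            try:
                sio.eio._get_socket(sid)
            except KeyError:
                pass  # socket was already closed and has now been removed
        await sio.sleep(60)

# Start it from a running event loop, e.g. inside the connect handler:
# sio.start_background_task(purge_closed_sockets)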
I've been looking into why my application code doesn't close old connections. It seems this is the very same issue I'm facing. Any update on this?
server.py
version:
start:
nginx conf:
When a client connects, I store user_ID in the session and in a Redis set. If user_ID already exists in the Redis set, I refuse the connection. When the client disconnects, I remove user_ID from the Redis set.
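The actual Redis code isn't shown in the issue, so the following is only a sketch of that flow as described, assuming aioredis 1.x (per the Pipfile), a hypothetical ONLINE_KEY set name, and the Authorization header as the user ID (as in the demo server):

import aioredis
import socketio
from socketio.exceptions import ConnectionRefusedError

ONLINE_KEY = 'online_users'  # hypothetical Redis set name
sio = socketio.AsyncServer(async_mode='asgi')
redis = None

async def get_redis():
    # Lazily create the aioredis 1.x pool (assumption: Redis on localhost).
    global redis
    if redis is None:
        redis = await aioredis.create_redis_pool('redis://localhost')
    return redis

@sio.event
async def connect(sid, environ):
    user_id = environ.get('HTTP_AUTHORIZATION', '')
    r = await get_redis()
    if await r.sismember(ONLINE_KEY, user_id):
        # Duplicate login: refuse the connection.
        raise ConnectionRefusedError('duplicate user_ID')
    await r.sadd(ONLINE_KEY, user_id)
    async with sio.session(sid) as session:
        session['user_id'] = user_id

@sio.event
async def disconnect(sid):
    async with sio.session(sid) as session:
        user_id = session.get('user_id')
    if user_id:
        r = await get_redis()
        await r.srem(ONLINE_KEY, user_id)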
I find that some user_IDs no longer exist in the Redis set but still exist in sessions. I get the sessions using the following code:
Sometimes there are duplicate user_IDs in the sessions.