High throughput fixes for csma and sixlowpan #648
Conversation
This is really good. We've seen similar problems on platforms with limited numbers of queuebufs, and this looks like a very nice way to solve it! A few quick comments:
I thought they were used in the debug trace. I will remove them and maybe include them in a CSMA_STAT, similar to RPL_STAT, in a later PR. What do you think?
It's easy to miss one add or free in queuebuf.c (especially with WITH_SWAP), so counting the used memb entries is safer. If you think it's better to use a counter per queuebuf type, I will update it accordingly.
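For reference, a minimal sketch of counting the used entries of a MEMB pool directly, assuming the per-entry count array kept by Contiki's lib/memb.h (the helper name is hypothetical; newer trees also provide memb_numfree()):

```c
#include "lib/memb.h"

/* Hypothetical helper: count how many entries of a MEMB pool are
 * currently allocated. MEMB keeps a per-entry reference count, so a
 * non-zero count means the entry is in use; summing these avoids
 * relying on manual inc/dec counters around every add/free. */
static int
memb_used_count(const struct memb *m)
{
  int i, used = 0;
  for(i = 0; i < m->num; i++) {
    if(m->count[i] != 0) {
      used++;
    }
  }
  return used;
}
```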
Agreed :)
Travis is failing because we are again overflowing the ROM on the Sky platform:
In principle, this pull request looks good (and solves a problem that we've also seen from time to time). But the global variables (and their naming) in this commit shouldn't go in: cetic@9562c9c. Removing that commit would make this 👍 from me!
Also, rebasing to current master might fix the sky compile problem. |
Rebased, functions renamed as suggested, CSMA statistics commit removed, and Travis is OK :)
👍 |
High throughput fixes for csma and sixlowpan
When traffic is sent to a node at a higher rate than the radio can output, it consumes all the available packet buffers in CSMA; any other traffic is blocked or seriously slowed, and this can break the DAG because DIOs/DAOs are no longer sent in a timely manner. This fix adds a new configuration parameter, CSMA_CONF_MAX_PACKET_PER_NEIGHBOR, limiting the number of packet buffers used by a given neighbour. Its default value is set to MAX_QUEUED_PACKETS so that the default behaviour of CSMA is not modified.
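As an illustration only (the macro name is the one introduced by this PR, the value is arbitrary), the limit could be set from a project-conf.h:

```c
/* project-conf.h (sketch): cap the number of queued packets per
 * neighbour so one fast sender cannot exhaust the CSMA packet pool.
 * The default equals MAX_QUEUED_PACKETS, i.e. unchanged CSMA
 * behaviour; the value 2 below is purely illustrative. */
#define CSMA_CONF_MAX_PACKET_PER_NEIGHBOR 2
```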
If this high-rate traffic is fragmented, it also consumes all the available queuebufs, leading to strange behaviour: only the first fragment is sent, since only one queuebuf at a time can be released by the radio; all the remaining fragments are dropped and the message is never received by the target node. Also, with all the queuebufs in use, almost no other traffic can get through.
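A related mitigation (not part of this PR, just a hedged example using Contiki's standard configuration knob) is to provision enough queuebufs to hold all fragments of a large datagram plus some headroom, e.g. in project-conf.h:

```c
/* project-conf.h (sketch): enlarge the queuebuf pool so that all
 * fragments of a large 6LoWPAN datagram can be queued at once.
 * 8 is illustrative; check available RAM on constrained platforms. */
#define QUEUEBUF_CONF_NUM 8
```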
High-throughput traffic also requires modifications in native-rdc; those will be part of another PR.