Collision detection #31
Basic goal: Spatial-tree-based, bounding-box-based collision detection that reports when objects start or stop being "nearby" each other, and is very fast in normal cases (O(1) overhead per movement when there are no nearby boxes, rather than O(log n)). Since the tree nodes are stationary, it's an (acceptable) flaw that moving objects have a cost of O(distance moved/size). The API for users of this would be something like:
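Hedging heavily, an API with those properties might look something like the following brute-force Python sketch. All names here are illustrative, not from the project, and this only captures the *semantics* (start/stop "nearby" events): it recomputes everything in O(n²) per movement, which is exactly the cost the spatial tree is meant to avoid.

```python
from itertools import combinations

def boxes_nearby(a, b):
    """Axis-aligned boxes as (min, max) tuples per dimension; "nearby" here
    means overlapping, as a stand-in for whatever proximity predicate is used."""
    return all(lo1 <= hi2 and lo2 <= hi1 for (lo1, hi1), (lo2, hi2) in zip(a, b))

class NearbyTracker:
    """Brute-force reference for the reported events, not the tree itself."""

    def __init__(self):
        self.boxes = {}       # object id -> bounding box
        self.nearby = set()   # frozensets of id pairs currently "nearby"

    def move(self, obj_id, box):
        """Set an object's bounding box; return the begin/end events it causes."""
        self.boxes[obj_id] = box
        current = {frozenset((i, j))
                   for i, j in combinations(self.boxes, 2)
                   if boxes_nearby(self.boxes[i], self.boxes[j])}
        events = ([("nearby_begin", p) for p in current - self.nearby]
                  + [("nearby_end", p) for p in self.nearby - current])
        self.nearby = current
        return events
```

The tree-based version should report the same events while doing O(1) work per movement in the common case.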
A few particular issues complicate making this efficient in common cases.
So, here's my clever math for dealing with the issue: Think of a complete tree of power-of-2-aligned boxes. Now, double the size of every box. But don't scale them up around a corner – scale them up around their centers. The original tree had a problem: at high powers of 2, there were single boundaries that were shared between nodes of many different tree levels. But the Double Tree distributes the boundaries evenly over the integers in each dimension. This gives it some very nice properties. In particular:
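In one dimension this is easy to sanity-check numerically: a level-k box [i·2^k, (i+1)·2^k] doubled around its center becomes [(2i−1)·2^(k−1), (2i+3)·2^(k−1)], so its boundaries sit at odd multiples of 2^(k−1) – and every nonzero integer is an odd multiple of exactly one power of two. A quick check (the levels and ranges below are arbitrary choices):

```python
# Doubled level-k boxes have boundaries at odd multiples of 2**(k - 1).

def boundaries(level, lo, hi):
    """All boundaries of doubled level-`level` boxes that lie in [lo, hi]."""
    half = 2 ** (level - 1)
    return {m * half for m in range(-3, hi // half + 3)
            if m % 2 == 1 and lo <= m * half <= hi}

# Every integer in 1..31 is a boundary at exactly one of levels 1..5,
# i.e. the boundaries are distributed evenly over the integers:
for n in range(1, 32):
    owners = [k for k in range(1, 6) if n in boundaries(k, 1, 31)]
    assert len(owners) == 1, (n, owners)
```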
Therefore, here is how the actual data structure can be arranged: I aim to show that you can move an object from one node to a parent, child, or same-size cousin, without knowing in advance whether that node exists, in O(1) operations (plus the acceptable expense of O(number of objects in the nodes that overlap the object's node)). But I'll do that in another comment because this one is long already.
Proof that we can maintain C, D, and E in O(1) time while moving an object in one of those ways:
Proof that we can maintain B in (almost) O(number of overlaps) time while moving an object in one of these ways:
I've just given it a bunch more thinking, and the problem described in the EDIT above is probably intractable, at least with the exact invariants given above. It's hard to visualize, though! My current idea is to expand each node by two-thirds instead of 100%. This makes the tree structure much simpler: if D is a descendant of A, then if D is in A's left subtree, it may overlap with the A-sized node one step to A's left but never overlaps the one to A's right, and if D is in A's right subtree, it may overlap with the A-sized node one step to A's right but never overlaps the one to A's left. This differs from the current situation where, even considering only one dimension, a node can overlap 3 nodes of the same larger size. Also, I don't think it makes sense to keep all larger cousins of larger cousins - I think that'll always cascade to creating log n things at a time. I'm thinking we need to keep only the larger nodes that overlap live nodes...
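That claimed property can be checked in one dimension with exact arithmetic (the sizes and level below are arbitrary choices): expanding a base interval [lo, lo+s] by two-thirds around its center gives [lo − s/3, lo + s + s/3]. Descendants in A's left half then never interior-overlap A's right same-size neighbor, though the leftmost ones do reach into the left neighbor:

```python
from fractions import Fraction as F

def expanded_box(lo, size):
    """[lo, lo + size] grown by two-thirds around its center."""
    return (F(lo) - F(size, 3), F(lo) + size + F(size, 3))

def interiors_overlap(a, b):
    return a[0] < b[1] and b[0] < a[1]

s = 8                            # A has base interval [0, 8)
a_left = expanded_box(-s, s)     # A's left same-size neighbor, base [-8, 0)
a_right = expanded_box(s, s)     # A's right same-size neighbor, base [8, 16)

for size in (1, 2, 4):                 # descendant levels below A
    for lo in range(0, s // 2, size):  # descendants in A's left half [0, 4)
        d = expanded_box(lo, size)
        # a left-subtree descendant never reaches A's right neighbor...
        assert not interiors_overlap(d, a_right)

# ...but the leftmost descendants do reach into A's left neighbor:
assert interiors_overlap(expanded_box(0, 1), a_left)
```

(The deepest left-subtree descendants end exactly at the right neighbor's expanded left edge, touching without overlapping.)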
Okay, here's my current model. Unfortunately I'm not up to making it fully rigorous today, but here goes: Nodes are expanded by two-thirds, as noted above. A "neighbor" of a node A is a node at the same level as A that is exactly one step away from A in one or more dimensions (and zero steps away in the others). All existing nodes maintain pointers to their parent, children that exist, and neighbors that exist. WLOG regarding directions:
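A purely illustrative sketch of the pointer structure just described (the field names are mine, not from any actual implementation): each existing node points at its parent, whichever children exist, and whichever same-level neighbors exist, with neighbor links always created mutually.

```python
from dataclasses import dataclass, field

@dataclass
class TreeNode:
    level: int
    coords: tuple                                   # integer position at this level
    parent: "TreeNode | None" = None
    children: dict = field(default_factory=dict)    # child coords -> TreeNode
    neighbors: dict = field(default_factory=dict)   # offset vector -> TreeNode

    def link_neighbor(self, offset, other):
        """Create mutual neighbor links in O(1).  `offset` has one entry per
        dimension, each in {-1, 0, +1}, with at least one nonzero entry
        ("exactly one step away in one or more dimensions")."""
        self.neighbors[offset] = other
        other.neighbors[tuple(-d for d in offset)] = self
```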
Oh, here's a little complication: how do we know when we are allowed to delete nodes? Perhaps we need to have each node maintain a count of the entities that are currently enforcing its existence (the count gets updated at the same time the node is potentially created, and then updated again, using the same pointers, when an object moves back in the other direction).
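A minimal sketch of that counting rule (names are mine; "deletion" is just a flag here, standing in for unlinking the node from the tree):

```python
class CountedNode:
    """A node that tracks how many entities are enforcing its existence."""

    def __init__(self):
        self.enforcer_count = 0
        self.deleted = False

    def pin(self):
        """An entity starts requiring this node to exist (called at the same
        time the node is potentially created)."""
        self.enforcer_count += 1

    def unpin(self):
        """The entity moves back the other way; when no entity requires the
        node any longer, it is safe to delete."""
        self.enforcer_count -= 1
        if self.enforcer_count == 0:
            self.deleted = True
```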
The trouble with keeping a count is that when you create a new small entity next to an existing small entity, the count has to be updated in (log n) nodes.

Take #N+1: The data structure is a radix tree of live nodes. Thus, there isn't automatically a (log n) cost to creating a new small entity in the middle of nowhere, or having an entity cross over a large boundary.

As an aside, one downside of the radix tree model is this: suppose there are 2 tiny entities far away from each other, and nothing else. When these entities move, they might have to check their closest-existing-ancestor pointer, which points to the shared ancestor, and modify its children; this entangles the events unnecessarily. But maybe we could model the radix tree in a way where each subtree of the shared ancestor is a separate entity, so the shared ancestor doesn't need to be touched most of the time. Decisions about the structure of the data structure can be partially separated from decisions about how it maps to entities.

Just like my last model, nodes are expanded by two-thirds, and a node in a particular quadrant of an ancestor is "potentially overlapping" the in-the-direction-of-that-quadrant neighbors of that ancestor. Live nodes keep mutual references with all other live nodes they are "potentially overlapping".

So next, we must address discoverability of newly overlapped live nodes: similar to (2) from my last model, each live node or branching node keeps a pointer to any ancestor whose midline it touches; those ancestors are required to be explicitly represented in the tree (so that those pointers never have to be updated), but they do NOT require such midline-ancestor pointers themselves, unless they are also live-or-branching (otherwise they would recursively require the whole path to exist), and they keep an explicit count of how many live-or-branching nodes are requiring them to exist in this way (which is trivial to maintain).
If I've analyzed things right, this rule will create only O(1) non-live, non-branching nodes along the link between any two live-or-branching nodes, meaning it doesn't incur any additional asymptotic cost. Thus, even these nodes can find their midline-ancestors (by looking up the tree until they find a real node). Also, a pragmatic optimization might be to only keep such pointers for midline-ancestors that are at least (some constant, maybe 3) levels up. (EDIT: we also have to worry about branches induced by these nodes - those shouldn't count as "branching" for the purposes of the above rule; "branching" should only mean branches between live nodes.)

Next: how shall neighbor pointers be maintained? Gonna post this and then continue in another post...
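The "no automatic (log n) cost" part can be illustrated with a plain one-dimensional compressed (radix) trie – this is just the standard structure, not the full design above, and the names are mine: inserting a key materializes at most one branching node and one leaf, no matter how far the key is from everything else.

```python
BITS = 32   # fixed-width integer keys

class RadixNode:
    def __init__(self, prefix, plen):
        self.prefix = prefix      # key bits; only the high `plen` bits matter
        self.plen = plen
        self.children = {}        # next bit (0 or 1) -> RadixNode

def common_prefix_len(a, b):
    """Number of leading bits (out of BITS) shared by a and b."""
    x = a ^ b
    return BITS if x == 0 else BITS - x.bit_length()

def insert(root, key):
    """Insert a distinct key; return how many nodes were created (always <= 2)."""
    node = root
    while True:
        bit = (key >> (BITS - node.plen - 1)) & 1
        child = node.children.get(bit)
        if child is None:
            node.children[bit] = RadixNode(key, BITS)   # new leaf only
            return 1
        shared = common_prefix_len(key, child.prefix)
        if shared >= child.plen:
            node = child          # key lies below this child; keep descending
            continue
        # Split the edge: one branching node plus one leaf -- O(1) new nodes.
        branch = RadixNode(child.prefix, shared)
        branch.children[(child.prefix >> (BITS - shared - 1)) & 1] = child
        branch.children[(key >> (BITS - shared - 1)) & 1] = RadixNode(key, BITS)
        node.children[bit] = branch
        return 2
```

Maintaining the midline-ancestor counts and "potentially overlapping" links on top of this is where the design above goes beyond the standard structure.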
So, you're a node N, and you want to find your left neighbor (or its nearest existing ancestor). N can find its "midline ancestor for the direction left" in O(1), as discussed. The node we're looking for is in one of that midline ancestor's left quadrants – the one next to N. If that quadrant is empty, we are done. But if it has branches, it might cost us (log n) to follow them back down to N's level – that's no good. It follows that we need to have preemptively "moved the neighbor information downwards" when a branch occurs.

Let's see: can a branch node that's on the midline of a (some constant number of levels)-larger node force the existence of, and mutual links with, its orthogonal neighbor node across the midline? If yes, then the procedure to find your orthogonal neighbor is just "follow parent pointers until you find a branch, follow the neighbor pointer, follow the child pointer" (because your branch forces the existence of the branch on the other side, and if there's a branch on the other side first, it forces the existence of a node on your side that can discover it). This does seem like it would force a rather large number of nodes.

Can we make it so that the neighbors are only forced if they actually have live descendants? That makes it trickier to find the nearest ancestor of a nonexistent neighbor (that node might be in the middle of the log n levels between ourselves and our midline ancestor, and our side might be densely packed with branches). Would those nodes interfere with the process above for finding your midline ancestor in O(1)? Let's see… I think we could maintain pointers that allow us to navigate the "live tree" (the part that doesn't include the extra nodes) – the only reason you couldn't do that is if you needed extra branches, but having extra tree-augmentation-children doesn't change the number of live-tree-children in a subtree, so each node can just maintain two sets of descendant pointers.
(EDIT: No, there's a problem - if you create a new live child of a leaf augmentation node, how does it find its live parent in O(1)? The augmentation nodes can't maintain pointers to their nearest live ancestor, because it could change for a lot of them at a time...) As an aside... trivially, nodes only really need to keep midline ancestor pointers to ancestors that are beyond their direct ancestor within the tree, which could reduce the number of midline-ancestor nodes that are forced to exist?
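For concreteness, here is the naive one-dimensional version of that climb-cross-descend walk, with none of the precomputed neighbor or midline pointers (so it can climb O(log n) levels – the pointer machinery discussed above exists precisely to shortcut this). All names are mine:

```python
class Node1D:
    def __init__(self, parent=None, side=None):
        self.parent = parent
        self.side = side          # 0 = left child, 1 = right child, None = root
        self.children = {}

    def child(self, side):
        """Get or create the child on the given side."""
        if side not in self.children:
            self.children[side] = Node1D(self, side)
        return self.children[side]

def left_neighbor(node):
    """Return the same-level node one step left, its nearest existing
    ancestor if it doesn't exist, or None at the left edge of the tree."""
    steps = 0
    while node.side == 0:                # left child: the neighbor is outside our parent
        node, steps = node.parent, steps + 1
    if node.side is None:
        return None                      # reached the root: nothing to our left
    sibling = node.parent.children.get(0)
    if sibling is None:
        return node.parent               # nearest existing ancestor of the neighbor
    node = sibling
    while steps and 1 in node.children:  # descend along the right edge
        node = node.children[1]
        steps -= 1
    return node
```

With branch nodes forcing neighbor links into existence, the climb stops at the first branch instead of potentially running all the way to the midline ancestor.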
Wild idea: If these paired neighbors have to exist at the same time, can we make them be the "same node" in one or more senses? (Remember that there's not always just a pair - at corners, you have up to 2^dimensions nodes that are all linked)
Let's see... the extreme form of that would be that ALL nodes are (dimensions)-way boundary nodes, so each node has 3^dimensions "children" - the middle one is its own, and the others are shared with its neighbors. So each node also has up to 2^dimensions "parents". And any neighbor of a node N is always a "child" of one of N's "parents". Nice!

Also, we might not even need to do the "expanded node" thing - each level's nodes can simply partition the space, because an entity can only be overlapping (dimensions+1) split-surfaces at levels bigger than it, so there's always a node that will contain it while being no more than (2^(dimensions+1)) times bigger than it. Nice-ish! (We could also get away with making up to (2^dimensions) entries for the entity to keep it within an area 2-times-bigger than the entity, like we did with the old Z-order system from the Lasercake prototype.)

And the equivalent of "midpoint ancestor for a dimension" would be the unique ancestor-level where your node is split between nodes on that level along that dimension. So as an entity moves to an orthogonal neighbor, you just follow the before-and-after "split-level pointers" to find the nodes you have begun and ceased overlapping. Nice! Remains to be seen whether this can support radix-tree-like optimizations to avoid costing (log n) when creating and deleting entities...
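The counting claims in the first part of this are easy to check exhaustively in 2-D (over a small grid of abstract node positions; my own encoding, chosen to match the description): taking children(p) = {2p + e : e in {-1, 0, +1}^d}, every node has 3^d "children" and between 1 and 2^d "parents", and every neighbor of a node N is a "child" of one of N's "parents".

```python
from itertools import product

D = 2
OFFSETS = list(product((-1, 0, 1), repeat=D))

def children(p):
    """The 3**D "children" of node p: its own middle child plus the ones
    shared with its neighbors."""
    return {tuple(2 * pi + ei for pi, ei in zip(p, e)) for e in OFFSETS}

def parents(q):
    """All "parents" of q, found by brute force over a grid that safely
    contains them."""
    return {p for p in product(range(-8, 9), repeat=D) if q in children(p)}

for q in product(range(-4, 5), repeat=D):
    assert len(children(q)) == 3 ** D
    assert 1 <= len(parents(q)) <= 2 ** D
    for u in OFFSETS:
        if u == (0,) * D:
            continue
        neighbor = tuple(qi + ui for qi, ui in zip(q, u))
        # any neighbor of q is a "child" of one of q's "parents"
        assert any(neighbor in children(p) for p in parents(q))
```

(Per dimension, an even coordinate has one parent and an odd coordinate has two, which is what bounds the parent count by 2^dimensions.)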