
ecs: major rethink & database-aligned design #157

Merged 1 commit into main on Jan 28, 2022

Conversation

@emidoots (Member) commented on Jan 27, 2022

I promised the next blog post in the series would be code, not verbal explanation like this, so I'll likely allude to all of this (and link to this explanation) for how my thinking has developed since the last article. Then it will focus on the code here.

Limitations of our ECS

Previously, we had thought about our ECS in terms of archetypes defined at compile time (effectively arrays of archetype structs with comptime-defined fields as components). I believe that this is likely the most efficient way one could ever represent entities. However, it comes with many limitations, namely that:

You have to define which components your entity will have at compile time: with our implementation, adding/removing components to an entity at runtime was not possible (although declaring components at comptime that had optional values at runtime was). This conflicts with some of our goals:

  • The ability to add/remove components at runtime:
    • In an editor for the game engine, e.g. adding a Physics component or similar to see how it behaves.
    • In a code file as part of Zig hot code swapping in the future, adding an arbitrary component to an entity while your game is running.
    • In more obscure cases: adding components at runtime as part of loading a config file, in response to network operations, etc.

Investigating sparse sets

To find the best way to solve this, I began to investigate sparse sets, which I had seen mentioned in various contexts around ECS implementations. My understanding is that many ECS implementations use sparse sets to relate an entity ID to the dense arrays of components associated with it: components are stored as distinct dense arrays (e.g. an array of physics component values, an array of weapon component values, etc.), and the sparse set maps entity IDs -> indexes within those dense component arrays. weapon_components[weapons_sparse_set[entityID]] is effectively how an entity's weapon component value is looked up, because not every entity is guaranteed to have the same components, so weapon_components[entityID] is not possible.

This of course introduces overhead, not only because of the two array accesses needed to look up a component's value, but also because you may now be accessing weapon_components values non-sequentially, which can easily introduce CPU cache misses. And so I began to think about how to reconcile the comptime-component-definition archetype approach I had written before with the sparse set approach that seems to be popular among other ECS implementations.
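The double indirection described above can be sketched as follows. This is a minimal, hypothetical illustration (the names `weapons_sparse_set`, `no_entry`, and `lookupWeapon` are invented here, not part of the actual implementation):

```zig
const std = @import("std");

const Weapon = struct { damage: u32 };

// Sentinel meaning "this entity has no entry in the dense array".
const no_entry = std.math.maxInt(u32);

// Sparse set: indexed by entity ID; the value is an index into the dense array.
const weapons_sparse_set = [_]u32{ no_entry, 0, no_entry, 1 };
// Dense array: tightly packed Weapon values, one per entity that has one.
const weapon_components = [_]Weapon{ .{ .damage = 7 }, .{ .damage = 12 } };

fn lookupWeapon(entity_id: u32) ?Weapon {
    const idx = weapons_sparse_set[entity_id];
    if (idx == no_entry) return null; // this entity has no Weapon component
    return weapon_components[idx]; // second, possibly non-sequential, access
}
```

Only entities that actually have a Weapon pay for storage in the dense array; the cost is the extra indirection on every lookup.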

Thinking in terms of databases

What helped me was thinking about an ECS in terms of databases, where tables represent a rather arbitrary "type" of entity, rows represent entities (of that type) themselves, and the columns represent component values. This makes a lot of sense to me, and can be implemented at runtime easily to allow adding/removing "columns" (components) to an entity.

The drawback of this database model made the benefit of sparse sets obvious: if I have a table representing monster entities and add a Weapon component to one monster, every monster must now pay the cost of storing such a component, because we've introduced a column, whether they intend to store a value there or not. In this context, having a way to separately store components and associate them with an entity via a sparse set is nice: you pay a bit more to iterate over such components (because they are not stored as dense arrays), but you only pay the cost of storing them for entities that actually intend to use them. In fact, iteration could be faster due to not having to skip over "empty" column values.

So this was the approach I implemented here:

  • Entities is a database of tables.

    • It's a hashmap of table names (entity type names) to tables (EntityTypeStorage).
    • An "entity type" is some arbitrary type of entity likely to have the same components. It's optimized for that. But unlike an "archetype", adding/removing ocmponents does not change the type - it just adds/removes a new column (array) of data.
    • You would use just one set of these for any entities that would pass through the same system. e.g. one of these for all 3D objects, one for all 2D objects, one for UI components. Or one for all three.
  • EntityTypeStorage is a table, whose rows are entities and columns are components.

    • It's a hashmap of component names -> ComponentStorage(T)
    • Adding/removing a component is as simple as adding/removing a hashmap entry.
  • ComponentStorage(T) is one of two things:

    • (default) a dense array of component values, making it quite optimal for iterating over.
    • (optional) a sparsely stored map of (row ID) -> (component value).
  • EntityID thus becomes a simple 32-bit row ID + a 16-bit table ID, and it's globally unique within a set of Entities.

    • Also enables O(1) entity ID lookups, effectively entities.tables[tableID].rows[rowID]
  • Note: When I say "hashmap" above I really mean a Zig array hashmap, which appears to be quite similar to a sparse set and mostly optimal for smaller hashmaps from what I have found.
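The EntityID layout described above can be sketched like this (a hypothetical illustration; the names `EntityID` and `makeEntityID` as written here are assumptions, not the exact implementation):

```zig
const std = @import("std");

// A 32-bit row ID packed with a 16-bit table (entity type) ID, globally
// unique within one set of Entities. Lookup is then effectively
// entities.tables[id.table].rows[id.row], i.e. O(1).
const EntityID = packed struct {
    row: u32,
    table: u16,
};

fn makeEntityID(table: u16, row: u32) EntityID {
    return .{ .table = table, .row = row };
}
```

The packed struct occupies exactly 48 bits, so an entity ID stays small enough to copy around freely.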

Benefits

Faster "give me all entities with components (T, U, V) queries"

One nice thing about this approach compared to other ECS implementations, I think, is that to answer a query like "give me all entities with a 'weapon' component", we can reduce the search space dramatically right off the bat thanks to the entity types: an EntityTypeStorage has fast access to the set of components all entities within it may have set. Now, not all of them will have such a component, but most of them will. We just "know" that without doing any computation; our data is structured to hint this to us. And this makes sense logically, because most entities are similar: buttons, ogre monsters, players, etc. are often minor variations of something, not a truly unique type of entity with 100% random components.
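A minimal sketch of this narrowing step, under the assumption that each table exposes its set of component names (the function `tableMayMatch` is invented here for illustration):

```zig
const std = @import("std");

// A query can discard whole tables up front: if the table's component set
// doesn't contain the wanted component name, no entity in it can match.
fn tableMayMatch(table_components: []const []const u8, wanted: []const u8) bool {
    for (table_components) |name| {
        if (std.mem.eql(u8, name, wanted)) return true;
    }
    return false;
}
```

For example, a query for "weapon" would skip a table of UI buttons entirely, and only then scan the per-entity storage of the tables that remain.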

Shared component values

In addition to having sparse storage for entity ID -> component value relations, we can also offer a third type of storage: shared storage. Because we allow the user to arbitrarily define entity types, we can offer to store components at the entity type (table) level: pay to store the component only once, not per-entity. This seems quite useful (and perhaps even unique to our ECS? I'd be curious to hear if others offer this!)

For example, if you want all entities of type "monster" to share the same Renderer component value, we simply elevate the storage of that component value to the EntityTypeStorage / as part of the table itself, not as a column or sparse relation. This is a mere component name -> component value map. There is no entity ID -> component value relationship involved here; we just "know" that every entity of the "monster" entity type has that component value.
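Shared storage can be sketched as follows (a hypothetical illustration; `MonsterTable` and `rendererOf` are invented names, and the per-entity columns are elided):

```zig
const std = @import("std");

const Renderer = struct { shader_id: u32 };

const MonsterTable = struct {
    row_count: u32, // per-entity columns elided for brevity
    shared_renderer: Renderer, // stored once for the whole table
};

fn rendererOf(table: MonsterTable, row: u32) Renderer {
    _ = row; // every entity of this type shares the same value
    return table.shared_renderer;
}
```

The value is paid for once per table rather than once per entity; any row resolves to the same Renderer with no entity ID -> value map involved.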

Runtime/editor introspection

This is not a benefit of thinking in terms of databases, but this implementation opens the possibility for runtime (future editor) manipulation & introspection:

  • Adding/removing components to an entity at runtime
  • Iterating all entity types within a world
    • Iterating all entities of a given type
      • Iterating all possibly-stored components for entities of this type
      • Iterating all entities of this type
        • Iterating all components of this entity (future)
  • Converting from sparse -> dense storage at runtime

A note about Bevy/EnTT

After writing this, and the above commit message, I got curious how Bevy/EnTT handle this. Do they do something similar?

I found that Bevy has hybrid component storage (pick between dense and sparse; see https://bevyengine.org/news/bevy-0-5/#hybrid-component-storage-the-solution), which appears to be more clearly specified in the linked PR bevyengine/bevy#1525, which also indicates:

hecs, legion, flecs, and Unity DOTS are all "archetypal ecs-es".
Shipyard and EnTT are "sparse set ecs-es".

Is our archetypal memory layout better than other ECS implementations?

One notable difference is that Bevy states about Archetypal ECS:

Comes at the cost of more expensive add/remove operations for an Entity's components, because all components need to be copied to the new archetype's "table"

Update: see #157 (comment)

I've seen this stated elsewhere, outside of Bevy, too. I've had folks tell me that archetypal ECS implementations use an AoS memory layout in order to make iteration faster (where A, B, and C are component values):

ABCABCABCABC

I have no doubt a sparse set is worse for iteration, as it involves accessing the underlying dense arrays of the sparse set non-sequentially (from what I understand). However, I find the archetypal storage pattern most have settled on (AoS memory layout) to be a strange choice. The other choice is an SoA memory layout:

AAAA
BBBB
CCCC

My understanding from data-oriented design (primarily from Andrew Kelley's talk) is that, due to struct padding and alignment, SoA is in fact better: it reduces the size of the data (by up to nearly half, IIRC), which ensures more of it actually ends up in CPU cache despite accessing distinct arrays (which apparently CPUs are quite efficient at).
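The padding argument can be made concrete with a tiny example (the `Particle` struct here is invented purely for illustration):

```zig
const std = @import("std");

// In AoS, every element pays padding up to the alignment of its largest
// field; in SoA, each field lives in its own array with no per-element padding.
const Particle = struct {
    position: f64, // 8 bytes, 8-byte alignment
    alive: bool, // 1 byte; in AoS the element is padded out to 16 bytes total
};

// AoS: bytes per element, padding included.
const aos_bytes_per_element = @sizeOf(Particle);
// SoA: one f64 plus one bool per element, spread across two arrays.
const soa_bytes_per_element = @sizeOf(f64) + @sizeOf(bool);
```

Here AoS costs 16 bytes per element versus 9 for SoA, so the same cache line holds nearly twice as many elements in the SoA layout.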

Obviously, I have no benchmarks, and so making such a claim is super naive. However, if true, it means that our memory layout is not just more CPU cache efficient but also largely eliminates the typically increased cost of adding/removing components with archetypal storage: others pay to copy every single entity when adding/removing a component, we don't. We only pay to allocate space for the new component. We don't pay to copy anything. Of course, in our case adding/removing a component to sparse storage is still cheaper: effectively a hashmap insert for affected entities only, rather than allocating an entire array of size len(entities).

An additional advantage of this is that even when iterating over every entity, your intent is often not to access every component. For example, a physics system may access multiple components but will not be interested in rendering/game-logic components, and with AoS those would "push" data we care about out of the limited cache space.

I'm poking Bevy ECS authors about this to see how they think about this: https://discord.com/channels/691052431525675048/742569353878437978/936125050095034418

Future

Major things still not implemented here include:

  • Multi-threading
  • Querying, iterating
  • "Indexes"
    • Graph relations index: e.g. parent-child entity relations for a DOM / UI / scene graph.
    • Spatial index: "give me all entities within 5 units distance from (x, y, z)"
    • Generic index: "give me all entities where arbitraryFunction(e) returns true"

Signed-off-by: Stephen Gutekanst <[email protected]>

@emidoots (Member, Author) commented:

Is our archetypal memory layout better than other ECS implementations?

And the answer is: Bevy does NOT use an AoS memory layout. The difference between what they're doing and what I've done here is as follows:

  • Bevy: when you add a component C to an entity which currently has (A, B), that entity "moves" from the old (A, B) archetype table to the new (A, B, C) archetype table. If you plan to add a component to 1,000 entities of the same archetype, that involves copying 1,000 entities from the old table to the new one. Their table effectively holds distinct Vector<A>, Vector<B>, and Vector<C> vectors (SoA).
  • What I did here:
    • We represent columns as distinct vectors Vector<?T> and currently pay the cost of that optional bit (but I had plans to remove this with a bitmask)
    • When you add a new component to an entity in ours, the entity doesn't move from one table to another. Instead, the table itself adds a new column (if it didn't already exist) in preparation of the other entities of this type also having that component.
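The "add a column in place" approach from the second bullet can be sketched like this (a hypothetical illustration; `addWeaponColumn` is an invented name, and the real implementation also planned a bitmask to avoid the optional bit):

```zig
const std = @import("std");

const Weapon = struct { damage: u32 };

// The table grows one optional slot per existing row, all initially null,
// so no entity data is copied between tables when a component is added.
fn addWeaponColumn(allocator: std.mem.Allocator, row_count: usize) ![]?Weapon {
    const column = try allocator.alloc(?Weapon, row_count);
    for (column) |*slot| slot.* = null; // no entity has a value yet
    return column;
}
```

The cost is a single allocation of size len(entities) (plus the optional bit per slot), rather than a memcpy of every entity into a new archetype table.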

The tradeoff seems to mostly be one of complexity (Bevy seems simpler), the cost of adding a component to a few entities vs. thousands (memcpy vs. alloc), and the potential for overlapping entities within the same archetype. I asked:

Does Bevy provide any optimization (pools or sorting like EnTT or something) for this case: say you want to query for entities with components (A, B, C), where C is a boolean which represents whether an entity is a monster or player. You only want to iterate player entities (e.g. to send network packets for them at a faster frequency than monster entities or something like that). Obviously you could iterate every entity of that archetype table (players+monsters) and check the boolean component to skip over monsters; curious if there's any additional query optimization here, or this isn't a common use case/problem?

which they intend to solve with indexes:

Not yet. This falls under the big umbrella of "indexes", which are probably two or three major ECS features away

seems reasonable.

@emidoots emidoots merged commit c2c4335 into main Jan 28, 2022
@emidoots emidoots deleted the sg/ecs-take-2 branch January 28, 2022 05:54
emidoots added a commit that referenced this pull request Mar 19, 2022
In the past:

* #156 was the initial ECS implementation detailed in https://devlog.hexops.com/2022/lets-build-ecs-part-1
* #157 was the second major redesign in which we:
    * Eliminated major limitations (e.g. inability to add/remove components at runtime)
    * Investigated sparse sets
    * Began thinking in terms of databases
    * Enabled runtime introspection

Our second revision of the ECS, however, still had _archetypes_ exposed as a public-facing
user concern. When a new component was added to an entity, say a weapon, the table storing
entities of that archetype changed to effectively have a new column `?Weapon` with a null
value for _all existing entities of that archetype_. We can say that our ECS had archetypes
as a user-facing concern AND this made performance worse: when iterating all entities with
a weapon, we needed to check if the component value was `null` or not because every column
was `?Weapon` instead of a guaranteed non-null value like `Weapon`. This was a key learning
that I got from [discussing ECS tradeoffs with the Bevy team](#157 (comment)).

This third revision of our ECS has some big benefits:

* Entities are now just IDs proper; you can add/remove arbitrary components at runtime.
    * You don't have an "entity which always belongs to one archetype table which changes"
    * Rather, you have an "entity of one archetype" and adding a component means that entity _moves_ from one archetype table to another.
    * Archetypes are now an implementation detail, not something you worry about as a consumer of the API.
* Performance
    * We benefit from the fact that we no longer need to check if a component on an entity is `null` or not.
* Introspection
    * Previously iterating the component names/values an entity had was not possible, now it is.
* Querying & multi-threading
    * Very, very early stages, but we now have a general plan for how querying and multi-threading will work.
    * Effectively, it will look much like interfacing with a database: you have a connection (we call it an adapter)
      and you can ask for information through that. More work to be done here.
* Systems: we now have a (very) basic starting point for how systems will work.

Some examples of how the API looks today:

* https://github.com/hexops/mach/blob/979240135bbe12d32760eed9f29f9795ead3c340/ecs/src/main.zig#L49
* https://github.com/hexops/mach/blob/979240135bbe12d32760eed9f29f9795ead3c340/ecs/src/entities.zig#L625-L656

Much more work to do, I will do a blog post detailing this step-by-step first though.

Signed-off-by: Stephen Gutekanst <[email protected]>
emidoots added a commit that referenced this pull request Mar 19, 2022
@emidoots emidoots added the object object system label Aug 6, 2022
emidoots added a commit to hexops-graveyard/mach-ecs that referenced this pull request Apr 5, 2023