Keep/add the aggregate in the identity map after saving #78

Closed
m-steinmann opened this issue May 29, 2018 · 2 comments

Comments

@m-steinmann

If you call AggregateRepository::saveAggregateRoot, the aggregate gets saved and is then removed from the identity map.
I see no reason to do this, because all new events are popped from the stream when saving. One of the main advantages of event sourcing is that it is append-only, which makes it easy to cache.

For example, suppose a domain event triggers some action, say a call to an external API, and based on the result a command is raised on the same aggregate that caused the event.
Now the aggregate's event stream has to be replayed again. In the worst case this could happen several times.

Instead of removing an aggregate from the identity map, I think it would be better to add it on creation and keep it in the identity map if it already exists.

IMHO, the way the identity map is implemented right now, it is of little use.

@codeliner
Member

We've added an option to disable the identity map: #77
and we are considering removing it completely in the next major release.

Aggregates are only cached for multiple reads. In the past the identity map was used to manage a unit of work internally, but we removed the unit of work to support non-transactional event store implementations. So yes, the identity map is not really useful in most cases.

> Instead of removing an aggregate from the identity map, I think it would be better to add it on creation and keep it in the identity map if it already exists.

If you have a long-running PHP process and keep aggregates in memory without reloading events, those aggregates will miss newer events written to the stream by concurrent processes. This would cause weird bugs.
If you want to keep an aggregate in memory and let it process multiple commands without refreshing the event history, you have to do it on your end. You can extend the aggregate repository and store aggregates in your own identity map, as in the sketch below.
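
For illustration, a rough sketch of such an application-level identity map. It is shown as a thin wrapper around the repository rather than a subclass, and it assumes the `getAggregateRoot()` / `saveAggregateRoot()` methods and the `Prooph\EventSourcing\Aggregate\AggregateRepository` class as shipped in prooph/event-sourcing 5.x; check the signatures against the version you actually use.

```php
<?php

declare(strict_types=1);

use Prooph\EventSourcing\Aggregate\AggregateRepository;

/**
 * Sketch only: keep aggregates in an application-level identity map for
 * the duration of a single request. Do NOT reuse it across long-running
 * processes, or cached aggregates will miss events written concurrently
 * to the stream by other processes.
 */
final class CachingAggregateRepository
{
    /** @var AggregateRepository */
    private $repository;

    /** @var array<string, object> aggregateId => aggregate root */
    private $identityMap = [];

    public function __construct(AggregateRepository $repository)
    {
        $this->repository = $repository;
    }

    /** Load from the event store only on the first access per request. */
    public function get(string $aggregateId)
    {
        if (!array_key_exists($aggregateId, $this->identityMap)) {
            $this->identityMap[$aggregateId] = $this->repository->getAggregateRoot($aggregateId);
        }

        return $this->identityMap[$aggregateId];
    }

    /** Persist new events and keep the aggregate cached instead of dropping it. */
    public function save(string $aggregateId, $aggregateRoot): void
    {
        $this->repository->saveAggregateRoot($aggregateRoot);

        // Saving pops the recorded events, so the in-memory instance is
        // still up to date and a follow-up command in the same request
        // does not need to replay the whole event stream.
        $this->identityMap[$aggregateId] = $aggregateRoot;
    }
}
```

Usage within one request would then be: load the aggregate via `get()`, handle the command, call `save()`, and handle any follow-up command against the already cached instance.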

@m-steinmann
Author

Thank you for the fast answer. I forgot to take long-running processes into consideration.
