
Use current structure instead of output to update previous structure #483

Merged: samwaseda merged 3 commits into master from accelerate_interactive on Jan 6, 2022

Conversation

@samwaseda (Member) commented Dec 17, 2021

The issue is discussed in #481. Now I also did a performance test and got the following results:

[Timing plots comparing the old and the new implementation (images not preserved)]

In particular, this means that after 1000 steps (each iteration contains 10 steps) the new implementation is around 30 times faster than the old one.

closes #481
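As a rough illustration of where a growing gap like this can come from (an editorial sketch, not the benchmark that produced the plots above): if, as discussed further down in this thread, every step re-reads output that has to be rebuilt from everything stored so far, the per-step cost grows with the number of stored steps and the total cost grows quadratically, whereas reading only the current structure keeps the per-step cost constant. The cost constants below are arbitrary, so this reproduces the trend, not the exact 30x figure.

```python
# Editorial sketch only -- arbitrary cost model, not the actual pyiron benchmark.
# Old approach: every step walks over all output stored so far (cost grows per step).
# New approach: every step only looks at the current structure (constant cost).

def total_cost_old(n_steps, cost_per_stored_step=1.0):
    # sum_{i=1}^{n} i = n * (n + 1) / 2  ->  quadratic in the number of steps
    return cost_per_stored_step * n_steps * (n_steps + 1) / 2

def total_cost_new(n_steps, cost_per_step=1.0):
    # constant work per step -> linear in the number of steps
    return cost_per_step * n_steps

for n in (100, 1000, 10000):
    print(n, total_cost_old(n) / total_cost_new(n))
```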

@coveralls commented Dec 17, 2021

Pull Request Test Coverage Report for Build 1602851941

  • 4 of 4 (100.0%) changed or added relevant lines in 1 file are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage decreased (-0.01%) to 70.075%

Totals (Coverage Status):
  • Change from base Build 1581509622: -0.01%
  • Covered Lines: 11563
  • Relevant Lines: 16501

💛 - Coveralls

@liamhuber (Member) left a comment

LGTM. That timing plot in the PR description is a beautiful thing. @raynol-dsouza I guess this is very nice news for every Protocol that exploits LAMMPS.

@pmrv (Contributor) left a comment

I must admit, I don't fully understand how this changes the timing, but it seems to work and results in less code, so 👍.

@samwaseda (Member, Author)

> I must admit, I don't fully understand how this changes the timing, but it seems to work and results in less code, so 👍.

It's the output values (output.indices etc.) that use GenericInteractiveOutput._lst_from_cache. That's why.
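A minimal toy sketch of what that means (the class below is made up for illustration and only mimics the _lst_from_cache idea; it is not the pyiron implementation): an output property has to rebuild its array from everything stored so far on each access, while the current structure can be read directly.

```python
import numpy as np

class ToyInteractiveOutput:
    """Made-up stand-in for an output object whose properties are rebuilt from a cache."""

    def __init__(self):
        self.cache = {"indices": []}  # one entry appended per interactive step

    @property
    def indices(self):
        # Rebuilding the full array on every access mimics the _lst_from_cache idea:
        # the cost of reading output.indices grows with the number of stored steps.
        return np.array(self.cache["indices"])

output = ToyInteractiveOutput()
current_indices = np.array([0, 0, 1])  # the "current structure" view, cheap to read

for _ in range(5):
    output.cache["indices"].append(current_indices.copy())

# Old approach: take the last entry of the rebuilt output array.
# New approach: use the indices of the current structure directly.
assert np.array_equal(output.indices[-1], current_indices)
```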

@niklassiemer (Member)

I am not at all into this, so this might be completely off. However, do I understand correctly that this PR just uses the last structure that had been set on the job as the last structure? Is it not also possible to generate multiple structures per run call? Then we would ignore them, wouldn't we?

@samwaseda (Member, Author)

> I am not at all into this, so this might be completely off. However, do I understand correctly that this PR just uses the last structure that had been set on the job as the last structure? Is it not also possible to generate multiple structures per run call? Then we would ignore them, wouldn't we?

The structure comparison is done in order to see which values have changed between two runs. So as long as it allows us to see which values have to be updated before the next run, it doesn't matter how many structures there are.
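A sketch of the comparison being described, with plain dicts standing in for the pyiron structure objects (the function name is illustrative, not actual pyiron code): the previously submitted structure is compared with the current one, and only the quantities that actually changed are sent to the interactive job before the next run.

```python
import numpy as np

def changed_quantities(previous, current, rtol=1e-10, atol=1e-12):
    """Return which parts of the structure need to be re-sent before the next run.

    `previous`/`current` are simple dicts with 'positions', 'cell' and 'indices';
    the real objects are pyiron structures, but the idea is the same.
    """
    updates = {}
    if not np.allclose(previous["cell"], current["cell"], rtol=rtol, atol=atol):
        updates["cell"] = current["cell"]
    if not np.array_equal(previous["indices"], current["indices"]):
        updates["indices"] = current["indices"]
    if not np.allclose(previous["positions"], current["positions"], rtol=rtol, atol=atol):
        updates["positions"] = current["positions"]
    return updates

previous = {"positions": np.zeros((2, 3)), "cell": np.eye(3) * 4.0, "indices": np.array([0, 0])}
current = {"positions": np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0]]),
           "cell": np.eye(3) * 4.0,
           "indices": np.array([0, 0])}

print(changed_quantities(previous, current))  # only 'positions' needs updating
```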

@jan-janssen (Member) left a comment

The interactive mode is divided into two parts: a beginners mode (job.structure = new_structure) and an experts mode (job.interactive_positions_setter(new_positions)). The expert mode is designed for efficiency and the beginners mode for reliability. With the current changes to the beginners mode, the structure of the job object (job.structure) is no longer the same as the structure in the interactive job, for example when running LAMMPS with calc_minimize() in interactive mode. I think this is a rather risky modification.
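For context, a usage sketch of the two modes (the project, structure, and potential setup is assumed and abbreviated; only the job.structure assignment and interactive_positions_setter are taken from the comment above):

```python
from pyiron_atomistics import Project

# Sketch only: a real run additionally needs a LAMMPS installation and a
# potential assigned to the job, which is omitted here.
pr = Project("interactive_demo")
structure = pr.create.structure.bulk("Al", cubic=True)

job = pr.create.job.Lammps("lmp_interactive")
job.structure = structure
job.interactive_open()
job.calc_static()

# Beginners mode: assign a whole structure; pyiron works out what changed.
new_structure = structure.copy()
new_structure.positions += 0.01
job.structure = new_structure
job.run()

# Experts mode: push only the positions, skipping the structure bookkeeping.
job.interactive_positions_setter(new_structure.positions + 0.01)
job.run()

job.interactive_close()
```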

@samwaseda (Member, Author) commented Dec 20, 2021

> The interactive mode is divided into two parts: a beginners mode (job.structure = new_structure) and an experts mode (job.interactive_positions_setter(new_positions)). The expert mode is designed for efficiency and the beginners mode for reliability. With the current changes to the beginners mode, the structure of the job object (job.structure) is no longer the same as the structure in the interactive job, for example when running LAMMPS with calc_minimize() in interactive mode. I think this is a rather risky modification.

Yeah, OK, I remember the discussions. But that means that in calc_minimize we always have to update at least the positions, and, if the pressure is specified, also the cell, correct? Doesn't that mean it makes more sense to update them all the time anyway?

@samwaseda (Member, Author)

@jan-janssen I added the changes we discussed yesterday.

```diff
@@ -156,7 +156,7 @@ def interactive_index_organizer(self):
             self.interactive_indices_setter(self._structure_current.indices)

     def interactive_cell_organizer(self):
-        if not np.allclose(
+        if self._generic_input["calc_mode"] != 'static' or not np.allclose(
```
Review comment (Member):

This is not necessary for calc_minimize(), only for calc_minimize(pressure=0), right? And analogously it is not necessary for nvt and nve, but it is for npt.
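If I read the objection correctly, the check would ideally key on whether the run is actually allowed to change the cell (pressure-controlled minimization, npt) rather than on calc_mode != 'static'. A hypothetical sketch of such a condition, with made-up setting names and plain dicts; this is not what the PR implements:

```python
import numpy as np

def cell_update_needed(calc_settings, previous_cell, current_cell):
    """Hypothetical variant of the check discussed above.

    Only re-send the cell when the calculation can change it (npt, or a
    minimization with a target pressure) or when the user has modified it
    since the last run. The 'ensemble'/'pressure' keys are illustrative only.
    """
    cell_may_change = (
        calc_settings.get("ensemble") == "npt"
        or calc_settings.get("pressure") is not None
    )
    return cell_may_change or not np.allclose(previous_cell, current_cell)

print(cell_update_needed({"ensemble": "nvt"}, np.eye(3), np.eye(3)))  # False
print(cell_update_needed({"pressure": 0.0}, np.eye(3), np.eye(3)))    # True
```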

Reply (Member Author):

Is the information about nvt, nve etc. included in the input?

Review comment (Member): [comment body not preserved]

Reply (Member Author):

So we could go ahead with this version for now, right? I don't think communicating the cell size would change things significantly.

samwaseda merged commit f91b0da into master on Jan 6, 2022.
The delete-merged-branch bot deleted the accelerate_interactive branch on Jan 6, 2022 at 14:13.
Development

Successfully merging this pull request may close these issues:
  • Why does interactive LAMMPS become slower?

6 participants