I have played a bit with combining our new ideas regarding a node-based pyiron with handling files for codes such as Lammps that rely on file input and output. The example works, but I have many ideas for how to extend and improve it. To keep it simple I didn't try to extract single self-contained pyiron functions (which we should do) but used the full pyiron stack.
Below is the Markdown version of my Jupyter notebook:
Toy implementation and first tests of a File Object Class for workflows
```python
from pathlib import Path
import os
```
Create file data type/object
```python
import os
from pathlib import Path


class DirectoryObject:
    def __init__(self, directory):
        self.directory = Path(directory)
        self.create()

    def create(self):
        if not self.directory.exists():
            self.directory.mkdir(parents=True)
            print(f"Directory '{self.directory}' created successfully.")
        else:
            print(f"Directory '{self.directory}' already exists.")

    def delete(self):
        if self.directory.exists():
            # Remove all files within the directory
            for file in os.listdir(self.directory):
                file_path = self.directory / file
                print(file, file_path.is_file(), file_path.is_dir())
                if file_path.is_file():
                    file_path.unlink()
                    print(f"File '{file_path}' deleted successfully.")
            self.directory.rmdir()
            print(f"Directory '{self.directory}' deleted successfully.")
        else:
            print(f"Directory '{self.directory}' does not exist.")

    def list_files(self):
        if self.directory.exists():
            files = os.listdir(self.directory)
            if files:
                print(f"Files in directory '{self.directory}':")
                for file in files:
                    print(file)
            else:
                print(f"No files found in directory '{self.directory}'.")
        else:
            print(f"Directory '{self.directory}' does not exist.")

    def __len__(self):
        files = []
        if self.directory.exists():
            files = os.listdir(self.directory)
        return len(files)

    def __repr__(self):
        return f"DirectoryObject(directory='{self.directory}' with {len(self)} files)"
```
```python
# Example usage
directory_handler = DirectoryObject('WorkingDir_new')
directory_handler.list_files()
```
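The remaining methods can be exercised the same way; a short continuation of the example above:

```python
print(len(directory_handler))   # file count via __len__
print(directory_handler)        # __repr__ including the file count
directory_handler.delete()      # removes all files and the directory itself
```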
Note: For the toy model I use pyiron to generate the structure.inp and potential files. For a more practical application/implementation this should be replaced by nodes that directly translate e.g. the pyiron atomic structure into a Lammps input.ini file.
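As a rough sketch of what such a node could look like, here is a version that uses ASE's lammps-data writer as a stand-in; the function name and the DirectoryObject argument are assumptions for illustration, not an existing pyiron API:

```python
# Sketch of a node that writes a Lammps structure file directly from an
# atomic structure, bypassing the full pyiron job. Names are illustrative.
from ase.io import write


def write_lammps_structure(structure, directory: DirectoryObject, filename="structure.inp"):
    file_path = directory.directory / filename
    # pyiron structures are ASE-compatible, so the lammps-data writer can be used
    write(file_path, structure, format="lammps-data")
    return file_path
```

For now, the toy example below sticks with the full pyiron stack to generate these files.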
```python
import dask


class WorkflowResources:
    def __init__(self, working_directory, executable, job_name, project):
        self.working_directory = working_directory
        self.executable = executable
        self.job_name = job_name
        self.project = project


@dask.delayed  # does not work with dask
def lammps_setup(structure, project_path='.', job_name='job'):
    from pyiron import Project

    pr = Project(project_path)
    lammps = pr.create.job.Lammps(job_name)
    lammps.structure = structure
    list_pot = lammps.list_potentials()
    lammps.potential = list_pot[0]
    lammps.write_input()
    return WorkflowResources(working_directory=lammps.working_directory,
                             executable=lammps.executable,
                             job_name=job_name,
                             project=pr)
```
Note: The following statement fails in dask when implementing multiple returns (it can then only be resolved via getitem). It is therefore better to introduce a complex data object such as WorkflowResources above.
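A minimal sketch of the issue and of the workaround with a single return object (the toy functions and values here are only illustrative):

```python
import dask


@dask.delayed
def setup_two_returns():
    # toy stand-in for a node with multiple return values
    return "working_dir", "lmp_executable"


result = setup_two_returns()
working_dir = result[0]   # each item access becomes its own delayed getitem task
executable = result[1]
print(working_dir.compute(), executable.compute())


# Returning a single container object avoids the getitem gymnastics entirely.
@dask.delayed
def setup_resources():
    return WorkflowResources(working_directory="working_dir",
                             executable="lmp_executable",
                             job_name="job",
                             project=None)


resources = setup_resources().compute()
print(resources.working_directory, resources.executable)
```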
The above warning occurs in setup_lammps and is likely related to not closing the database connection in our pyiron job object. Check how this could be done.
Visualize workflow graph
```python
plot.dask
plot.visualize(engine="cytoscape")
```
Missing in dask/present implementation:

- access to input after initialization
- construction of explicit workflows
- what is a good syntax to access input of nodes, subnodes etc. in a workflow
Thanks @JNmpi for writing this down! Especially for using the python markdown.
I'm a bit confused about the distinction between DirectoryObject and FileObject, especially by the fact that the path is defined in both of them independently. I would have suggested: the path is only stored in DirectoryObject, and FileObject can write files only when it's attached to a DirectoryObject, so that a single DirectoryObject contains multiple FileObjects.
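Roughly something like this sketch (class and method names are only illustrative, not a worked-out API):

```python
class FileObject:
    """A file that only becomes writable once it is attached to a DirectoryObject."""

    def __init__(self, name, directory: DirectoryObject):
        self.name = name
        self.directory = directory

    @property
    def path(self):
        # the full path is always derived from the owning DirectoryObject
        return self.directory.directory / self.name

    def write(self, content):
        self.path.write_text(content)

    def read(self):
        return self.path.read_text()


# a single DirectoryObject can then own several FileObjects
work_dir = DirectoryObject("WorkingDir_new")
structure_file = FileObject("structure.inp", directory=work_dir)
structure_file.write("# Lammps structure input ...")
```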