Running PyPSA-Eur-Sec in PRIME

This repository includes instructions and tricks to run the PyPSA-Eur-Sec model on the cluster computer PRIME.

Its main purpose is to help master and PhD students install the packages and run simulations with PyPSA-Eur-Sec.

If you encounter a problem (and hopefully also a solution), please edit this README file with the solution so that other students can also benefit.

The content of this document is structured as follows:
1. General information about PyPSA-Eur-Sec
2. Getting on to the cluster
3. Setting up the cluster
4. Extra stuff that will make your life easier

General information about PyPSA-Eur-Sec

PyPSA-Eur-Sec is a model of the European energy sector including sector coupling. The model is built with the open-source Python module PyPSA. PyPSA-Eur-Sec builds on an older model, PyPSA-Eur, which covers the European electricity network without sector coupling; PyPSA-Eur is therefore included as part of PyPSA-Eur-Sec. PyPSA-Eur-Sec also uses the module Technology-Data to get data on the energy system. Technology-Data is a repository including costs, efficiencies, lifetimes, etc. for different technologies.

PyPSA-Eur-Sec uses the Snakemake workflow management system to run simulations. With Snakemake, Python files can be run with a single command without having to open them: Snakemake automatically runs all the Python scripts needed for a given simulation. The simulations are configured in the config.yaml file, and the workflow itself is structured in the Snakefile. Step 10 shows how to run a simulation with Snakemake.

There is a distribution list where PyPSA-related problems (and solutions) are discussed. You can ask questions there if you have trouble with the model.

There is also documentation for PyPSA, PyPSA-Eur, and PyPSA-Eur-Sec. This repository does not substitute any of that information; it only focuses on issues related to running PyPSA-Eur-Sec in PRIME.

NOTE: In order to set up Anaconda, Python, and PyPSA, these instructions and the tutorial for the course project in RES could be useful.

This video provides a nice introduction to PyPSA-Eur.

Getting on to the cluster

1. Get access to PRIME

To use the PRIME cluster, you first need a user account. Write an email to Søren Madsen at [email protected]

2. Connect with ssh

You can connect to the cluster through the terminal, e.g.

ssh [email protected]

The main way of interacting with the cluster is through a terminal where you have run the ssh command to connect to PRIME. A more modern way of interacting with the cluster is by using the program VS Code, as shown in step 22.
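If you connect often, you can optionally add a host entry to your local '~/.ssh/config' so that 'ssh prime' is enough. A minimal sketch, assuming the hypothetical AU-ID au123456 (replace it with your own):

Host prime
    HostName prime.eng.au.dk
    User au123456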

3. Useful commands

Some useful commands to use in the cluster are described in the labbook.

4. Moving files to the cluster

If you are using Windows, WinSCP can be useful to copy folders to/from the cluster. Alternatively, use FileZilla on Windows, macOS, or Linux. These tools make moving files to and from the cluster much easier, as you would otherwise have to use commands in the terminal.
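If you prefer the terminal, the standard tools scp and rsync also work. A minimal sketch, assuming the hypothetical AU-ID au123456 and a local folder 'myfolder' (adjust names and paths to your own):

scp -r myfolder [email protected]:~/

rsync -avz myfolder [email protected]:~/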

5. VPN

To connect to the cluster you need to be connected to the university network, so if you are at home you need to use the VPN. (The VPN only works for employees and PhD students; Master's students need to be on the university network to connect to the cluster.)

Setting up the cluster

The following commands must be run on the cluster. Log in to the cluster as shown in step 2.

6. Installing anaconda

You will need to have Anaconda/Miniconda installed in your home directory on the cluster. Follow the guide at anaconda/miniconda.

7. Installing PyPSA-Eur-Sec

You need to install PyPSA-Eur-Sec on the cluster. There are two approaches for this:

7.a Installing PyPSA-Eur-Sec from the instructions

You can install PyPSA-Eur-Sec following the instructions. Installation may take a while.

7.b Install by forking (a bit more advanced)

You can also fork the repositories pypsa-eur, technology-data, and pypsa-eur-sec on your GitHub account and clone them to your folder on PRIME. This allows you to apply source control with Git (this is easily done in VS Code; see step 22).
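A minimal sketch of the cloning step, assuming the hypothetical GitHub username your-username (replace it with your own):

git clone https://github.com/your-username/pypsa-eur.git

git clone https://github.com/your-username/technology-data.git

git clone https://github.com/your-username/pypsa-eur-sec.git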

7.c Get the databundle from Zenodo if wget does not work

Install zenodo-get with the command:

pip install zenodo-get

Then retrieve the databundle with:

zenodo_get 10.5281/zenodo.5824485

8. Installing the anaconda environment

You will need an environment with all the necessary packages. The environment includes Snakemake, which is a very useful way of dealing with parallelized jobs on the cluster. To install all the packages you need, create the environment using the 'environment.yaml' file provided in pypsa-eur. This step may take several minutes. On the cluster, change directories to the pypsa-eur folder in a terminal and type the following commands:

.../pypsa-eur % conda env create -f envs/environment.yaml

Activate the environment by typing

.../pypsa-eur % conda activate pypsa-eur

Every time you log in to the cluster you must activate the environment again. The active environment is shown in parentheses in your terminal.

(pypsa-eur) [marta@fe1 ~]$
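You can list all installed environments, with the active one marked by an asterisk, using:

conda env list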

9. Install Gurobi

Install the optimization software Gurobi in the environment by running the command

conda install -c gurobi gurobi

10. Configure Snakemake

In the folder '/PRIME_cluster' of this repository, there are two additional files needed to use Snakemake on PRIME.

First, you might want to clone this repository:

git clone https://github.com/martavp/pypsa-in-prime.git

Copy the files 'cluster.yaml' and 'snakemake_cluster' to the directory '.../pypsa-eur-sec/' in your folders on the cluster.

Then, to run your simulations using Snakemake, you only need to write the following instruction in the command line (--jobs sets the number of jobs that you want to parallelize if you send more than one job simultaneously):

./snakemake_cluster --jobs 5
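For orientation, a wrapper script like 'snakemake_cluster' typically just calls Snakemake with the cluster configuration and a SLURM submission command, along the lines of the sketch below. This is an assumption for illustration, not the actual contents of the file in this repository, and the {cluster.mem} and {cluster.time} placeholders must match the keys defined in 'cluster.yaml':

#!/bin/bash
# Sketch only: submit each Snakemake job via sbatch, passing through extra flags such as --jobs
snakemake --cluster-config cluster.yaml \
    --cluster "sbatch --mem {cluster.mem} --time {cluster.time}" \
    "$@"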

11. Permission

It is possible that you need to give execution permissions to 'snakemake_cluster'. You can do this by typing in the terminal:

chmod u+x snakemake_cluster

12. Log files

Create a directory 'logs/cluster', as indicated in the file 'cluster.yaml'. This is where the log and error files will be saved. Make sure that a folder 'logs/cluster' also exists in pypsa-eur, i.e. 'pypsa-eur/logs/cluster'.
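The directory can be created with mkdir; run this in both the pypsa-eur-sec and the pypsa-eur folder:

mkdir -p logs/cluster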

13. Memory allocation

Check that the variable names in 'snakemake_cluster' comply with the variable names in your Snakefile. In particular, check that the memory attribute (mem_mb) is the same in both files, and correct it if necessary. If any of the rules in 'pypsa-eur/Snakefile' is missing 'resources: mem_mb=', add it or substitute 'mem' with 'mem_mb'. 17 Feb 2022: I (Ebbe) added a Snakefile for pypsa-eur 0.4.0 in the folder PRIME_cluster in which 'resources: mem_mb' is defined in all rules.
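For illustration, a rule with the memory resource declared looks like the sketch below; the rule, file, and script names are placeholders, and 4000 MB is an arbitrary value:

rule build_network:
    input: "data/input.csv"
    output: "results/output.nc"
    resources: mem_mb=4000
    script: "scripts/build_network.py"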

14. Setting up Gurobi in the cluster

On the PRIME cluster, Gurobi needs to be pointed in the right direction as to where to look for packages and licenses. The first step is to add the following lines to the end of the file '.bashrc' located in /home/(AU-ID), as indicated in the Gurobi guide:

export GUROBI_HOME="/home/com/meenergy/gurobi651/linux64"

export PATH="${PATH}:${GUROBI_HOME}/bin"

export LD_LIBRARY_PATH="${GUROBI_HOME}/lib"

Additionally, the following line should be added at the end of the file '.bashrc':

export GRB_LICENSE_FILE="$GUROBI_HOME/gurobi.lic"

This points Gurobi to the cluster license. Note that an academic license used locally on a computer is unsuitable for use on the cluster and will result in a failed simulation.
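After editing '.bashrc', reload it (or log out and back in) so the new variables take effect:

source ~/.bashrc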

15. Solution to "Solver (gurobi) returned non-zero return code (127)"

A change needs to be made to the file 'gurobi.sh' located in /home/(AU-ID)/anaconda3/envs/(pypsa-eur_environment_name)/bin/gurobi.sh. The last line of this shell script needs to point to 'python2.7', regardless of which Python version is used in the pypsa-eur environment in your local folder. Thus, the last line of 'gurobi.sh' needs to be:

$PYTHONHOME/bin/python2.7 "$@"

Make sure to restart the terminal for these modifications to take effect.
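To check that Gurobi is found after these changes, you can, for instance, print its version (assuming the Gurobi command-line tools are on your PATH):

gurobi_cl --version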

16. Solution to "memory error"

The config file should include a path to a folder where temporary files are saved while the network is being solved. Best practice is to use the scratch space:

tmpdir: "scratch/$SLURM_JOB_ID"

Another option is to use your home folder:

tmpdir: "/home/marta/tmp"

If this path is not specified, the default is to use the directory where the script is being executed, which can cause errors due to insufficient space on PRIME.

18. Memory resources

I (Marta) have manually increased the resources in the rule build_renewable_profiles to speed up that rule on the cluster:

resources: mem=ATLITE_NPROCESSES * 50000

19. Using PyPSA-Eur

If you are using pypsa-eur independently of pypsa-eur-sec, a rule 'all' needs to be added to the Snakefile to make sure that pypsa-eur runs all the way to the final networks (with the solution). In practice, this means adding the following text:

rule all:
    input:
        expand("results/networks/elec_s_{simpl}_{clusters}_ec_l{ll}_{opts}.nc",
               **config['scenario'])

Extra stuff that will make your life easier

20. Terminal multiplexer (optional, but useful)

If you get disconnected or close your terminal, your execution ends. If you want to run simulations over an extended period of time, this needs to be avoided. What you need is a terminal multiplexer. Two alternatives are listed below.

20a. GNU Screen

This is the easier choice, as it is already installed. Start a named session by typing:

screen -S session_name

You can detach from the screen session at any time by typing:

Ctrl+a d

This means that it will keep running in the background. To resume your screen session, use the following command:

screen -r

To find the session ID, list the currently running screen sessions with:

screen -ls

20b. TMUX

TMUX is another terminal multiplexer, but it needs to be installed first. ATTENTION: Executing the workflow in a tmux window defined in the Snakefile may result in the 'solve_network' rule failing. This varies from system to system, but if it occurs, it can be solved by executing the rule 'solve_network' outside tmux.
When

./snakemake_cluster --jobs 1

is executed on the cluster, the workflow in the 'Snakefile' starts. The DAG of jobs is built and, depending on how many jobs are allowed to run in parallel with the execution command above, one or more jobs are submitted to the cluster, each with its own unique job ID. If you close the terminal window in which the cluster is accessed, only the already submitted jobs will be executed. For example, if the rule 'cluster_network' is the last job to be submitted, this job will finish, but the subsequent rules will not. Because of this, you must wait until the final rule, 'solve_network', has been submitted if you want to, for instance, log off the cluster.
A solution to this, which allows you to enter the execution command and immediately log off the cluster, is a terminal multiplexer called 'tmux'. It allows for having multiple windows in the same terminal session. To install it, make sure an Anaconda environment is active in the cluster terminal window, and execute the following:

conda install -c conda-forge tmux

Next, a new tmux-session can be created by executing:

tmux new -s type_session_name_here

In this session, 'snakemake_cluster' can then be executed. When the workflow is running, the tmux session can be detached from, i.e. you return to the normal PRIME cluster window, by typing 'Ctrl+b' (to get the attention of tmux) relatively quickly followed by 'd'. To reattach, execute the following in the terminal:

tmux a -t type_session_name_here

If you have forgotten the name of the session when trying to reattach, simply execute:

tmux ls

to get a list of the created sessions. The tmux commands described here, as well as many other neat ones, can be found in this article.

21. Environment file that works for Mac (17/5-2021)

This environment file (./environments/environment_pypsa_eur_macos.yml) works for pypsa-eur-sec on macOS. It may also work on other systems (not tested).
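It can be installed like any other conda environment file:

conda env create -f environments/environment_pypsa_eur_macos.yml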

22. VS Code

VS Code must be installed on your local computer, not on the cluster.

Visual Studio Code is a handy tool when working on the cluster. It allows you to have your file explorer, Python editor, and terminal in one window. Install the Remote - SSH extension to connect with PRIME.

If you experience issues connecting VS Code to PRIME, try setting the option "Remote.SSH: Lockfiles in Tmp" to true (check the box).

To commit from your PRIME repository to your GitHub, go to the Source Control panel, give your commit a message, and press Ctrl+Enter. If you want the commit to be pushed automatically after having committed, go to Settings --> Remote [SSH: prime.eng.au.dk] --> Git --> Post Commit Command and change "none" to "push".

23. Avoid entering password when connecting to PRIME

On your local computer:

Generate an SSH key by running:

(Local path) > ssh-keygen

Press Enter for the default key name, then Enter for no password, and Enter again to confirm. A public key is created under (Local path) in the file 'id_rsa.pub'. Copy the key to the cluster by running the following command:

ssh-copy-id -i ~/.ssh/id_rsa.pub prime.eng.au.dk

Enter password when prompted.

24. Compatible package versions (2021/08/31)

As of today, I (Marta) have everything running nicely on the cluster with the following versions: pypsa=0.18.0, pypsa-eur=0.3.0, pypsa-eur-sec=0.5.0, technology-data=0.2.0. In case someone needs a reference for a compatible setup of packages.

25. Example

For those who have just started using the PRIME cluster with only one rule in the Snakefile, but want to run in parallel with e.g. a range of different inputs, I have added a simple example of how this can be done in the folder 'cluster_test'. You can modify the python_script and the Snakefile to match your application; a sketch of the general pattern is shown below.
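For illustration, the pattern typically looks like the following Snakefile sketch. The rule names, file names, and values are placeholders, not the actual contents of 'cluster_test':

# One output file per input value; 'rule all' collects them so Snakemake runs every case
rule all:
    input:
        expand("results/output_{co2}.csv", co2=["0.1", "0.2", "0.5"])

rule run_script:
    output:
        "results/output_{co2}.csv"
    resources: mem_mb=2000
    shell:
        "python python_script.py {wildcards.co2} {output}"

Run with './snakemake_cluster --jobs 3', Snakemake submits one job per co2 value in parallel.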
