
Bug when using Planck.ini #154

Open
PoulinV opened this issue Feb 12, 2025 · 2 comments

PoulinV commented Feb 12, 2025

Hello!
A colleague and I are trying to use CosmoSIS to analyze Planck and KiDS/DES data.

We installed CosmoSIS following the instructions for Installing manually on clusters and supercomputers.

We are having an issue when running with Planck data, e.g.


```
> cosmosis examples/planck.ini
Setting up pipeline from parameter file examples/planck.ini
-----------------------------------------------------------

Setting up module consistency
------------------------------

Setting up module camb
-----------------------

Setting up module planck
-------------------------
Looking for clik Planck likelihood file 1: likelihood/planck2018/baseline/plc_3.0/hi_l/plik_lite/plik_lite_v22_TT.clik
 Initializing Planck likelihood, version Plik_v22_cmbonly_like
Fatal Python error: Segmentation fault

Current thread 0x00007fad570124c0 (most recent call first):
  File "/pbs/home/v/vpoulin/py39/lib64/python3.9/site-packages/cosmosis/runtime/module.py", line 204 in setup_functions
  File "/pbs/home/v/vpoulin/py39/lib64/python3.9/site-packages/cosmosis/runtime/module.py", line 231 in setup
  File "/pbs/home/v/vpoulin/py39/lib64/python3.9/site-packages/cosmosis/runtime/pipeline.py", line 464 in setup
  File "/pbs/home/v/vpoulin/py39/lib64/python3.9/site-packages/cosmosis/runtime/pipeline.py", line 778 in __init__
  File "/pbs/home/v/vpoulin/py39/lib64/python3.9/site-packages/cosmosis/main.py", line 288 in run_cosmosis
  File "/pbs/home/v/vpoulin/py39/lib64/python3.9/site-packages/cosmosis/main.py", line 560 in main
  File "/pbs/home/v/vpoulin/py39/bin/cosmosis", line 4 in <module>
Segmentation fault (core dumped)
```

Sometimes we instead get:


```
> cosmosis examples/planck.ini
Setting up pipeline from parameter file examples/planck.ini
-----------------------------------------------------------

Setting up module consistency
------------------------------

Setting up module camb
-----------------------

Setting up module planck

Looking for clik Planck likelihood file 1: likelihood/planck2018/baseline/plc_3.0/hi_l/plik_lite/plik_lite_v22_TT.clik
 Initializing Planck likelihood, version Plik_v22_cmbonly_like
 ** On entry to DPOTRF parameter number  2 had an illegal value
```

A quick Google search suggests this points to a potential issue with LAPACK, but we were not able to find the problem.

However, running the SimAll likelihood (i.e. low-l EE) works fine. We were also able to run other likelihoods such as KiDS and DES without issue.
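For context on the second error: DPOTRF is LAPACK's Cholesky factorization routine, and "parameter number 2" is the matrix order N, so the message means the Fortran side received a malformed argument rather than a merely non-positive-definite matrix. A minimal sketch of what a healthy call looks like, using scipy's low-level LAPACK bindings (not the clik code itself):

```python
import numpy as np
from scipy.linalg import lapack

# DPOTRF computes the Cholesky factorization of a symmetric
# positive-definite matrix; its second argument is the matrix order N.
a = np.array([[4.0, 2.0],
              [2.0, 3.0]])
c, info = lapack.dpotrf(a, lower=1)

# info == 0 means success; info == -i means parameter i had an illegal
# value, which is the condition behind the error message above.
print(info)     # 0 on a consistent LAPACK install
print(c @ c.T)  # reconstructs a
```

If even a trivial call like this misbehaves in the same environment, that strengthens the case for a broken or mismatched LAPACK rather than a problem in the likelihood data.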

Any help would be greatly appreciated!

Cheers,
Vivian

@joezuntz (Owner) commented Feb 12, 2025
Yes - this usually means inconsistent LAPACK versions were used by scipy and by the F90 LAPACK libraries. It can be quite hard to fix. You can double-check by running with the --segfaults flag, which may give more info, depending on your system.
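One way to check the scipy side of such a mismatch is to print the BLAS/LAPACK configuration scipy was built against and compare it with whatever the Fortran likelihood code links to (the latter can be inspected with e.g. `ldd` on the clik shared library; the exact library names depend on your system):

```python
# Show which BLAS/LAPACK scipy itself was built against, for comparison
# with the LAPACK linked into the compiled likelihood code.
import scipy

scipy.show_config()
```

If the two sides report different LAPACK implementations (e.g. OpenMP-threaded OpenBLAS vs. MKL vs. reference LAPACK), that is a likely culprit for the segfaults and DPOTRF errors.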

Recently I've found that the conda version of the code increasingly works on clusters. You could try that as well. It will very likely be fine for single-node jobs. If you don't need multi-node runs that may be enough; if you do, there may be ways around it depending on your MPI installation - let me know.

Cheers,
Joe


PoulinV commented Feb 12, 2025

The conda installation did work! I'll go with that and get back to you if we run into issues with MPI. Thanks for the suggestion!
