Execution Block ID uid://A001/X15a0/Xca Sgr_A_st_h_03_TM1 #142

Open
13 of 15 tasks
Tracked by #179
keflavich opened this issue May 18, 2022 · 10 comments
Assignees
Labels
Delivered Done! EB Execution Block Minor calibration issues Minor cal problems to be fixed at a later date TM1

Comments

@keflavich
Contributor

keflavich commented May 18, 2022

Sgr_A_st_h_03_TM1
uid://A001/X15a0/Xca

Product Links:

Reprocessed Product Links:

@keflavich keflavich added EB Execution Block TM1 labels May 18, 2022
@d-l-walker
Contributor

The actual data and final images look good here, but there are a few areas of the calibration that I feel should have been addressed during QA2. I've listed some of the more obvious ones below. I think they're relatively minor issues in the end, but they should probably still be flagged out. It'd be great if someone else could check through this to get a second opinion.

Stage 7 (Tsys): Some examples of contamination in the Tsys data in the science SPWs (spikes that coincide with the frequency ranges of the science SPWs, which are shown as coloured bars at the bottom of the plots). This is almost always present in our data, but is usually flagged out during QA2.

tsys-uid___A002_Xf96bbc_X3e74 ms-summary spw19
tsys-uid___A002_Xf934b1_X30dd ms-summary spw21

Stage 9 (WVR): DA42 seems to have some phase jumps in the first scan.

uid___A002_Xf934b1_X30dd ms phase_offset antDA42 spw33

Stage 19 (applycal): Absorption and phase spread for phase calibrator J1744. This is a persistent issue for this calibrator and is usually flagged before delivery.

uid___A002_Xf934b1_X30dd ms-J1744-3116-spw29-PHASE-amp_vs_freq-XX_YY
uid___A002_Xf934b1_X30dd ms-J1744-3116-spw29-PHASE-phase_vs_freq-XX_YY
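Flags like those described above would typically be applied with CASA's flagdata task. A hypothetical sketch, expressed as argument sets (the SPW and scan selections are placeholders; the real ranges would have to be read off the plots):

```python
# Hypothetical flagdata argument sets mirroring the issues above.
# All selections are illustrative placeholders, NOT the actual ranges.

# Stage 7: Tsys contamination in the science SPWs.
tsys_flag = dict(
    vis='uid___A002_Xf934b1_X30dd.ms',
    mode='manual',
    spw='21',                          # contaminated Tsys SPW (placeholder)
    intent='*CALIBRATE_ATMOSPHERE*',   # restrict to Tsys scans
)

# Stage 9: WVR phase jumps on DA42 in the first scan.
wvr_flag = dict(
    vis='uid___A002_Xf934b1_X30dd.ms',
    mode='manual',
    antenna='DA42',
    scan='1',                          # first scan (placeholder)
)

# In a CASA session these would be applied as flagdata(**tsys_flag), etc.
```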

@d-l-walker
Contributor

As mentioned earlier, the calibration here could be improved by adding a few more flags. However, the final calibration looks reasonable, and the images look good. I think we can safely leave this for now, and consider re-running the PL with additional flags at a later date. I will take responsibility for that.

Other than this, the only outstanding issues relate to the size mitigation:

Once these are resolved, this should be good to go.

@d-l-walker d-l-walker added the Minor calibration issues Minor cal problems to be fixed at a later date label Jul 27, 2022
@d-l-walker
Contributor

d-l-walker commented Jul 27, 2022

Latest PL run (Pipeline Run 20220622T040354) failed at cube imaging stage, with errors:

  • Error! Sgr_A_star/TARGET/spw25 clean error: Error in running Major Cycle : One or more of the cube section failed in de/gridding. (Same error for all SPWs.)
  • No image details found despite existing imaging targets. Please check for cleaning errors.

@d-l-walker
Contributor

d-l-walker commented Sep 7, 2022

SPW 33 was recleaned on HPG, but it diverged. I'll have a go at re-doing this with a higher cyclefactor.

[Screenshot (2022-09-07 14:03) showing the diverged clean]
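For reference, cyclefactor is the tclean parameter that scales the threshold at which minor cycles stop and a major cycle is triggered; raising it makes the clean take shallower minor-cycle steps, which often tames divergence. A minimal sketch with illustrative values only (not the parameters actually adopted for this re-run):

```python
# Illustrative tclean parameters for a re-clean of the diverging SPW.
# cyclefactor is a real tclean parameter; the value 3.0 here is a
# demonstration guess, not the value that was actually used.
reclean_params = dict(
    spw='33',
    specmode='cube',
    deconvolver='hogbom',
    cyclefactor=3.0,   # default ~1.0; higher triggers major cycles sooner
    niter=100000,
)

# In a CASA session: tclean(vis=..., imagename=..., **reclean_params)
```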

@d-l-walker d-l-walker added the Needs Reimaging: Divergence Needs reimaging b/c of divergence label Sep 7, 2022
d-l-walker pushed a commit to d-l-walker/reduction_ACES that referenced this issue Nov 16, 2022
@d-l-walker
Contributor

@keflavich the cleaning parameters have been updated in #299. Once this PR has been merged, the cleaning for this region should be executed again to undo cube size mitigation and avoid divergence in SPW 33.

keflavich pushed a commit that referenced this issue Nov 16, 2022
@keflavich
Contributor Author

Reimaging has been started. spw35 finished waaaay too fast...

@keflavich
Contributor Author

Very odd. spw35 reported success even though it was an extreme failure - this is a recurring annoyance with CASA, since it doesn't use Python's traceback system properly.

The failure was:

2022-11-16 14:18:08     INFO    task_tclean::SynthesisDeconvolver::setupDeconvolution   Set Deconvolution Options for [/blue/adamginsburg/adamginsburg/ACES/workdir/h_spw35_cube_TM1_A001_X15a0_Xca/uid___A001_X15a0_Xca.s38_0.Sgr_A_star_sci.spw35.cube.I.iter1] : hogbom
2022-11-16 14:18:08     WARN    tclean::::casa  Memory available 65536000 kB is very close to amount of required memory 291070433 kB
2022-11-16 14:18:08     INFO    task_tclean::SynthesisImager::executeMajorCycle         ----------------------------------------------------------- Run Major Cycle 1 -------------------------------------
2022-11-16 14:18:09     INFO    task_tclean::SynthesisImagerVi2::nSubCubeFitInMemory    Required memory: 1998 GB. Available mem.: 62.5 GB (rc, mem. fraction: 80%, memory: 62.5) => Subcubes: 32. Processes on node: 1.
2022-11-16 14:18:09     SEVERE  tclean::::casa  Task tclean raised an exception of class RuntimeError with the following message: Error in running Major Cycle : Cannot open existing image : /blue/adamginsburg/adamginsburg/ACES/workdir/h_spw35_cube_TM1_A001_X15a0_Xca/uid___A001_X15a0_Xca.s38_0.Sgr_A_star_sci.spw35.cube.I.iter1.model : Error in opening Image : /blue/adamginsburg/adamginsburg/ACES/workdir/h_spw35_cube_TM1_A001_X15a0_Xca/uid___A001_X15a0_Xca.s38_0.Sgr_A_star_sci.spw35.cube.I.iter1.model
2022-11-16 14:18:09     INFO    tclean::::casa  Task tclean complete. Start time: 2022-11-16 09:18:03.543897 End time: 2022-11-16 09:18:08.980429
2022-11-16 14:18:09     INFO    tclean::::casa  ##### End Task: tclean               #####
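Incidentally, the nSubCubeFitInMemory line looks like straightforward ceiling division of required over available memory. Under that assumption (a guess from the numbers, not confirmed against the CASA source), the value in the log reproduces:

```python
import math

def n_subcubes(required_gb: float, available_gb: float) -> int:
    """Assumed formula: split the cube into just enough sections
    that each fits into available memory (ceiling division)."""
    return math.ceil(required_gb / available_gb)

# Values from the log: 1998 GB required, 62.5 GB available.
n_subcubes(1998, 62.5)  # -> 32, matching "Subcubes: 32"
```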

I've never seen this before - the error message doesn't clearly tell us what's going on, but I suspect there was a filesystem issue of some sort. This is particularly scary, because it did end up creating entirely useless (corrupt) .image files.
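Given that tclean reported success while leaving corrupt products, a defensive post-run check may be worthwhile. A minimal sketch (the suffix list is an assumption about which product directories a cube run should leave; CASA images are directories on disk):

```python
import os

# Suffixes of CASA image products expected after a cube tclean run.
# This list is an assumption for illustration; adjust to the actual run.
EXPECTED_SUFFIXES = ('.image', '.model', '.psf', '.residual')

def products_look_sane(imagename: str) -> bool:
    """Return True only if every expected product directory exists and
    is non-empty; a missing or empty one suggests a corrupted run."""
    for suffix in EXPECTED_SUFFIXES:
        path = imagename + suffix
        if not os.path.isdir(path) or not os.listdir(path):
            return False
    return True
```

Running this after each tclean call, rather than trusting the task's own exit status, would have caught the corrupt .image files here.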

@keflavich keflavich added the NeedsImagingQAAfterReimaging Has been reimaged, but noone has looked at the results yet. Are they better? label Feb 15, 2023
@d-l-walker
Contributor

d-l-walker commented Mar 15, 2023

All cubes have been imaged at full spectral resolution, and SPW 33 doesn't seem to have diverged this time. Marking this one as done. (Though note that we still have the calibration issues that we're investigating more generally ...)

@d-l-walker d-l-walker added Done! and removed Needs Reimaging: Divergence Needs reimaging b/c of divergence Needs Reimaging: Spectral Resolution size mitig Needs to be reimaged with full spectral resolution Needs Reimaging: Missing SPW size mitigation Needs to be reimaged without size mitigation NeedsImagingQAAfterReimaging Has been reimaged, but noone has looked at the results yet. Are they better? labels Mar 15, 2023
@nbudaiev
Contributor

nbudaiev commented Feb 28, 2024

QA - Line contamination in continuum images from high/low frequencies

Looks okay, maybe slight contamination in spw25_27.

uid___A001_X15a0_Xca s36_0 Sgr_A_star_sci spw33_35 cont I iter1 image tt0-uid___A001_X15a0_Xca s36_0 Sgr_A_star_sci oldhigh_spw33_35 cont I iter1 image tt0-uid___A001_X15a0_Xca s36_0 Sgr_A_star_sci sp-2024-02-28-13-36-02

uid___A001_X15a0_Xca s38_0 Sgr_A_star_sci spw35 cube I iter1 image pbcor statcont contsub_diagnostic_spectra

@nbudaiev
Contributor

nbudaiev commented Apr 3, 2024

QA - Line contamination in continuum images from high/low frequencies (compared against v1.1)

Looks great.

uid___A001_X15a0_Xca s36_0 Sgr_A_star_sci spw33_35 cont I iter1 image tt0 pbcor-uid___A001_X15a0_Xca s36_0 Sgr_A_star_sci v1 1_20240314_high_spw33_35 cont I iter1 image tt0 pbcor-uid___A001_X15a0_Xca -2024-04-03-00-44-43

uid___A001_X15a0_Xca s38_0 Sgr_A_star_sci spw35 cube I iter1 image_diagnostic_spectra
