Use IMEX PD-ARS scheme for radiation update #448
Conversation
First attempt. With a22=0 and a32=0, it successfully reduces to the SSP-RK2 scheme. However, with a22=1 and 0 < a32 <= 0.5 (IMEX PD-ARS), the RadShock and Coupling tests fail.
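For reference, a minimal sketch of the SSP-RK2 (Heun) scheme that the IMEX update should reduce to when a22=0 and a32=0, applied to a toy decay ODE du/dt = -u (the ODE is an assumed stand-in for illustration, not the radiation system):

```cpp
#include <cmath>

// Toy decay ODE du/dt = -u (an assumed stand-in, not the radiation system).
double f(double u) { return -u; }

// Integrate with SSP-RK2 (Heun): forward-Euler predictor, then the SSP average.
double ssp_rk2_integrate(double u0, double dt, int nsteps) {
  double u = u0;
  for (int s = 0; s < nsteps; ++s) {
    double u1 = u + dt * f(u);             // predictor stage
    u = 0.5 * u + 0.5 * (u1 + dt * f(u1)); // corrector / SSP combination
  }
  return u;
}
```

Halving dt should cut the error by roughly a factor of 4, confirming second-order accuracy.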
the Asymptotic Marshak test.
- Fixed a problem in implementing IMEX PD-ARS: Egas and Fgas should be updated by 0.4 dt Src instead of dt Src in the first stage.
- Added the RadhydroPulse test problem
for more information, see https://pre-commit.ci
@BenWibking Can you check and make sure that the setup of the marshak_asymptotic test and its analytic solution are correct?
clang-tidy made some suggestions
There were too many comments to post at once. Showing the first 10 out of 14. Check the log or trigger a new build to see more.
I took the parameters and solution from this paper: https://ui.adsabs.harvard.edu/abs/2008JCoPh.227.7561M/abstract. When I tested this problem while writing the original version, there was an issue with the accuracy of the Marshak boundary condition. As I noted in the comments in the code, I had to modify the Riemann solver to exactly enforce the boundary condition at the interface, so this is probably still necessary. Another option would be to derive a characteristic boundary condition and apply it to the ghost cells.
OK, I'll take a look at that and see if it's the issue of the boundary condition.
@BenWibking I am not able to follow the Asymptotic Marshak test problem and see how it is consistent with the setup in McClarren+2008 (sec 9.2). Can you help write down the initial conditions and boundary conditions of this test problem? Let's use this shared markdown document: https://hackmd.io/@chongchonghe/S1K-lWXVp/edit . I'd also like to write down the parameters and solutions of all the test problems in this document, to make the Test Problems page on the Quokka webpage complete.
It is precisely the setup described in section 9.2. I don't know how to make it any clearer.
The same Marshak wave problem is considered in section 8.2 of this paper: https://ui.adsabs.harvard.edu/abs/2019JCoPh.393..258L/abstract. They lag the opacities, computing them at a special temperature. Something like this might be necessary for the Marshak wave problem. However, I am not sure whether it can be generalized to 2D or 3D.
Ok, I've re-run the asymptotic Marshak wave with this PR. The issue is almost certainly caused by the enormous variation of the opacity within a given cell near the wave front. Unfortunately, it can be shown that a piecewise-constant basis for the temperature field when solving the full transport equation does not, in general, reproduce the asymptotic diffusion limit (Smedley-Stevenson et al. 2015). At sufficiently high resolution (thousands of cells), it does converge to the correct solution (although the convergence of the wavefront position is painfully slow). I would propose that we do one of the following:
With a different boundary condition (E = a T^4 and F = 0, isotropic, as stated in sec 9.2 of McClarren+08), the numerical solver converges to the analytic solution slightly faster, but it's still impractically slow. Could the error come from the approximation to the flux-mean opacity? Currently, all opacities are set equal to the Planck-mean opacity. The wave front is where the flux is largest and varies most; however, since kappa is not a function of nu, this is probably okay. I find it hard to believe that the issue is the steep change of opacity, because the error starts to show up at x = 0.1, where the slope of the T curve is not large. I think we should find a way to resolve this consistently, without special treatment of the wave front. Otherwise, for now, the third option makes the most sense, because we also need to consider the practicality of our implementation. In a real application of the code (say, the ISM around a YSO), any of these reconstructions would have to be done in all cells, since it's hard to define a subregion for this particular treatment.
I also find that, with n = 100, the wave travels 15% faster than the analytical solution from t = 1e-8 s to 10e-8 s.
The temperature reconstruction would have to be done for all cells. Most radiation transport codes use a linear nodal DG spatial discretization (with mass lumping) to avoid this issue. I think that's the most correct way to handle it, but it is significantly more complicated than a finite volume method, especially with AMR. You can verify whether the issue is due to the opacity variation by choosing a different power law index, plugging it into the equation in Zeldovich and Raizer, and numerically integrating the resulting ODE. Then you can run the test with that opacity.
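As a toy version of that experiment (assuming a pure nonlinear diffusion model dT/dt = d/dx(T^n dT/dx) with a tunable power-law index n as a stand-in for the actual Marshak problem; the grid and initial profile below are made up for illustration), one could integrate:

```cpp
#include <vector>
#include <cmath>

// Toy model: dT/dt = d/dx( T^n dT/dx ), explicit flux-form update with
// zero-flux boundaries. The exponent n mimics the opacity power-law steepness;
// larger n gives a sharper, Marshak-like front.
std::vector<double> diffuse(std::vector<double> T, double n,
                            double dx, double dt, int nsteps) {
  const int N = static_cast<int>(T.size());
  for (int s = 0; s < nsteps; ++s) {
    std::vector<double> F(N + 1, 0.0); // face fluxes; F[0] = F[N] = 0
    for (int i = 1; i < N; ++i) {
      double D = std::pow(0.5 * (T[i - 1] + T[i]), n); // face-centered diffusivity
      F[i] = -D * (T[i] - T[i - 1]) / dx;
    }
    for (int i = 0; i < N; ++i)
      T[i] -= dt / dx * (F[i + 1] - F[i]); // conservative update
  }
  return T;
}
```

Because the update is in flux form with zero-flux boundaries, the total "energy" is conserved to roundoff, which gives a quick sanity check while varying n.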
I think I've figured out how to generalize it to 2D/3D. You can reconstruct
I think the reason it worked with SDC but not with IMEX is that in SDC, the transport step is iterated until it is consistent with the matter-radiation exchange step. In the IMEX method, however, the optical depth correction in the Riemann solver is always based on the lagged opacities. This explains why the IMEX method appears to converge as the CFL number approaches zero, even at fixed spatial resolution (and even without the special opacities). I propose that we wrap the second stage of the IMEX method in a Picard iteration that continues until the cell-average flux-mean opacities (as used in the Riemann solver) stop changing. My guess is that this will make everything work. This kind of fixed-point iteration is done in, e.g., the PeleC combustion code, in order to tightly couple the advection terms with the reactions (https://amrex-combustion.github.io/PeleC/Algorithms.html#standard-time-advance). This makes the time integration an implicit trapezoidal rule method.
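A minimal sketch of the proposed lagged-opacity Picard iteration, on a scalar relaxation equation dT/dt = -kappa(T)(T - Teq) with a hypothetical opacity law kappa(T) = T^-3 (both the equation and the law are assumptions for illustration; the real update involves the full Riemann solver and flux-mean opacities):

```cpp
#include <cmath>

// Hypothetical opacity law, for illustration only.
double kappa(double T) { return 1.0 / (T * T * T); }

// One backward-Euler step of dT/dt = -kappa(T) (T - Teq), solved by Picard
// iteration with a lagged opacity: iterate until the cell "opacity" stops
// changing, which is the convergence criterion proposed above.
double picard_step(double T0, double Teq, double dt,
                   double tol = 1e-12, int max_iter = 100) {
  double T = T0;
  for (int k = 0; k < max_iter; ++k) {
    double kap = kappa(T); // lagged opacity
    double Tnew = (T0 + dt * kap * Teq) / (1.0 + dt * kap);
    if (std::fabs(kappa(Tnew) - kap) < tol * kap) return Tnew;
    T = Tnew;
  }
  return T;
}
```

At convergence the result satisfies the fully implicit equation, so the lagging no longer introduces any splitting error.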
OK, I'll look into that. The variable-kappa radiation pulse (in the extreme diffusion limit) fails as well. What I expect is Figure 6 (green curve) of CASTRO III, but what I get is as follows: the diffusion is too slow and there is an (odd-even?) instability at the center. As in the Marshak Asymptotic test, this fails because of the rapid change of opacity at the wavefront.
That is interesting. I don't know why the instability appears here but is fixed in the other tests. It might be that the instability goes away once the variable opacity is treated correctly. Do you get a solution closer to the correct answer with
I tried cfl = 0.4 (half the original) and it didn't make any difference. I didn't try a smaller cfl because it causes something like "assertion failed: N_subcycle < N_max", but I guess I can increase N_max manually. I'll try it later.
That sounds like a bug in the subcycling code... can you open an issue for that?
This is very interesting, and I'll try it and see if that fixes the problem. Let me try to write down the equations and see if they make sense. The SSP-RK2 method goes and the IMEX PD-ARS method goes I have used the simple case where Then, here is a modified version of the ARS method that iterates at the last step. We begin with stage one of the ARS method Then, we iterate through @BenWibking Let me know if this makes sense to you. This is probably easy to code, so I'm gonna do it right now and then think about other improvements. I reckon that at some point we want to cycle the transport equations along with the matter-radiation exchange, since the latter is the bottleneck anyway.
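The equations in the comment above were rendered as images and are not reproduced in this transcript, so the following is only a schematic reconstruction of the iterated two-stage idea on a scalar model du/dt = f(u) - kap(u)(u - ueq). The stage coefficients and the opacity law kap(u) = u^2 are assumptions for illustration, not the actual Quokka/PD-ARS tableau; the one grounded property is that with the source switched off it reduces to SSP-RK2:

```cpp
#include <cmath>
#include <functional>

// Schematic two-stage IMEX step: explicit part f, stiff source
// g(u) = -kap(u) * (u - ueq), with each implicit solve done by Picard
// iteration on the lagged "opacity" (iterated until it stops changing).
double imex_step(double u0, double dt,
                 const std::function<double(double)>& f,
                 const std::function<double(double)>& kap,
                 double ueq, double tol = 1e-10, int max_iter = 200) {
  // Stage 1: u1 = u0 + dt*f(u0) + dt*g(u1), lagged-kappa fixed point.
  double rhs1 = u0 + dt * f(u0);
  double u1 = u0;
  for (int k = 0; k < max_iter; ++k) {
    double kl = kap(u1);
    double unew = (rhs1 + dt * kl * ueq) / (1.0 + dt * kl);
    if (std::fabs(unew - u1) < tol) { u1 = unew; break; }
    u1 = unew;
  }
  // Stage 2: v = (u0 + u1)/2 + (dt/2)*f(u1) + (dt/2)*g(v), Picard-iterated.
  double rhs2 = 0.5 * (u0 + u1) + 0.5 * dt * f(u1);
  double v = u1;
  for (int k = 0; k < max_iter; ++k) {
    double kl = kap(v);
    double vnew = (rhs2 + 0.5 * dt * kl * ueq) / (1.0 + 0.5 * dt * kl);
    if (std::fabs(vnew - v) < tol) { v = vnew; break; }
    v = vnew;
  }
  return v;
}
```

With kap = 0 the two stages collapse to u1 = u0 + dt f(u0) and v = (u0 + u1)/2 + (dt/2) f(u1), i.e. exactly Heun's SSP-RK2.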
Yes, this makes sense to me. The opacity is not necessarily the optimal quantity to check to determine whether it has converged; that is just a guess on my part. My guess is that the opacity is a good quantity to check for convergence because that's what causes the inaccuracy. But it might also be useful to check how much the radiation energy density changes between iterations, or some combination of the two.
@BenWibking
This will cause invalid states in the radiation. I'll need to fix it later. However, there are invalid states even with the diffusive-flux correction, anyway. You can see that by running the current PD-ARS branch with debugging enabled. I figured this out from Section 3.1 of https://ui.adsabs.harvard.edu/abs/2022MNRAS.512.1499R (the paper with the equation you used to fix the odd-even instability problem). They mention that the diffusive correction should be switched off at high optical depth. I guess this is because you get an accurate diffusive result at high optical depth anyway, so you don't need the correction. You must understand this better than I do. Anyway, removing the routine mentioned above is just a temporary solution, as it will definitely introduce other problems in other tests. We have to think about a smarter way to reconcile the optically thin and optically thick regimes. Perhaps instead of I'll update this PR shortly.
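One possible form of such a switch (the exponential blend below is an assumption for illustration, not the formula from the cited paper) is a per-cell weight that keeps the diffusive-flux correction in optically thin cells and turns it off smoothly once the cell optical depth tau = kappa * rho * dx exceeds unity:

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical blending weight for the diffusive-flux correction:
// returns 1 for per-cell optical depth tau <= 1 (optically thin: keep the
// correction) and decays exponentially for tau > 1 (optically thick:
// switch it off, since the diffusive result is already accurate there).
double correction_weight(double kappa, double rho, double dx) {
  double tau = kappa * rho * dx;            // per-cell optical depth
  return std::min(1.0, std::exp(1.0 - tau)); // smooth cutoff above tau = 1
}
```

Multiplying the correction term by this weight gives a continuous transition between the two regimes instead of a hard on/off switch.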
for more information, see https://pre-commit.ci
/azp run clang-tidy-review
No pipelines are associated with this pull request.
What's happening with clang-tidy-review?
It's not controlled by the magic comments. It should re-run automatically. It's reporting a failure because of an unresolved warning that you can see here: https://github.com/quokka-astro/quokka/actions/runs/7108990100/job/19354774401?pr=448#step:4:1382
/azp run
Azure Pipelines successfully started running 4 pipeline(s).
/azp run
Azure Pipelines successfully started running 4 pipeline(s).
All tests passed. Should be ready to merge now.
There is a major performance issue: you fill the ghost cells 4 times per radiation update. This will have a severe negative performance impact at scale. It should not be necessary, at least in principle, to fill the ghost cells after the RK stage. Unless I'm missing something, the radiation-matter exchange should only take place in the valid (non-ghost) cells and should not need information from adjacent cells.
There are some minor code style comments that I made.
/azp run
Azure Pipelines successfully started running 4 pipeline(s).
You are totally right about the boundary condition. I removed two of the fill_boundary_condition calls; the other two, before each RK stage, should be necessary. I resolved all your comments.
Except for the one unused function, everything looks good to me. Just waiting for the tests to pass, then I'll approve.
/azp run
Azure Pipelines successfully started running 4 pipeline(s).
Description
This PR replaces the radiation time integration scheme, switching from SSP-RK2 plus an implicit solver to the IMEX PD-ARS scheme. It also removes diffusive fluxes from the predictor and flux-addition steps of SSP-RK2. These changes resolve issues with the asymptotic Marshak test. The PR also standardizes the criterion for the odd-even instability fix. Testing shows correct results for the Marshak Asymptotic test.
Notes:
- To recover the original SSP-RK2 scheme, set `IMEX_a22 = 0.0; IMEX_a32 = 0.0;` in radiation_system.hpp.
- The odd-even instability fix criterion is based on `i+j+k` being odd.

Related issues
N/A
Checklist
Before this pull request can be reviewed, all of these tasks should be completed. Denote completed tasks with an `x` inside the square brackets `[ ]` in the Markdown source below.

/azp run