Immersed boundaries integration with IGR #1095
base: master
Conversation
CodeAnt AI is reviewing your PR.
Note: CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

Adds IBM ↔ IGR integration: new subroutine

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Solver as IGR Solver
    participant Jac as Jacobian (jac / jac_old)
    participant IBM as m_ibm (s_interpolate_sigma_igr / s_interpolate_image_point)
    participant GPU as GPU-parallel loops
    Note over Solver: main IGR iterative solve
    Solver->>IBM: call s_interpolate_image_point(..., optional q_cons_vf)
    IBM-->>Solver: interpolated image-point data
    alt ib (IBM) active, post-iteration
        Solver->>Jac: zero jac at IB markers (parallel)
        Solver->>IBM: call s_interpolate_sigma_igr(jac)
        IBM->>GPU: perform GPU-parallel interpolation & assemble q_cons_vf / sigma
        GPU->>Jac: update jac (ghost/image points)
        alt Jacobi solver
            Solver->>Jac: copy jac to jac_old (parallel)
        end
    end
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Pre-merge checks and finishing touches: ✅ 3 checks passed.
PR Reviewer Guide 🔍

Here are some key observations to aid the review process:
High-level Suggestion
The implementation for moving immersed boundaries with IGR is incomplete, as indicated by a !TEMPORARY code comment. This should be either fully implemented or disabled with a runtime check to avoid incorrect simulations. [High-level, importance: 9]
Solution Walkthrough:

Before:

```fortran
! in s_ibm_correct_state
...
! !TEMPORARY, NEED TO FIX FOR MOVING IGR
if (.not. igr) then
  if (patch_ib(patch_id)%moving_ibm <= 1) then
    q_prim_vf(E_idx)%sf(j, k, l) = pres_IP
  else
    ! Logic for moving IBM pressure calculation
    ...
  end if
end if
! When igr is true, pressure is not set for the ghost point.
...
```
After:

```fortran
! in s_ibm_correct_state
...
! set the pressure
if (igr .and. patch_ib(patch_id)%moving_ibm > 1) then
  ! Option A: add a runtime error to prevent invalid simulations
  ! (using s_mpi_abort per the project's error-handling convention)
  call s_mpi_abort("Moving IBM with IGR is not yet supported.")
else
  ! Option B: fully implement the feature
  if (patch_ib(patch_id)%moving_ibm <= 1) then
    q_prim_vf(E_idx)%sf(j, k, l) = pres_IP
  else
    ! Correctly implement moving IBM pressure calculation for IGR
    ...
  end if
end if
...
```
```fortran
if (num_fluids == 1) then
  alpha_rho_IP(1) = alpha_rho_IP(1) + coeff*q_cons_vf(contxb)%sf(i, j, k)
  alpha_IP(1) = alpha_IP(1) + coeff*1._wp
else
  $:GPU_LOOP(parallelism='[seq]')
  do l = 1, num_fluids - 1
    alpha_rho_IP(l) = alpha_rho_IP(l) + coeff*q_cons_vf(l)%sf(i, j, k)
    alpha_IP(l) = alpha_IP(l) + coeff*q_cons_vf(E_idx + l)%sf(i, j, k)
    alphaSum = alphaSum + q_cons_vf(E_idx + l)%sf(i, j, k)
  end do
  alpha_rho_IP(num_fluids) = alpha_rho_IP(num_fluids) + coeff*q_cons_vf(num_fluids)%sf(i, j, k)
  alpha_IP(num_fluids) = alpha_IP(num_fluids) + coeff*(1._wp - alphaSum)
end if
```
✅ Suggestion: Reset alphaSum to zero for each interpolation stencil point to fix an incorrect volume fraction calculation. [possible issue, importance: 9]
```diff
 if (num_fluids == 1) then
   alpha_rho_IP(1) = alpha_rho_IP(1) + coeff*q_cons_vf(contxb)%sf(i, j, k)
   alpha_IP(1) = alpha_IP(1) + coeff*1._wp
 else
+  alphaSum = 0._wp
   $:GPU_LOOP(parallelism='[seq]')
   do l = 1, num_fluids - 1
     alpha_rho_IP(l) = alpha_rho_IP(l) + coeff*q_cons_vf(l)%sf(i, j, k)
     alpha_IP(l) = alpha_IP(l) + coeff*q_cons_vf(E_idx + l)%sf(i, j, k)
     alphaSum = alphaSum + q_cons_vf(E_idx + l)%sf(i, j, k)
   end do
   alpha_rho_IP(num_fluids) = alpha_rho_IP(num_fluids) + coeff*q_cons_vf(num_fluids)%sf(i, j, k)
   alpha_IP(num_fluids) = alpha_IP(num_fluids) + coeff*(1._wp - alphaSum)
 end if
```
Good catch.
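The effect of the missing reset can be seen with a small sketch (illustrative Python, not the repository's Fortran): if the running sum is not zeroed for each stencil point, the reconstructed last volume fraction uses an inflated sum and the fractions no longer add up to one.

```python
# Illustrative sketch (not MFC code): why the running sum must be reset
# per stencil point. Volume fractions alpha_1..alpha_N must sum to 1, so
# the last one is reconstructed as 1 - sum(alpha_1..alpha_{N-1}).

def interpolate_alpha(stencil, coeffs, reset_per_point):
    """Accumulate interpolated volume fractions over a stencil.

    stencil: list of per-point volume-fraction lists [a1, ..., a_{N-1}]
    coeffs:  interpolation weights (sum to 1)
    """
    num_fluids = len(stencil[0]) + 1
    alpha_ip = [0.0] * num_fluids
    alpha_sum = 0.0
    for alphas, c in zip(stencil, coeffs):
        if reset_per_point:
            alpha_sum = 0.0          # the fix: reset for each stencil point
        for l, a in enumerate(alphas):
            alpha_ip[l] += c * a
            alpha_sum += a           # running sum for this point only
        alpha_ip[-1] += c * (1.0 - alpha_sum)
    return alpha_ip

stencil = [[0.3], [0.3]]             # two stencil points, two fluids
coeffs = [0.5, 0.5]

buggy = interpolate_alpha(stencil, coeffs, reset_per_point=False)
fixed = interpolate_alpha(stencil, coeffs, reset_per_point=True)
print(sum(buggy))  # 0.85: the last fraction was under-counted
print(sum(fixed))  # 1.0
```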
```fortran
if (ib) then
  $:GPU_PARALLEL_LOOP(private='[j,k,l]', collapse=3)
  do l = 0, p
    do k = 0, n
      do j = 0, m
        if (ib_markers%sf(j, k, l) /= 0) then
          jac(j, k, l) = 0._wp
        end if
      end do
    end do
  end do
  $:END_GPU_PARALLEL_LOOP()
  call s_update_igr(jac_sf)
  ! If using Jacobi Iter, we need to update jac_old again
  if (igr_iter_solver == 1) then
    $:GPU_PARALLEL_LOOP(private='[j,k,l]', collapse=3)
    do l = idwbuff(3)%beg, idwbuff(3)%end
      do k = idwbuff(2)%beg, idwbuff(2)%end
        do j = idwbuff(1)%beg, idwbuff(1)%end
          jac_old(j, k, l) = jac(j, k, l)
        end do
      end do
    end do
    $:END_GPU_PARALLEL_LOOP()
  end if
end if
```
Suggestion: Update jac_old for the Gauss-Seidel solver (igr_iter_solver == 2) in addition to the Jacobi solver to ensure correct convergence checks. [possible issue, importance: 8]
```diff
 if (ib) then
   $:GPU_PARALLEL_LOOP(private='[j,k,l]', collapse=3)
   do l = 0, p
     do k = 0, n
       do j = 0, m
         if (ib_markers%sf(j, k, l) /= 0) then
           jac(j, k, l) = 0._wp
         end if
       end do
     end do
   end do
   $:END_GPU_PARALLEL_LOOP()
   call s_update_igr(jac_sf)
-  ! If using Jacobi Iter, we need to update jac_old again
-  if (igr_iter_solver == 1) then
+  ! If using Jacobi Iter, we need to update jac_old again.
+  ! This is also needed for Gauss-Seidel convergence check.
+  if (igr_iter_solver == 1 .or. igr_iter_solver == 2) then
     $:GPU_PARALLEL_LOOP(private='[j,k,l]', collapse=3)
     do l = idwbuff(3)%beg, idwbuff(3)%end
       do k = idwbuff(2)%beg, idwbuff(2)%end
         do j = idwbuff(1)%beg, idwbuff(1)%end
           jac_old(j, k, l) = jac(j, k, l)
         end do
       end do
     end do
     $:END_GPU_PARALLEL_LOOP()
   end if
 end if
```
Not relevant.
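The distinction behind this exchange is worth spelling out: Jacobi iteration reads only the previous iterate, so it needs a separate copy (the role `jac_old` plays), while Gauss-Seidel updates in place. A minimal numeric sketch (illustrative Python, not MFC code):

```python
# Illustrative sketch (not MFC code): Jacobi needs the full previous
# iterate (here rebuilt each step, mirroring the jac -> jac_old copy),
# whereas Gauss-Seidel overwrites its vector in place.

def jacobi_step(A, b, x_old):
    n = len(b)
    # every entry is computed from x_old only
    return [(b[i] - sum(A[i][j] * x_old[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)]

def gauss_seidel_step(A, b, x):
    n = len(b)
    for i in range(n):
        # uses entries of x already updated this sweep
        s = sum(A[i][j] * x[j] for j in range(n) if j != i)
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]

x = [0.0, 0.0]
for _ in range(50):
    x = jacobi_step(A, b, x)   # the old iterate is preserved each step
print(x)  # ~[1/11, 7/11], the solution of Ax = b
```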
```fortran
subroutine s_interpolate_image_point(q_prim_vf, gp, alpha_rho_IP, alpha_IP, &
                                     pres_IP, vel_IP, c_IP, r_IP, v_IP, pb_IP, &
                                     mv_IP, nmom_IP, pb_in, mv_in, presb_IP, massv_IP, q_cons_vf)
  $:GPU_ROUTINE(parallelism='[seq]')
  type(scalar_field), &
    dimension(sys_size), &
    intent(IN) :: q_prim_vf !< Primitive Variables
  type(scalar_field), optional, &
    dimension(sys_size), &
    intent(IN) :: q_cons_vf !< Conservative Variables
  real(stp), optional, dimension(idwbuff(1)%beg:, idwbuff(2)%beg:, idwbuff(3)%beg:, 1:, 1:), intent(IN) :: pb_in, mv_in
  type(ghost_point), intent(IN) :: gp
  real(wp), intent(INOUT) :: pres_IP
  real(wp), dimension(3), intent(INOUT) :: vel_IP
  real(wp), intent(INOUT) :: c_IP
  real(wp), dimension(num_fluids), intent(INOUT) :: alpha_IP, alpha_rho_IP
  real(wp), optional, dimension(:), intent(INOUT) :: r_IP, v_IP, pb_IP, mv_IP
  real(wp), optional, dimension(:), intent(INOUT) :: nmom_IP
  real(wp), optional, dimension(:), intent(INOUT) :: presb_IP, massv_IP
  integer :: i, j, k, l, q !< Iterator variables
  integer :: i1, i2, j1, j2, k1, k2 !< Iterator variables
  real(wp) :: coeff
  real(wp) :: alphaSum
  real(wp) :: pres, dyn_pres, pres_mag, T
  real(wp) :: rhoYks(1:num_species)
  real(wp) :: rho_K, gamma_K, pi_inf_K, qv_K
  real(wp), dimension(num_fluids) :: alpha_K, alpha_rho_K
  real(wp), dimension(2) :: Re_K

  i1 = gp%ip_grid(1); i2 = i1 + 1
  j1 = gp%ip_grid(2); j2 = j1 + 1
  k1 = gp%ip_grid(3); k2 = k1 + 1
  if (p == 0) then
    k1 = 0
    k2 = 0
  end if

  alpha_rho_IP = 0._wp
  alpha_IP = 0._wp
  pres_IP = 0._wp
  vel_IP = 0._wp
  pres = 0._wp
  if (surface_tension) c_IP = 0._wp
  if (bubbles_euler) then
    r_IP = 0._wp
    v_IP = 0._wp
    if (.not. polytropic) then
      mv_IP = 0._wp
      pb_IP = 0._wp
    end if
  end if
  if (qbmm) then
    nmom_IP = 0._wp
    if (.not. polytropic) then
      presb_IP = 0._wp
      massv_IP = 0._wp
    end if
  end if

  $:GPU_LOOP(parallelism='[seq]')
  do i = i1, i2
    $:GPU_LOOP(parallelism='[seq]')
    do j = j1, j2
      $:GPU_LOOP(parallelism='[seq]')
      do k = k1, k2
        coeff = gp%interp_coeffs(i - i1 + 1, j - j1 + 1, k - k1 + 1)
        pres_IP = pres_IP + coeff* &
                  q_prim_vf(E_idx)%sf(i, j, k)
        if (igr) then
          ! For IGR, we will need to perform operations on
          ! the conservative variables instead
          alphaSum = 0._wp
          dyn_pres = 0._wp
          if (num_fluids == 1) then
            alpha_rho_K(1) = q_cons_vf(1)%sf(i, j, k)
            alpha_K(1) = 1._wp
          else
            $:GPU_LOOP(parallelism='[seq]')
            do l = 1, num_fluids - 1
              alpha_rho_K(l) = q_cons_vf(l)%sf(i, j, k)
              alpha_K(l) = q_cons_vf(E_idx + l)%sf(i, j, k)
            end do
            alpha_rho_K(num_fluids) = q_cons_vf(num_fluids)%sf(i, j, k)
            alpha_K(num_fluids) = 1._wp - sum(alpha_K(1:num_fluids - 1))
          end if
          if (model_eqns /= 4) then
            if (elasticity) then
              ! call s_convert_species_to_mixture_variables_acc(rho_K, gamma_K, pi_inf_K, qv_K, alpha_K, &
              !                                                 alpha_rho_K, Re_K, G_K, Gs_vc)
            else
              call s_convert_species_to_mixture_variables_acc(rho_K, gamma_K, pi_inf_K, qv_K, &
                                                              alpha_K, alpha_rho_K, Re_K)
            end if
          end if
          $:GPU_LOOP(parallelism='[seq]')
          do q = momxb, momxe
            vel_IP(q + 1 - momxb) = vel_IP(q + 1 - momxb) + coeff* &
                                    q_prim_vf(q)%sf(i, j, k)
          end do
          rho_K = max(rho_K, sgm_eps)
          $:GPU_LOOP(parallelism='[seq]')
          do l = contxb, contxe
            alpha_rho_IP(l) = alpha_rho_IP(l) + coeff* &
                              q_prim_vf(l)%sf(i, j, k)
            alpha_IP(l) = alpha_IP(l) + coeff* &
                          q_prim_vf(advxb + l - 1)%sf(i, j, k)
          end do
          $:GPU_LOOP(parallelism='[seq]')
          do l = momxb, momxe
            if (model_eqns /= 4) then
              dyn_pres = dyn_pres + 5.e-1_wp*q_cons_vf(l)%sf(i, j, k) &
                         *q_cons_vf(l)%sf(i, j, k)/rho_K
            end if
          end do
          pres_mag = 0._wp
          call s_compute_pressure(q_cons_vf(E_idx)%sf(i, j, k), &
                                  q_cons_vf(alf_idx)%sf(i, j, k), &
                                  dyn_pres, pi_inf_K, gamma_K, rho_K, &
                                  qv_K, rhoYks, pres, T, pres_mag=pres_mag)
```
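The loop above accumulates contributions from the 2x2x2 image-point stencil with weights `gp%interp_coeffs`. A minimal sketch of that kind of trilinear weighting (illustrative Python, not the repository's implementation; the helper names are made up):

```python
# Illustrative sketch of trilinear interpolation over a 2x2x2 stencil,
# analogous to the coeff-weighted accumulation above. Each weight is a
# product of (1 - d) or d along every axis, so the weights sum to 1.

def trilinear_weights(dx, dy, dz):
    """dx, dy, dz in [0, 1]: fractional position inside the cell."""
    w = {}
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                w[(i, j, k)] = ((dx if i else 1 - dx)
                                * (dy if j else 1 - dy)
                                * (dz if k else 1 - dz))
    return w

def interpolate(field, base, frac):
    """field: dict mapping (i, j, k) -> value; base: stencil corner index."""
    i0, j0, k0 = base
    w = trilinear_weights(*frac)
    return sum(c * field[(i0 + di, j0 + dj, k0 + dk)]
               for (di, dj, dk), c in w.items())

# Because the weights sum to 1, a constant field is reproduced exactly.
field = {(i, j, k): 5.0 for i in (0, 1) for j in (0, 1) for k in (0, 1)}
print(interpolate(field, (0, 0, 0), (0.5, 0.25, 0.75)))  # 5.0
```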
Suggestion: Add a check to ensure the optional q_cons_vf argument is provided to s_interpolate_image_point when igr is enabled, and stop with an error if it is missing. [possible issue, importance: 7]
```fortran
subroutine s_interpolate_image_point(..., presb_IP, massv_IP, q_cons_vf)
  ...
  if (igr .and. .not. present(q_cons_vf)) then
    error stop 'q_cons_vf must be provided when igr is enabled'
  end if
  if (igr) then
    alphaSum = 0._wp
    ...
    alpha_rho_K(1) = q_cons_vf(1)%sf(i, j, k)
    ...
  else
    ...
  end if
```
Not relevant.
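For readers unfamiliar with Fortran's optional arguments: accessing an omitted optional dummy argument is undefined behavior, so any guard must run before the first access. A rough Python analogue of the suggested pattern (illustrative only; the names mirror the Fortran but the logic is simplified):

```python
# Illustrative Python analogue (not MFC code) of guarding an optional
# argument before use, as Fortran's present() does for q_cons_vf.

def interpolate_image_point(q_prim, igr=False, q_cons=None):
    # guard first: never touch q_cons before confirming it was passed
    if igr and q_cons is None:
        raise ValueError("q_cons must be provided when igr is enabled")
    if igr:
        return sum(q_cons) / len(q_cons)   # safe: guard already ran
    return sum(q_prim) / len(q_prim)

print(interpolate_image_point([1.0, 3.0]))                               # 2.0
print(interpolate_image_point([1.0, 3.0], igr=True, q_cons=[2.0, 6.0]))  # 4.0
```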
Nitpicks 🔍

src/simulation/m_igr.fpp (Outdated)

```fortran
use m_boundary_common

use m_ibm
```
✅ Suggestion: The PR adds a broad use m_ibm statement which imports the entire module namespace; importing all symbols increases risk of name collisions and makes it unclear which symbols are used by this file. Restrict the import to only the symbols actually required (for example ib, ib_markers, s_update_igr) to avoid accidental symbol conflicts and improve code clarity. [possible bug]
Severity Level: Critical 🚨
```diff
-use m_ibm
+use m_ibm, only: ib, ib_markers, s_update_igr
```
Why it matters? ⭐
Restricting module imports with "only:" is a good Fortran practice and improves clarity. In this file the newly added code uses ib, ib_markers and s_update_igr from m_ibm; switching to "use m_ibm, only: ib, ib_markers, s_update_igr" reduces namespace pollution and the risk of accidental name clashes. This is stylistic but sensible and safe, provided you list every symbol from m_ibm that this module actually uses.
Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** src/simulation/m_igr.fpp
**Line:** 18:18
**Comment:**
*Possible Bug: The PR adds a broad `use m_ibm` statement which imports the entire module namespace; importing all symbols increases risk of name collisions and makes it unclear which symbols are used by this file. Restrict the import to only the symbols actually required (for example `ib`, `ib_markers`, `s_update_igr`) to avoid accidental symbol conflicts and improve code clarity.
Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and please make it concise.
Good suggestion.
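The rationale carries over to other languages: Fortran's `use m_ibm, only: ...` plays the same role as an explicit import list in Python, where a wildcard import pulls every public name into the namespace. A small sketch of the analogy (illustrative, not MFC code):

```python
# Illustrative analogy (not MFC code): restrict imports to the symbols a
# file actually uses, so collisions are visible instead of silent.

# Wildcard form (commented out): every public symbol of `math` would land
# in this namespace, and a later wildcard import from another module could
# silently shadow names like `e` or `pi`.
# from math import *

# Explicit form: the analogue of `use m_ibm, only: ib, ib_markers, ...`
from math import pi

def circle_circumference(r):
    return 2 * pi * r

print(circle_circumference(1.0))  # 6.283185307179586
```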
CodeAnt AI finished reviewing your PR.
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/simulation/m_ibm.fpp (1)
891-1083: Add `present(q_cons_vf)` guard to the IGR branch in `s_interpolate_image_point`

The `if (igr)` branch at line 963 dereferences `q_cons_vf` without checking whether it is present. While the call site in `s_ibm_correct_state` only passes `q_cons_vf=` in the explicit `else if (igr)` branch, the mutually-exclusive if-else-if structure means that when both `bubbles_euler = .true.` and `igr = .true.` are set, the call enters the `bubbles_euler` branch without passing `q_cons_vf`, but the subroutine still executes the `if (igr)` block internally with an absent optional argument. The Fortran standard prohibits accessing an omitted optional argument, creating undefined behavior on lines 972–981 and 995–996.

Change line 963 from `if (igr) then` to `if (igr .and. present(q_cons_vf)) then`.

Optionally add `@:ASSERT(.not. igr .or. present(q_cons_vf), 'IGR requires q_cons_vf in s_interpolate_image_point')` near the top of the subroutine for early detection of unsupported configurations.
🧹 Nitpick comments (2)

src/simulation/m_igr.fpp (1)

18-18: New dependency on `m_ibm` looks reasonable but widens coupling

Bringing `m_ibm` into `m_igr` to access `ib_markers` and `s_update_igr` is consistent with the new IBM–IGR workflow and doesn't introduce a circular dependency (`m_ibm` doesn't `use m_igr`). Just be aware this further couples the IGR solver to IBM internals; if you intend `s_update_igr` to stay IBM-internal, consider exposing only what's needed via an `only:` list on the `use` statement in a follow-up.

src/simulation/m_ibm.fpp (1)

315-328: Pressure setting skipped for IGR is fine but note the TODO for moving IGR

The pressure assignment block guarded by `! !TEMPORARY, NEED TO FIX FOR MOVING IGR` and `if (.not. igr)` correctly avoids reusing primitive-variable pressure logic for IGR cases, which now build ghost states from conservative variables. The in-code TODO acknowledges that moving IBM + IGR will eventually need a dedicated treatment; for static or currently tested configurations the guard is appropriate.

I'd just suggest turning this into a more explicit FIXME (or opening a tracked issue) so the moving-IGR pressure model doesn't get forgotten.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- src/simulation/m_ibm.fpp (11 hunks)
- src/simulation/m_igr.fpp (2 hunks)
- toolchain/mfc/case_validator.py (0 hunks)
💤 Files with no reviewable changes (1)
- toolchain/mfc/case_validator.py
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{fpp,f90}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
**/*.{fpp,f90}: Use 2-space indentation; continuation lines align beneath &
Use lower-case keywords and intrinsics (do, end subroutine, etc.)
Name modules with m_ pattern (e.g., m_transport)
Name public subroutines with s_ pattern (e.g., s_compute_flux)
Name public functions with f_ pattern
Keep subroutine size ≤ 500 lines, helper subroutines ≤ 150 lines, functions ≤ 100 lines, files ≤ 1000 lines
Limit routine arguments to ≤ 6; use derived-type params struct if more are needed
Forbid goto statements (except in legacy code), COMMON blocks, and save globals
Every argument must have explicit intent; use dimension/allocatable/pointer as appropriate
Call s_mpi_abort() for errors, never use stop or error stop
**/*.{fpp,f90}: Indent 2 spaces; continuation lines align under `&`
Use lower-case keywords and intrinsics (`do`, `end subroutine`, etc.)
Name modules with `m_<feature>` prefix (e.g., `m_transport`)
Name public subroutines as `s_<verb>_<noun>` (e.g., `s_compute_flux`) and functions as `f_<verb>_<noun>`
Keep private helpers in the module; avoid nested procedures
Enforce size limits: subroutine ≤ 500 lines, helper ≤ 150, function ≤ 100, module/file ≤ 1000
Limit subroutines to ≤ 6 arguments; otherwise pass a derived-type 'params' struct
Avoid `goto` statements (except unavoidable legacy); avoid global state (`COMMON`, `save`)
Every variable must have `intent(in|out|inout)` specification and appropriate `dimension`/`allocatable`/`pointer`
Use `s_mpi_abort(<msg>)` for error termination instead of `stop`
Use `!>` style documentation for header comments; follow Doxygen Fortran format with `!! @param` and `!! @return` tags
Use `implicit none` statement in all modules
Use `private` declaration followed by explicit `public` exports in modules
Use derived types with pointers for encapsulation (e.g., `pointer, dimension(:,:,:) => null()`)
Use `pure` and `elemental` attributes for side-effect-free functions; combine them for array ...
Files:
- src/simulation/m_igr.fpp
- src/simulation/m_ibm.fpp
src/simulation/**/*.{fpp,f90}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
src/simulation/**/*.{fpp,f90}: Wrap tight GPU loops with !$acc parallel loop gang vector default(present) reduction(...); add collapse(n) when safe; declare loop-local variables with private(...)
Allocate large GPU arrays with managed memory or move them into persistent !$acc enter data regions at start-up
Avoid stop/error stop inside GPU device code
Ensure GPU code compiles with Cray ftn, NVIDIA nvfortran, GNU gfortran, and Intel ifx/ifort compilers
src/simulation/**/*.{fpp,f90}: Mark GPU-callable helpers with `$:GPU_ROUTINE(function_name='...', parallelism='[seq]')` immediately after declaration
Do not use OpenACC or OpenMP directives directly; use Fypp macros from `src/common/include/parallel_macros.fpp` instead
Wrap tight loops with the `$:GPU_PARALLEL_FOR(private='[...]', copy='[...]')` macro; add `collapse=n` for safe nested loop merging
Declare loop-local variables with `private='[...]'` in GPU parallel loop macros
Allocate large arrays with `managed` or move them into a persistent `$:GPU_ENTER_DATA(...)` region at start-up
Do not place `stop` or `error stop` inside device code
Files:
- src/simulation/m_igr.fpp
- src/simulation/m_ibm.fpp
src/**/*.fpp
📄 CodeRabbit inference engine (.cursor/rules/mfc-agent-rules.mdc)
src/**/*.fpp: Use the `.fpp` file extension for Fypp preprocessed files; CMake transpiles them to `.f90`
Start module files with the Fypp include for macros: `#:include 'macros.fpp'`
Use the Fypp `ASSERT` macro for validating conditions: `@:ASSERT(predicate, message)`
Use the Fypp macro `@:ALLOCATE(var1, var2)` for device-aware allocation instead of standard Fortran `allocate`
Use the Fypp macro `@:DEALLOCATE(var1, var2)` for device-aware deallocation instead of standard Fortran `deallocate`
Files:
- src/simulation/m_igr.fpp
- src/simulation/m_ibm.fpp
🧠 Learnings (9)
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Mark GPU-callable helpers with `$:GPU_ROUTINE(function_name='...', parallelism='[seq]')` immediately after declaration
Applied to files:
src/simulation/m_igr.fpp, src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:16.713Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.713Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Wrap tight GPU loops with !$acc parallel loop gang vector default(present) reduction(...); add collapse(n) when safe; declare loop-local variables with private(...)
Applied to files:
src/simulation/m_igr.fpp, src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Wrap tight loops with `$:GPU_PARALLEL_FOR(private='[...]', copy='[...]')` macro; add `collapse=n` for safe nested loop merging
Applied to files:
src/simulation/m_igr.fpp, src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:16.713Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.713Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Ensure GPU code compiles with Cray ftn, NVIDIA nvfortran, GNU gfortran, and Intel ifx/ifort compilers
Applied to files:
src/simulation/m_igr.fpp, src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:16.713Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.713Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Avoid stop/error stop inside GPU device code
Applied to files:
src/simulation/m_igr.fpp, src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Declare loop-local variables with `private='[...]'` in GPU parallel loop macros
Applied to files:
src/simulation/m_igr.fpp, src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:16.713Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.713Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Allocate large GPU arrays with managed memory or move them into persistent !$acc enter data regions at start-up
Applied to files:
src/simulation/m_igr.fpp, src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Allocate large arrays with `managed` or move them into a persistent `$:GPU_ENTER_DATA(...)` region at start-up
Applied to files:
src/simulation/m_igr.fpp
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Do not use OpenACC or OpenMP directives directly; use Fypp macros from `src/common/include/parallel_macros.fpp` instead
Applied to files:
src/simulation/m_igr.fpp
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: cubic · AI code reviewer
🔇 Additional comments (1)
src/simulation/m_ibm.fpp (1)
269-309: Index formulations for species volume fractions are correctly synchronized

The code correctly uses both index forms interchangeably: IGR writes to `q_cons_vf(E_idx + q)` while non-IGR writes to `q_prim_vf(advxb + q - 1)`, and since `advxb = E_idx + 1`, both expressions reference identical array positions. The conservative variable assignments stay in sync across all branches.
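The equivalence noted above can be checked mechanically; the concrete index value below is made up purely for illustration:

```python
# Illustrative check (index values hypothetical): with advxb = E_idx + 1,
# the IGR form E_idx + q and the non-IGR form advxb + q - 1 always
# address the same array slot.

E_idx = 5               # hypothetical position of the energy equation
advxb = E_idx + 1       # advection equations start right after energy

for q in range(1, 4):   # q ranges over the first num_fluids - 1 fluids
    assert E_idx + q == advxb + q - 1

print("index forms agree")  # prints: index forms agree
```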
4 issues found across 3 files
Prompt for AI agents (all 4 issues)
Check if these issues are valid — if so, understand the root cause of each and fix them.
<file name="src/simulation/m_ibm.fpp">
<violation number="1" location="src/simulation/m_ibm.fpp:162">
P2: GPU data mapping concern: `jac_sf` is accessed within the GPU parallel kernel but is not explicitly included in data clauses (copyin/copy/update). This could result in accessing host memory from device code or using stale data. Verify that `jac_sf%sf` is properly mapped to the device before this kernel executes.</violation>
<violation number="2" location="src/simulation/m_ibm.fpp:240">
P2: Inconsistent loop bounds: zeroing `jac` for IB markers uses `0..m, 0..n, 0..p` ranges, but copying to `jac_old` uses `idwbuff(...)%beg:...%end`. If `idwbuff` extends beyond the computational domain (e.g., includes ghost/halo cells), this may leave ghost regions of `jac_old` inconsistent after IB updates.</violation>
<violation number="3" location="src/simulation/m_ibm.fpp:291">
P1: The optional `q_cons_vf` argument is accessed without a `present(q_cons_vf)` guard when `igr` is true. If this subroutine is ever called with `igr` enabled but without passing `q_cons_vf`, this will cause undefined behavior. Consider adding `if (igr .and. .not. present(q_cons_vf))` error handling or restructuring the logic.</violation>
<violation number="4" location="src/simulation/m_ibm.fpp:982">
P1: Variables `rho_K`, `gamma_K`, `pi_inf_K`, and `qv_K` will be uninitialized when both `elasticity` and `igr` are true. The elasticity case is commented out, but these variables are still used in subsequent calculations (`max(rho_K, sgm_eps)`, division by `rho_K`, and `s_compute_pressure` call). Either uncomment and fix the elasticity call, or add a runtime check to prevent this code path.</violation>
</file>
Reply to cubic to teach it or ask questions. Re-run a review with @cubic-dev-ai review this PR
```fortran
! At all ghost points, use its image point to interpolate sigma
if (num_gps > 0) then
  $:GPU_PARALLEL_LOOP(private='[i, j, k, l, j1, j2, k1, k2, l1, l2, r, s, t, gp, coeff, jac_IP]')
```
P2: GPU data mapping concern: jac_sf is accessed within the GPU parallel kernel but is not explicitly included in data clauses (copyin/copy/update). This could result in accessing host memory from device code or using stale data. Verify that jac_sf%sf is properly mapped to the device before this kernel executes.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src/simulation/m_ibm.fpp, line 162:
<comment>GPU data mapping concern: `jac_sf` is accessed within the GPU parallel kernel but is not explicitly included in data clauses (copyin/copy/update). This could result in accessing host memory from device code or using stale data. Verify that `jac_sf%sf` is properly mapped to the device before this kernel executes.</comment>
<file context>
```diff
@@ -150,6 +150,44 @@ contains
+
+        ! At all ghost points, use its image point to interpolate sigma
+        if (num_gps > 0) then
+            $:GPU_PARALLEL_LOOP(private='[i, j, k, l, j1, j2, k1, k2, l1, l2, r, s, t, gp, coeff, jac_IP]')
+            do i = 1, num_gps
+                jac_IP = 0._wp
```
</file context>
✅ Addressed in 489bc7c
```fortran
if (.not. igr) then
  ! set the Moving IBM interior Pressure Values
  $:GPU_PARALLEL_LOOP(private='[i,j,k]', copyin='[E_idx]', collapse=3)
  do l = 0, p
```
P2: Inconsistent loop bounds: zeroing jac for IB markers uses 0..m, 0..n, 0..p ranges, but copying to jac_old uses idwbuff(...)%beg:...%end. If idwbuff extends beyond the computational domain (e.g., includes ghost/halo cells), this may leave ghost regions of jac_old inconsistent after IB updates.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src/simulation/m_ibm.fpp, line 240:
<comment>Inconsistent loop bounds: zeroing `jac` for IB markers uses `0..m, 0..n, 0..p` ranges, but copying to `jac_old` uses `idwbuff(...)%beg:...%end`. If `idwbuff` extends beyond the computational domain (e.g., includes ghost/halo cells), this may leave ghost regions of `jac_old` inconsistent after IB updates.</comment>
<file context>
```diff
@@ -196,18 +234,20 @@ contains
+        if (.not. igr) then
+            ! set the Moving IBM interior Pressure Values
+            $:GPU_PARALLEL_LOOP(private='[i,j,k]', copyin='[E_idx]', collapse=3)
+            do l = 0, p
+                do k = 0, n
+                    do j = 0, m
```
</file context>
```fortran
q_prim_vf(q)%sf(j, k, l) = alpha_rho_IP(q)
q_prim_vf(advxb + q - 1)%sf(j, k, l) = alpha_IP(q)
end do
if (igr) then
```
P1: The optional q_cons_vf argument is accessed without a present(q_cons_vf) guard when igr is true. If this subroutine is ever called with igr enabled but without passing q_cons_vf, this will cause undefined behavior. Consider adding if (igr .and. .not. present(q_cons_vf)) error handling or restructuring the logic.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src/simulation/m_ibm.fpp, line 291:
<comment>The optional `q_cons_vf` argument is accessed without a `present(q_cons_vf)` guard when `igr` is true. If this subroutine is ever called with `igr` enabled but without passing `q_cons_vf`, this will cause undefined behavior. Consider adding `if (igr .and. .not. present(q_cons_vf))` error handling or restructuring the logic.</comment>
<file context>
@@ -239,34 +279,52 @@ contains
- q_prim_vf(q)%sf(j, k, l) = alpha_rho_IP(q)
- q_prim_vf(advxb + q - 1)%sf(j, k, l) = alpha_IP(q)
- end do
+ if (igr) then
+ if (num_fluids == 1) then
+ q_cons_vf(1)%sf(j, k, l) = alpha_rho_IP(1)
</file context>
| ! call s_convert_species_to_mixture_variables_acc(rho_K, gamma_K, pi_inf_K, qv_K, alpha_K, & | ||
| ! alpha_rho_K, Re_K, G_K, Gs_vc) |
P1: Variables rho_K, gamma_K, pi_inf_K, and qv_K will be uninitialized when both elasticity and igr are true. The elasticity case is commented out, but these variables are still used in subsequent calculations (max(rho_K, sgm_eps), division by rho_K, and s_compute_pressure call). Either uncomment and fix the elasticity call, or add a runtime check to prevent this code path.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src/simulation/m_ibm.fpp, line 982:
<comment>Variables `rho_K`, `gamma_K`, `pi_inf_K`, and `qv_K` will be uninitialized when both `elasticity` and `igr` are true. The elasticity case is commented out, but these variables are still used in subsequent calculations (`max(rho_K, sgm_eps)`, division by `rho_K`, and `s_compute_pressure` call). Either uncomment and fix the elasticity call, or add a runtime check to prevent this code path.</comment>
<file context>
@@ -887,31 +958,96 @@ contains
+ end if
+ if (model_eqns /= 4) then
+ if (elasticity) then
+! call s_convert_species_to_mixture_variables_acc(rho_K, gamma_K, pi_inf_K, qv_K, alpha_K, &
+! alpha_rho_K, Re_K, G_K, Gs_vc)
+ else
</file context>
| ! call s_convert_species_to_mixture_variables_acc(rho_K, gamma_K, pi_inf_K, qv_K, alpha_K, & | |
| ! alpha_rho_K, Re_K, G_K, Gs_vc) | |
| call s_convert_species_to_mixture_variables_acc(rho_K, gamma_K, pi_inf_K, qv_K, alpha_K, & | |
| alpha_rho_K, Re_K, G_K, Gs) |
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/simulation/m_ibm.fpp (1)
898-927: Uninitialized mixture variables when `igr` and `elasticity` are both true

The elasticity branch in the IGR path of `s_interpolate_image_point` remains commented out (lines 988–989), leaving `rho_K`, `gamma_K`, `pi_inf_K`, and `qv_K` uninitialized when both `igr` and `elasticity` are enabled. These uninitialized variables are then used in:

- `rho_K = max(rho_K, sgm_eps)` (line 997)
- Dynamic pressure accumulation with division by `rho_K` (lines 1003–1006)
- The `s_compute_pressure` call passing `pi_inf_K`, `gamma_K`, `rho_K`, `qv_K` (lines 1008–1012)

This produces undefined behavior and garbage values for pressure and velocity at IBM ghost points.

Enable the elasticity path and declare the missing variables `G_K` (scalar) and `Gs` (array of size `num_fluids`) to match the pattern used in other routines like `m_sim_helpers.fpp`.
🧹 Nitpick comments (2)
src/simulation/m_ibm.fpp (2)
244-257: Moving-IBM interior pressure is now explicitly disabled for IGR

Gating the "moving IBM interior pressure" initialization with `if (.not. igr)` makes the current limitation (moving IBM + IGR not supported) explicit and prevents that heuristic from running under IGR. This is reasonable as an interim step; just ensure this limitation is documented or guarded elsewhere (e.g., in case setup/validator) so users don't assume full moving-IBM support with IGR.

898-905: Add guard for optional `q_cons_vf` when IGR is enabled

The subroutine `s_interpolate_image_point` declares `q_cons_vf` as optional but dereferences it unconditionally inside the `if (igr)` block without checking `present()`. While the only current IGR call site (`s_ibm_correct_state`) always supplies `q_cons_vf`, this is fragile. Future callers might invoke this routine under IGR without providing `q_cons_vf`, causing undefined behavior. Add a guard at the routine entry:

    if (igr .and. .not. present(q_cons_vf)) then
      call s_mpi_abort('s_interpolate_image_point: q_cons_vf must be present when igr is enabled')
    end if

This hardens the API boundary and prevents silent errors.
Also applies to: 970-987
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
src/simulation/m_ibm.fpp (11 hunks)
src/simulation/m_igr.fpp (2 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{fpp,f90}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
**/*.{fpp,f90}: Use 2-space indentation; continuation lines align beneath &
Use lower-case keywords and intrinsics (do, end subroutine, etc.)
Name modules with m_ pattern (e.g., m_transport)
Name public subroutines with s_ pattern (e.g., s_compute_flux)
Name public functions with f_ pattern
Keep subroutine size ≤ 500 lines, helper subroutines ≤ 150 lines, functions ≤ 100 lines, files ≤ 1000 lines
Limit routine arguments to ≤ 6; use derived-type params struct if more are needed
Forbid goto statements (except in legacy code), COMMON blocks, and save globals
Every argument must have explicit intent; use dimension/allocatable/pointer as appropriate
Call s_mpi_abort() for errors, never use stop or error stop
**/*.{fpp,f90}: Indent 2 spaces; continuation lines align under&
Use lower-case keywords and intrinsics (do,end subroutine, etc.)
Name modules withm_<feature>prefix (e.g.,m_transport)
Name public subroutines ass_<verb>_<noun>(e.g.,s_compute_flux) and functions asf_<verb>_<noun>
Keep private helpers in the module; avoid nested procedures
Enforce size limits: subroutine ≤ 500 lines, helper ≤ 150, function ≤ 100, module/file ≤ 1000
Limit subroutines to ≤ 6 arguments; otherwise pass a derived-type 'params' struct
Avoidgotostatements (except unavoidable legacy); avoid global state (COMMON,save)
Every variable must haveintent(in|out|inout)specification and appropriatedimension/allocatable/pointer
Uses_mpi_abort(<msg>)for error termination instead ofstop
Use!>style documentation for header comments; follow Doxygen Fortran format with!! @paramand!! @returntags
Useimplicit nonestatement in all modules
Useprivatedeclaration followed by explicitpublicexports in modules
Use derived types with pointers for encapsulation (e.g.,pointer, dimension(:,:,:) => null())
Usepureandelementalattributes for side-effect-free functions; combine them for array ...
Files:
src/simulation/m_igr.fpp, src/simulation/m_ibm.fpp
src/simulation/**/*.{fpp,f90}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
src/simulation/**/*.{fpp,f90}: Wrap tight GPU loops with !$acc parallel loop gang vector default(present) reduction(...); add collapse(n) when safe; declare loop-local variables with private(...)
Allocate large GPU arrays with managed memory or move them into persistent !$acc enter data regions at start-up
Avoid stop/error stop inside GPU device code
Ensure GPU code compiles with Cray ftn, NVIDIA nvfortran, GNU gfortran, and Intel ifx/ifort compilers
src/simulation/**/*.{fpp,f90}: Mark GPU-callable helpers with$:GPU_ROUTINE(function_name='...', parallelism='[seq]')immediately after declaration
Do not use OpenACC or OpenMP directives directly; use Fypp macros fromsrc/common/include/parallel_macros.fppinstead
Wrap tight loops with$:GPU_PARALLEL_FOR(private='[...]', copy='[...]')macro; addcollapse=nfor safe nested loop merging
Declare loop-local variables withprivate='[...]'in GPU parallel loop macros
Allocate large arrays withmanagedor move them into a persistent$:GPU_ENTER_DATA(...)region at start-up
Do not placestoporerror stopinside device code
Files:
src/simulation/m_igr.fpp, src/simulation/m_ibm.fpp
src/**/*.fpp
📄 CodeRabbit inference engine (.cursor/rules/mfc-agent-rules.mdc)
src/**/*.fpp: Use.fppfile extension for Fypp preprocessed files; CMake transpiles them to.f90
Start module files with Fypp include for macros:#:include 'macros.fpp'
Use the fyppASSERTmacro for validating conditions:@:ASSERT(predicate, message)
Use fypp macro@:ALLOCATE(var1, var2)for device-aware allocation instead of standard Fortranallocate
Use fypp macro@:DEALLOCATE(var1, var2)for device-aware deallocation instead of standard Fortrandeallocate
Files:
src/simulation/m_igr.fpp, src/simulation/m_ibm.fpp
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: cubic · AI code reviewer
🔇 Additional comments (2)
src/simulation/m_igr.fpp (1)
373-398: IBM post-processing of `jac` and `jac_old` looks consistent

Zeroing `jac` inside IBM-marked cells over `0:m`, `0:n`, `0:p`, then calling `s_update_igr(jac_sf)` and finally re-syncing `jac_old` for Jacobi (`igr_iter_solver == 1`) matches the allocation/usage patterns of `jac`/`jac_old` and the IBM ghost-point interpolation. No further issues spotted in this block.
src/simulation/m_ibm.fpp (1)
289-307: IGR-specific conservative updates in `s_ibm_correct_state` look coherent

For `igr` runs:
- The image-point interpolation now fills `alpha_rho_IP`/`alpha_IP` via the IGR path in `s_interpolate_image_point`.
- You correctly map those into `q_cons_vf` (single- and multi-fluid) at the ghost points, instead of going through `q_prim_vf`.
- The later `.not. igr` block that resets continuity and advective variables is skipped, so the IGR-consistent values aren't overwritten.

This matches the new conservative-variable-based design; no issues spotted.
Also applies to: 390-397
src/simulation/m_ibm.fpp
Outdated

    !> Interpolates sigma from the m_igr module at all ghost points
    !! @param jac_sf Sigma, Entropic pressure
    subroutine s_update_igr(jac_sf)
      type(scalar_field), dimension(1), intent(inout) :: jac_sf
      integer :: j, k, l, r, s, t, i
      integer :: j1, j2, k1, k2, l1, l2
      real(wp) :: coeff, jac_IP
      type(ghost_point) :: gp

      ! At all ghost points, use its image point to interpolate sigma
      if (num_gps > 0) then
        $:GPU_PARALLEL_LOOP(private='[i, j, k, l, j1, j2, k1, k2, l1, l2, r, s, t, gp, coeff, jac_IP]')
        do i = 1, num_gps
          jac_IP = 0._wp
          gp = ghost_points(i)
          r = gp%loc(1)
          s = gp%loc(2)
          t = gp%loc(3)

          j1 = gp%ip_grid(1); j2 = j1 + 1
          k1 = gp%ip_grid(2); k2 = k1 + 1
          l1 = gp%ip_grid(3); l2 = l1 + 1

          if (p == 0) then
            l1 = 0
            l2 = 0
          end if

          $:GPU_LOOP(parallelism='[seq]')
          do l = l1, l2
            $:GPU_LOOP(parallelism='[seq]')
            do k = k1, k2
              $:GPU_LOOP(parallelism='[seq]')
              do j = j1, j2
                coeff = gp%interp_coeffs(j - j1 + 1, k - k1 + 1, l - l1 + 1)
                jac_IP = jac_IP + coeff*jac_sf(1)%sf(j, k, l)
              end do
            end do
          end do
          jac_sf(1)%sf(r, s, t) = jac_IP
        end do
        $:END_GPU_PARALLEL_LOOP()
      end if
    end subroutine s_update_igr
Tighten 2D handling in s_update_igr to avoid reading an unused ip_grid(3)
The new 2D guard (if (p == 0) then l1 = 0; l2 = 0) correctly prevents looping over an undefined k-stencil, but gp%ip_grid(3) is still read unconditionally before being overridden. In 2D runs, that component is never set by s_compute_image_points, so this is technically an uninitialized read even though the value is not used to form loop bounds.
You can avoid touching ip_grid(3) entirely when p == 0:
Suggested change
- j1 = gp%ip_grid(1); j2 = j1 + 1
- k1 = gp%ip_grid(2); k2 = k1 + 1
- l1 = gp%ip_grid(3); l2 = l1 + 1
-
- if (p == 0) then
- l1 = 0
- l2 = 0
- end if
+ j1 = gp%ip_grid(1); j2 = j1 + 1
+ k1 = gp%ip_grid(2); k2 = k1 + 1
+ if (p == 0) then
+ l1 = 0
+ l2 = 0
+ else
+ l1 = gp%ip_grid(3)
+ l2 = l1 + 1
+ end if🤖 Prompt for AI Agents
In src/simulation/m_ibm.fpp around lines 153-196, the code reads gp%ip_grid(3)
unconditionally which is uninitialized in 2D (p==0); change the order so
ip_grid(1) and ip_grid(2) are read as now, but only read gp%ip_grid(3) and set
l1,l2 from it when p /= 0; when p == 0 set l1 = 0 and l2 = 0 without touching
ip_grid(3). Ensure any uses of l1/l2 (loops and interp_coeffs indexing) still
work by guaranteeing l1/l2 are defined in both branches.
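The stencil logic under review can be sketched outside Fortran. The following Python snippet is an illustration only (not MFC code): it mirrors the coefficient-weighted 2x2x2 sum that `s_update_igr` performs at each image point, with the 2D case collapsing the third axis to a single layer, analogous to the `if (p == 0)` guard. All names here are hypothetical.

```python
def interpolate_at_image_point(field, ip_grid, coeffs, two_d=False):
    """field: 3D nested list; ip_grid: (j1, k1, l1) lower corner of the stencil;
    coeffs: 2x2x2 (or 2x2x1) interpolation weights that sum to 1."""
    j1, k1, l1 = ip_grid
    if two_d:
        l_range = [0]            # mirror of: if (p == 0) then l1 = 0; l2 = 0
    else:
        l_range = [l1, l1 + 1]
    value = 0.0
    for li, l in enumerate(l_range):
        for ki, k in enumerate([k1, k1 + 1]):
            for ji, j in enumerate([j1, j1 + 1]):
                # weight each stencil corner by its interpolation coefficient
                value += coeffs[ji][ki][li] * field[j][k][l]
    return value

# Uniform weights over a 2x2x1 stencil in 2D reproduce the stencil average.
field = [[[float(10*j + k)] for k in range(4)] for j in range(4)]
coeffs = [[[0.25], [0.25]], [[0.25], [0.25]]]
print(interpolate_at_image_point(field, (1, 1, 0), coeffs, two_d=True))  # → 16.5
```

With non-uniform coefficients this becomes standard bilinear/trilinear interpolation; the point of the 2D branch is that the third stencil index is never consulted when the grid has no third dimension.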
src/simulation/m_igr.fpp
Outdated

    use m_boundary_common

    use m_ibm, only: ib, ib_markers, s_update_igr
Fix use m_ibm ONLY-list: ib is not exported here and likely conflicts with m_global_parameters
m_ibm (as shown) does not declare an ib symbol, while m_igr already has ib via use m_global_parameters. Importing ib from m_ibm will either fail to compile or create an ambiguous symbol if ib is also exported from m_global_parameters.
Rely on ib from m_global_parameters and restrict the m_ibm import to the IBM-specific data and routine:
Suggested change
- use m_ibm, only: ib, ib_markers, s_update_igr
+ use m_ibm, only: ib_markers, s_update_igr🤖 Prompt for AI Agents
In src/simulation/m_igr.fpp around line 18, the "use m_ibm, only: ib,
ib_markers, s_update_igr" clause is wrong because m_ibm does not export "ib"
(and "ib" is already provided by m_global_parameters); remove "ib" from the
only-list so you only import the IBM-specific symbols (e.g., "ib_markers,
s_update_igr") and rely on the existing "ib" from m_global_parameters to avoid
compile errors or symbol ambiguity.
Codecov Report

❌ Patch coverage is

Additional details and impacted files

@@ Coverage Diff @@
## master #1095 +/- ##
==========================================
- Coverage 44.08% 43.84% -0.25%
==========================================
Files 71 71
Lines 20332 20364 +32
Branches 1981 1991 +10
==========================================
- Hits 8963 8928 -35
- Misses 10236 10296 +60
- Partials 1133 1140 +7

☔ View full report in Codecov by Sentry.
src/simulation/m_ibm.fpp
Outdated
| !> Interpolates sigma from the m_igr module at all ghost points | ||
| !! @param jac_sf Sigma, Entropic pressure | ||
| subroutine s_update_igr(jac_sf) | ||
| type(scalar_field), dimension(1), intent(inout) :: jac_sf |
The Frontier error might go away if you replace `jac_sf` here with the non-pointer variable `jac` from `m_igr`.
src/simulation/m_igr.fpp
Outdated
| end do | ||
| end do | ||
| $:END_GPU_PARALLEL_LOOP() | ||
| call s_update_igr(jac_sf) |
See the previous comment. Passing `jac` rather than `jac_sf` is probably preferred; I only made the `jac_sf` variable so that I could reuse MPI routines elsewhere in this module.
ok, got it
@Cowsreal It would be nice to see a convergence result for IGR vs WENO, i.e., run the supersonic flow over a cylinder case at a moderately high resolution with WENO5, then run IGR at three or four resolutions approaching that resolution, and compare the surface pressure along the same axis. Depending on how automated/quick this is, it might be good to see the same for the airfoil (or do the airfoil instead; since they're NACA profiles, you could compute the pressure lift/drag and compare it to published values).

Yes, ok, that would be cool to find a real study and compare it. I'll try to do that.
PR Reviewer Guide 🔍

Here are some key observations to aid the review process:
High-level Suggestion
The PR currently omits pressure calculations for moving immersed boundaries when IGR is enabled, marked by a !TEMPORARY comment. This functionality should be implemented to complete the feature. [High-level, importance: 8]
Solution Walkthrough:
Before:
subroutine s_ibm_correct_state(...)
...
! !TEMPORARY, NEED TO FIX FOR MOVING IGR
if (.not. igr) then
if (patch_ib(patch_id)%moving_ibm <= 1) then
! Set pressure for static IBM
q_prim_vf(E_idx)%sf(j, k, l) = pres_IP
else
! Set pressure for moving IBM
...
end if
end if
...
! For IGR, energy is set later from interpolated values,
! but without the moving boundary pressure correction.
q_cons_vf(E_idx)%sf(j, k, l) = gamma*pres_IP + pi_inf + dyn_pres
...
end subroutine
After:
subroutine s_ibm_correct_state(...)
...
if (.not. igr) then
if (patch_ib(patch_id)%moving_ibm <= 1) then
q_prim_vf(E_idx)%sf(j, k, l) = pres_IP
else
! Set pressure for moving IBM
...
end if
else ! Handle IGR case
if (patch_ib(patch_id)%moving_ibm > 1) then
! Implement moving boundary pressure correction for IGR
! This might involve adjusting pres_IP before it's used
! to calculate the energy for the ghost point.
pres_IP = pres_IP / (1._wp - ... )
end if
end if
...
q_cons_vf(E_idx)%sf(j, k, l) = gamma*pres_IP + pi_inf + dyn_pres
...
end subroutine
| if (p == 0) then | ||
| l1 = 0 | ||
| l2 = 0 | ||
| end if |
Suggestion: In s_update_igr, import the global parameter p from m_global_parameters to fix a compilation error. [possible issue, importance: 9]
| if (p == 0) then | |
| l1 = 0 | |
| l2 = 0 | |
| end if | |
| subroutine s_update_igr(jac_sf) | |
| use m_global_parameters, only: p, wp | |
| type(scalar_field), dimension(1), intent(inout) :: jac_sf | |
| integer :: j, k, l, r, s, t, i | |
| integer :: j1, j2, k1, k2, l1, l2 | |
| real(wp) :: coeff, jac_IP | |
| type(ghost_point) :: gp | |
| ! At all ghost points, use its image point to interpolate sigma | |
| if (num_gps > 0) then | |
| $:GPU_PARALLEL_LOOP(private='[i, j, k, l, j1, j2, k1, k2, l1, l2, r, s, t, gp, coeff, jac_IP]') | |
| do i = 1, num_gps | |
| jac_IP = 0._wp | |
| gp = ghost_points(i) | |
| r = gp%loc(1) | |
| s = gp%loc(2) | |
| t = gp%loc(3) | |
| j1 = gp%ip_grid(1); j2 = j1 + 1 | |
| k1 = gp%ip_grid(2); k2 = k1 + 1 | |
| l1 = gp%ip_grid(3); l2 = l1 + 1 | |
| if (p == 0) then | |
| l2 = l1 | |
| end if | |
| $:GPU_LOOP(parallelism='[seq]') | |
| do l = l1, l2 | |
| $:GPU_LOOP(parallelism='[seq]') | |
| do k = k1, k2 | |
| $:GPU_LOOP(parallelism='[seq]') | |
| do j = j1, j2 | |
| coeff = gp%interp_coeffs(j - j1 + 1, k - k1 + 1, l - l1 + 1) | |
| jac_IP = jac_IP + coeff*jac_sf(1)%sf(j, k, l) | |
| end do | |
| end do | |
| end do | |
| jac_sf(1)%sf(r, s, t) = jac_IP | |
| end do | |
| $:END_GPU_PARALLEL_LOOP() | |
| end if | |
| end subroutine s_update_igr |

    if (ib) then
      $:GPU_PARALLEL_LOOP(private='[j,k,l]', collapse=3)
      do l = 0, p
        do k = 0, n
          do j = 0, m
            if (ib_markers%sf(j, k, l) /= 0) then
              jac(j, k, l) = 0._wp
            end if
          end do
        end do
      end do
      $:END_GPU_PARALLEL_LOOP()
      call s_update_igr(jac_sf)
      ! If using Jacobi Iter, we need to update jac_old again
      if (igr_iter_solver == 1) then
        $:GPU_PARALLEL_LOOP(private='[j,k,l]', collapse=3)
        do l = idwbuff(3)%beg, idwbuff(3)%end
          do k = idwbuff(2)%beg, idwbuff(2)%end
            do j = idwbuff(1)%beg, idwbuff(1)%end
              jac_old(j, k, l) = jac(j, k, l)
            end do
          end do
        end do
        $:END_GPU_PARALLEL_LOOP()
      end if
    end if
Suggestion: In s_igr_iterative_solve, when updating jac_old for the Jacobi solver, use loop bounds 0:m, 0:n, 0:p to match the bounds used when modifying jac, ensuring consistency. [general, importance: 6]
| if (ib) then | |
| $:GPU_PARALLEL_LOOP(private='[j,k,l]', collapse=3) | |
| do l = 0, p | |
| do k = 0, n | |
| do j = 0, m | |
| if (ib_markers%sf(j, k, l) /= 0) then | |
| jac(j, k, l) = 0._wp | |
| end if | |
| end do | |
| end do | |
| end do | |
| $:END_GPU_PARALLEL_LOOP() | |
| call s_update_igr(jac_sf) | |
| ! If using Jacobi Iter, we need to update jac_old again | |
| if (igr_iter_solver == 1) then | |
| $:GPU_PARALLEL_LOOP(private='[j,k,l]', collapse=3) | |
| do l = idwbuff(3)%beg, idwbuff(3)%end | |
| do k = idwbuff(2)%beg, idwbuff(2)%end | |
| do j = idwbuff(1)%beg, idwbuff(1)%end | |
| jac_old(j, k, l) = jac(j, k, l) | |
| end do | |
| end do | |
| end do | |
| $:END_GPU_PARALLEL_LOOP() | |
| end if | |
| end if | |
| if (ib) then | |
| $:GPU_PARALLEL_LOOP(private='[j,k,l]', collapse=3) | |
| do l = 0, p | |
| do k = 0, n | |
| do j = 0, m | |
| if (ib_markers%sf(j, k, l) /= 0) then | |
| jac(j, k, l) = 0._wp | |
| end if | |
| end do | |
| end do | |
| end do | |
| $:END_GPU_PARALLEL_LOOP() | |
| call s_update_igr(jac_sf) | |
| ! If using Jacobi Iter, we need to update jac_old again | |
| if (igr_iter_solver == 1) then | |
| !$acc parallel loop collapse(3) private(j,k,l) copyin(jac) copyout(jac_old) | |
| do l = 0, p | |
| do k = 0, n | |
| do j = 0, m | |
| jac_old(j, k, l) = jac(j, k, l) | |
| end do | |
| end do | |
| end do | |
| end if | |
| end if |
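The reason the `jac_old` re-sync matters can be seen in a toy version of the pattern. This Python sketch is an illustration only (not MFC code, all names hypothetical): a Jacobi solver reads only the previous-iterate buffer, so any values patched into the current iterate after the solve (here, zeroed "immersed" cells) must be copied into the old buffer too, or the next solve starts from stale data.

```python
def jacobi_step(old):
    # toy 1D Jacobi update with fixed endpoint values
    return [old[0]] + [0.5*(old[i-1] + old[i+1]) for i in range(1, len(old)-1)] + [old[-1]]

jac = [0.0, 1.0, 2.0, 3.0, 4.0]
jac = jacobi_step(jac)

# post-solve patch, mirroring "jac = 0 where ib_markers /= 0"
masked = {2}
for i in masked:
    jac[i] = 0.0

# without this copy, the next jacobi_step(jac_old) would still see the
# unmasked value at index 2
jac_old = list(jac)
print(jac_old[2])  # → 0.0
```

Whether the copy loop should run over `0:m, 0:n, 0:p` or over the full `idwbuff` extents depends on whether the halo region of `jac_old` is refreshed elsewhere (e.g., by a subsequent buffer exchange), which is exactly the question the suggestion above raises.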
src/simulation/m_ibm.fpp
Outdated
| !> Interpolates sigma from the m_igr module at all ghost points | ||
| !! @param jac_sf Sigma, Entropic pressure | ||
| subroutine s_update_igr(jac_sf) | ||
| type(scalar_field), dimension(1), intent(inout) :: jac_sf |
✅ Suggestion: In s_update_igr, declare the dummy argument jac_sf as an assumed-shape array (dimension(:)) instead of a fixed-size array (dimension(1)). [possible issue, importance: 7]
| type(scalar_field), dimension(1), intent(inout) :: jac_sf | |
| type(scalar_field), dimension(:), intent(inout) :: jac_sf |
| if (igr) then | ||
| ! For IGR, we will need to perform operations on | ||
| ! the conservative variables instead |
Suggestion: In s_interpolate_image_point, add a check to ensure q_cons_vf is present before using it inside the if (igr) block, as it is an optional argument. [possible issue, importance: 8]
| if (igr) then | |
| ! For IGR, we will need to perform operations on | |
| ! the conservative variables instead | |
| if (igr) then | |
| if (.not. present(q_cons_vf)) then | |
| stop 'q_cons_vf is required for IGR interpolation' | |
| end if | |
| ! For IGR, we will need to perform operations on | |
| ! the conservative variables instead |
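The guard being suggested is the Fortran `present()` idiom; the same fail-fast pattern looks like this in Python (an analogy only, not MFC code, names hypothetical): when a mode requires an argument the caller did not supply, raise a clear error up front instead of dereferencing it and crashing later.

```python
def interpolate_image_point(q_prim_vf, igr=False, q_cons_vf=None):
    # analog of: if (igr .and. .not. present(q_cons_vf)) then call s_mpi_abort(...)
    if igr and q_cons_vf is None:
        raise ValueError("q_cons_vf is required when igr is enabled")
    # IGR operates on conservative variables, otherwise primitive ones
    source = q_cons_vf if igr else q_prim_vf
    return sum(source) / len(source)

print(interpolate_image_point([1.0, 3.0]))                                  # non-IGR path
print(interpolate_image_point([1.0, 3.0], igr=True, q_cons_vf=[2.0, 6.0]))  # IGR path
```

In the Fortran code the equivalent check would sit on the host side (never `stop` inside device code, per the repository rules), either at the top of the subroutine or at the call sites.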
CodeAnt AI is running an incremental review.
Actionable comments posted: 0
♻️ Duplicate comments (3)
src/simulation/m_ibm.fpp (2)
153-196: Avoid reading unusedip_grid(3)in 2D ins_interpolate_sigma_igrIn 2D (
p == 0),gp%ip_grid(3)is not populated bys_compute_image_points, yet it’s still read before being overridden:l1 = gp%ip_grid(3); l2 = l1 + 1 if (p == 0) then l1 = 0 l2 = 0 end ifRestructure to only access
ip_grid(3)whenp /= 0:Proposed change
- j1 = gp%ip_grid(1); j2 = j1 + 1 - k1 = gp%ip_grid(2); k2 = k1 + 1 - l1 = gp%ip_grid(3); l2 = l1 + 1 - - if (p == 0) then - l1 = 0 - l2 = 0 - end if + j1 = gp%ip_grid(1); j2 = j1 + 1 + k1 = gp%ip_grid(2); k2 = k1 + 1 + if (p == 0) then + l1 = 0 + l2 = 0 + else + l1 = gp%ip_grid(3) + l2 = l1 + 1 + end ifThis keeps the current behavior while eliminating an unnecessary read of a potentially undefined component in 2D.
898-905: IGR path ins_interpolate_image_point: uninitialized mixture vars with elasticity, and brittle use of optionalq_cons_vfTwo issues in the new IGR branch need tightening:
Elasticity + IGR leaves mixture variables uninitialized (real bug)
In theigrblock:if (model_eqns /= 4) then if (elasticity) then! call s_convert_species_to_mixture_variables_acc(...)
else
call s_convert_species_to_mixture_variables_acc(rho_K, gamma_K, pi_inf_K, qv_K, &
alpha_K, alpha_rho_K, Re_K)
end if
end ifrho_K = max(rho_K, sgm_eps)
...
call s_compute_pressure(..., pi_inf_K, gamma_K, rho_K, qv_K, ...)When `elasticity` is true, the conversion call is commented out, so `rho_K`, `gamma_K`, `pi_inf_K`, and `qv_K` are never set before use. This is undefined behavior for any elastic IGR case. You likely want to mirror the non‑IGR path and actually call the elastic variant here, e.g.: <details> <summary>Suggested fix for elasticity branch</summary> ```diff - if (model_eqns /= 4) then - if (elasticity) then -! call s_convert_species_to_mixture_variables_acc(rho_K, gamma_K, pi_inf_K, qv_K, alpha_K, & -! alpha_rho_K, Re_K, G_K, Gs_vc) - else - call s_convert_species_to_mixture_variables_acc(rho_K, gamma_K, pi_inf_K, qv_K, & - alpha_K, alpha_rho_K, Re_K) - end if - end if + if (model_eqns /= 4) then + if (elasticity) then + call s_convert_species_to_mixture_variables_acc(rho_K, gamma_K, pi_inf_K, qv_K, & + alpha_K, alpha_rho_K, Re_K, G_K, Gs) + else + call s_convert_species_to_mixture_variables_acc(rho_K, gamma_K, pi_inf_K, qv_K, & + alpha_K, alpha_rho_K, Re_K) + end if + end if(Adjust the final arguments to match the actual elastic variant’s signature used elsewhere in this module.)
Optional
q_cons_vfis dereferenced unconditionally whenigris true
Theigrbranch assumesq_cons_vfis present:if (igr) then ... alpha_rho_K(1) = q_cons_vf(1)%sf(i, j, k) ... q_cons_vf(E_idx + l)%sf(i, j, k) ... end ifbut
q_cons_vfis declaredoptional. Right now,s_ibm_correct_statepassesq_cons_vfonly in theelse if (igr)branch; other call sites (bubbles/qbmm) do not. If IGR is ever combined with those modes, this routine will execute withigr == .true.and an absentq_cons_vf, causing undefined behavior.To make this robust:
- Either make
q_cons_vfa required argument (simplest and safest given all current usages are in IBM/IGR), or- Add an explicit guard and treat missing
q_cons_vfas a configuration error, e.g. check at the caller level (host side) thatigrimpliesq_cons_vfis supplied before entering the GPU loop.In any case, relying on a global
igrflag with an optional argument here is fragile and easy to break with future call sites.Also applies to: 921-927, 970-1053
src/simulation/m_igr.fpp (1)
18-18: Fixuse m_ibmONLY-list to avoid invalid/ambiguousibimport
m_ibmdoes not exportib, and this module already getsibfromm_global_parameters. Keepingibin the ONLY-list will either fail compilation or create ambiguity. Restrict the import to IBM-specific symbols:Proposed change
- use m_ibm, only: ib, ib_markers, s_interpolate_sigma_igr + use m_ibm, only: ib_markers, s_interpolate_sigma_igr
🧹 Nitpick comments (1)

src/simulation/m_ibm.fpp (1)

289-309: IGR conservative updates at ghost points look consistent; moving-IB+IGR remains a known TODO

- The new `else if (igr)` branch routes image-point interpolation through the conservative `q_cons_vf` path and writes back `alpha_rho_IP`/`alpha_IP` into `q_cons_vf` at ghost points. This avoids reusing the primitive-variable path and is coherent with how `q_cons_vf(E_idx)` is formed later.
- The `.not. igr` guards around primitive pressure assignment and continuity/adv updates prevent double-writing in the IGR case.

The `!TEMPORARY, NEED TO FIX FOR MOVING IGR` comment still signals that moving immersed boundaries with IGR are not fully supported; that's an existing limitation rather than a new regression, but worth keeping documented in user-facing docs or validators.

Also applies to: 390-397, 322-335
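The image-point path this comment reviews reduces, conceptually, to sampling the fluid solution at a point mirrored across the boundary and writing it back to the ghost cell. A bilinear 2D sketch in NumPy (toy stencil and unit grid spacing — a stand-in for the MFC interpolation coefficients, not the actual code):

```python
import numpy as np

def interpolate_image_point(field, x, y):
    # Bilinear interpolation of a cell-centered field at (x, y),
    # assuming unit grid spacing with field[i, j] located at (i, j).
    i, j = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - i, y - j
    return ((1 - fx) * (1 - fy) * field[i, j]
            + fx * (1 - fy) * field[i + 1, j]
            + (1 - fx) * fy * field[i, j + 1]
            + fx * fy * field[i + 1, j + 1])

# The ghost cell takes the value sampled at its image point in the fluid.
rho = np.arange(16.0).reshape(4, 4)            # toy conservative field
print(interpolate_image_point(rho, 1.5, 2.5))  # 8.5
```

In the conservative IGR path, the same weights are applied to each `q_cons_vf` component rather than to primitive variables.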
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- `src/simulation/m_ibm.fpp` (11 hunks)
- `src/simulation/m_igr.fpp` (2 hunks)
🔇 Additional comments (2)
src/simulation/m_igr.fpp (1)

373-398: IBM-IGR post-solve integration looks consistent

Zeroing `jac` at IB-marked cells, then calling `s_interpolate_sigma_igr(jac)` and refreshing `jac_old` for Jacobi in GPU-parallel loops is consistent with existing patterns (correct privates, collapse, and domains). This is a reasonable place in the iteration loop to apply the IBM correction.

Please sanity-check a 2D IBM+IGR case (p = 0) to confirm `s_interpolate_sigma_igr` gives expected ghost-point sigma after this block.

src/simulation/m_ibm.fpp (1)

244-257: GPU loop for non-IGR interior pressure initialization is now well-formed

The `.not. igr` GPU loop correctly marks IB cells in `q_prim_vf(E_idx)` and now lists all loop indices (j, k, l) as private, matching the project's GPU macro pattern and avoiding races.
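The post-solve sequence reviewed above — zero `jac` at IB-marked cells, re-fill those cells from fluid data, then refresh `jac_old` for the Jacobi sweep — can be sketched on a 1D toy grid. The neighbor average is a crude stand-in for `s_interpolate_sigma_igr`, and the marker array for `ib_markers`:

```python
import numpy as np

def apply_ib_correction(jac, markers):
    # Zero jac at immersed-boundary cells, then rebuild those cells
    # from fluid neighbors (a crude stand-in for image-point interpolation).
    jac = jac.copy()
    jac[markers != 0] = 0.0                      # zero at IB-marked cells
    for i in np.flatnonzero(markers):            # ghost-cell fill
        nbrs = [jac[j] for j in (i - 1, i + 1)
                if 0 <= j < jac.size and markers[j] == 0]
        if nbrs:
            jac[i] = sum(nbrs) / len(nbrs)
    return jac

jac = np.array([1.0, 2.0, 5.0, 3.0, 4.0])
markers = np.array([0, 0, 1, 0, 0])              # one IB cell in the middle
jac = apply_ib_correction(jac, markers)
jac_old = jac.copy()                             # refresh for the Jacobi sweep
print(jac)  # [1.  2.  2.5 3.  4. ]
```

Copying into `jac_old` only after the IB correction is what keeps the Jacobi iterate consistent with the corrected ghost values.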
|
the results may look the same but we need to be sure (1) it's taking the IGR code path (not WENO). are the results identical? and (2) we need to see it working for a 'real' case. This case has a discontinuity in the initial condition at the airfoil location that causes spurious artifacts, it seems. we should see real vortex shedding
|
the AI comments seem relevant, or at least several of them, as well |
User description
Description
Added immersed boundaries compatibility with IGR. I deleted the relevant prohibition inside the Python case checker.
Type of change
Please delete options that are not relevant.
Scope
If you cannot check the above box, please split your PR into multiple PRs that each have a common goal.
How Has This Been Tested?
Please describe the tests that you ran to verify your changes.
Provide instructions so we can reproduce.
Please also list any relevant details for your test configuration
Plots of mean difference between IGR5 and WENO5 on airfoil contour
Figure: This shows the average of pressure differences over the airfoil contour for all time. (i.e., (mean(p1(s) - p2(s)))(t))

Figure: This shows the average of density differences over the airfoil contour for all time. (i.e., (mean(rho1(s) - rho2(s)))(t))

Movies of pressure profile around airfoil contour
IGR5 results
presContour.mp4
WENO5 results
presContour.mp4
Movies of pressure
Airfoil, IGR5
airfoil_500_500.mp4
Airfoil, WENO5
airfoil_500_500.mp4
Square and Circle have viscosity with Re = 25000
Circle, IGR5
pres.mp4
Circle, WENO5
pres.mp4
Square, IGR5
pres.mp4
Square, WENO5
pres.mp4
Test Configuration:
Checklist
`./mfc.sh format` before committing my code

If your code changes any code source files (anything in `src/simulation`)

To make sure the code is performing as expected on GPU devices, I have:
PR Type
Enhancement
Description
Integrate immersed boundary method with IGR solver
Add `s_update_igr` subroutine for ghost point interpolation

Modify state correction to handle IGR-specific variable updates
Remove IBM-IGR incompatibility restriction from validator
Diagram Walkthrough
File Walkthrough
m_ibm.fpp
Integrate IBM with IGR solver capabilities — src/simulation/m_ibm.fpp

`s_update_igr` subroutine to interpolate sigma at ghost points using image point data

`s_ibm_correct_state` to conditionally skip interior pressure initialization when IGR is enabled

for IGR instead of primitive variables

variables at ghost points

calculations when IGR is active

`s_interpolate_image_point` signature to accept optional conservative variables parameter
conservative variables and mixture properties
m_igr.fpp
Integrate IBM updates into IGR solver loop — src/simulation/m_igr.fpp

`use m_ibm` module import for IBM integration

`s_update_igr`

`jac_old` for Jacobi iteration method when IBM is active
case_validator.py
Remove IGR-IBM incompatibility restriction — toolchain/mfc/case_validator.py

`ib` and `igr` options
CodeAnt-AI Description
Support immersed boundary method (IBM) when running the IGR solver
What Changed
Impact
✅ Run IGR on cases with immersed boundaries
✅ Correct pressure and velocity at immersed ghost points
✅ Fewer pre-run configuration blocks for IB+IGR
PR Type
Enhancement
Description
Integrate immersed boundaries (IBM) with IGR method
Add sigma interpolation at ghost points via `s_update_igr`

Conditionally handle IBM state updates based on IGR flag
Remove IGR-IBM incompatibility restriction from validator
Diagram Walkthrough
File Walkthrough
m_ibm.fpp
IBM-IGR integration with sigma interpolation — src/simulation/m_ibm.fpp

`s_update_igr` subroutine to interpolate sigma at ghost points using image point coefficients

`s_ibm_correct_state` to conditionally skip interior pressure initialization when IGR is enabled

`s_interpolate_image_point` to accept optional conservative variables and handle IGR-specific interpolation logic

fractions at ghost points

conditionals

`q_cons_vf` parameter from "Primitive Variables" to "Conservative Variables"
m_igr.fpp
IGR solver integration with IBM ghost points — src/simulation/m_igr.fpp

`m_ibm`) with `ib`, `ib_markers`, and `s_update_igr` loop

`s_update_igr`

`jac_old` after sigma interpolation when using Jacobi iteration solver
case_validator.py
Remove IGR-IBM incompatibility restriction — toolchain/mfc/case_validator.py

boundary method

`ib` variable declaration and associated validation check
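On the toolchain side, lifting the prohibition amounts to deleting one predicate from the case checker. A minimal sketch (hypothetical `check_case` function and rule text, not the actual `case_validator.py` code):

```python
def check_case(params):
    # Collect configuration errors; the ib+igr prohibition has been
    # deleted, so only rules that still apply remain.
    errors = []
    # previously: if params.get("ib") and params.get("igr"):
    #     errors.append("ib is incompatible with igr")
    if params.get("igr") and params.get("weno_order"):
        errors.append("igr does not use a WENO order")  # hypothetical rule
    return errors

print(check_case({"ib": True, "igr": True}))  # [] -- now allowed
```

Cases that set both `ib` and `igr` now pass validation and reach the solver, which is what the rest of this PR makes safe.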