diff --git a/.github/CODEOWNERS b/.github/CODEOWNERS index 6d1bcee2db..d4a33ea46a 100644 --- a/.github/CODEOWNERS +++ b/.github/CODEOWNERS @@ -73,8 +73,8 @@ src/EXTRA-COMPUTE/compute_stress_mop*.* @RomainVermorel src/EXTRA-COMPUTE/compute_born_matrix.* @Bibobu @athomps src/EXTRA-DUMP/dump_extxyz.* @fxcoudert src/EXTRA-FIX/fix_deform_pressure.* @jtclemm -src/EXTRA-PAIR/pair_dispersion_d3.* @soniasolomoni @arthurfl -src/EXTRA-PAIR/d3_parameters.h @soniasolomoni @arthurfl +src/EXTRA-PAIR/pair_dispersion_d3.* @soniasalomoni @arthurfl +src/EXTRA-PAIR/d3_parameters.h @soniasalomoni @arthurfl src/MISC/*_tracker.* @jtclemm src/MC/fix_gcmc.* @athomps src/MC/fix_sgcmc.* @athomps diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md index 891d1ad5e5..46f172202f 100644 --- a/.github/CONTRIBUTING.md +++ b/.github/CONTRIBUTING.md @@ -31,19 +31,12 @@ __ ## How Can I Contribute? -There are several ways how you can actively contribute to the LAMMPS project: you can discuss compiling and using LAMMPS, and solving LAMMPS related problems with other LAMMPS users on the lammps-users mailing list or the forum, you can report bugs or suggest enhancements by creating issues on GitHub (or posting them to the lammps-users mailing list or posting in the LAMMPS Materials Science Discourse forum), and you can contribute by submitting pull requests on GitHub or e-mail your code +There are several ways you can actively contribute to the LAMMPS project: you can discuss compiling and using LAMMPS, and solving LAMMPS-related problems with other LAMMPS users in the forum, you can report bugs or suggest enhancements by creating issues on GitHub (or by posting in the LAMMPS Materials Science Discourse forum), and you can contribute by submitting pull requests on GitHub or e-mailing your code
As you may see from the aforementioned developer page, the LAMMPS software package includes the efforts of a very large number of contributors beyond the principal authors and maintainers. ### Discussing How To Use LAMMPS -The LAMMPS mailing list is hosted at SourceForge. The mailing list began in 2005, and now includes tens of thousands of messages in thousands of threads. LAMMPS developers try to respond to posted questions in a timely manner, but there are no guarantees. Please consider that people live in different timezone and may not have time to answer e-mails outside of their work hours. -You can post to list by sending your email to lammps-users at lists.sourceforge.net (no subscription required), but before posting, please read the [mailing list guidelines](https://www.lammps.org/guidelines.html) to maximize your chances to receive a helpful response. - -Anyone can browse/search previous questions/answers in the archives. You do not have to subscribe to the list to post questions, receive answers (to your questions), or browse/search the archives. You **do** need to subscribe to the list if you want emails for **all** the posts (as individual messages or in digest form), or to answer questions yourself. Feel free to sign up and help us out! Answering questions from fellow LAMMPS users is a great way to pay back the community for providing you a useful tool for free, and to pass on the advice you have received yourself to others. It improves your karma and helps you understand your own research better. - -If you post a message and you are a subscriber, your message will appear immediately. If you are not a subscriber, your message will be moderated, which typically takes one business day. Either way, when someone replies the reply will usually be sent to both, your personal email address and the mailing list. When replying to people, that responded to your post to the list, please always included the mailing list in your replies (i.e. 
use "Reply All" and **not** "Reply"). Responses will appear on the list in a few minutes, but it can take a few hours for postings and replies to show up in the SourceForge archive. Sending replies also to the mailing list is important, so that responses are archived and people with a similar issue can search for possible solutions in the mailing list archive. - -The LAMMPS Materials Science Discourse forum was created recently to facilitate discussion not just about LAMMPS and as part of a larger effort towards building a materials science community. The forum contains a read-only sub-category with the continually updated mailing list archive, so you won't miss anything by joining only the forum and not the mailing list. +The LAMMPS Materials Science Discourse forum was created to facilitate discussion not just about LAMMPS, as part of a larger effort towards building a materials science community. The forum contains a read-only sub-category with the LAMMPS mailing list archive. ### Reporting Bugs @@ -52,14 +45,14 @@ While developers writing code for LAMMPS are careful to test their code, LAMMPS When you click on the green "New Issue" button, you will be provided with a text field, where you can enter your message. That text field with contain a template with several headlines and some descriptions. Keep the headlines that are relevant to your reported potential bug and replace the descriptions with the information as suggested by the descriptions. You can also attach small text files (please add the file name extension `.txt` or it will be rejected), images, or small compressed text files (using gzip, do not use RAR or 7-ZIP or similar tools that are uncommon outside of Windows machines).
In many cases, bugs are best illustrated by providing a small input deck (do **not** attach your entire production input, but remove everything that is not required to reproduce the issue, and scale down your system size, that the resulting calculation runs fast and can be run on small desktop quickly). -To be able to submit an issue on GitHub, you have to register for an account (for GitHub in general). If you do not want to do that, or have other reservations against submitting an issue there, you can - as an alternative and in decreasing preference - either send an e-mail to the lammps-users mailing list, the original authors of the feature that you suspect to be affected, or one or more of the core LAMMPS developers. +To be able to submit an issue on GitHub, you have to register for a GitHub account. If you do not want to do that, or have other reservations against submitting an issue there, you can - as an alternative and in decreasing order of preference - send an e-mail to the original authors of the feature you suspect is affected, to developers@lammps.org, or directly to one or more of the core LAMMPS developers. ### Suggesting Enhancements The LAMMPS developers welcome suggestions for enhancements or new features. These should be submitted using the [GitHub Issue Tracker](https://github.com/lammps/lammps/issues) of the LAMMPS project. This is particularly recommended, when you plan to implement the feature or enhancement yourself, as this allows to coordinate in case there are other similar or conflicting ongoing developments. The LAMMPS developers will review your submission and consider implementing it.
Whether this will actually happen depends on many factors: how difficult it would be, how much effort it would take, how many users would benefit from it, how well the individual developer would understand the underlying physics of the feature, and whether this is a feature that would fit into a software like LAMMPS, or would be better implemented as a separate tool. Because of these factors, it matters how well the suggested enhancement is formulated and the overall benefit is argued convincingly. -To be able to submit an issue on GitHub, you have to register for an account (for GitHub in general). If you do not want to do that, or have other reservations against submitting an issue there, you can - as an alternative - send an e-mail to the lammps-users mailing list. +To be able to submit an issue on GitHub, you have to register for a GitHub account. ### Contributing Code diff --git a/.github/release_steps.md b/.github/release_steps.md index 1ffd3cb291..4ecd05d043 100644 --- a/.github/release_steps.md +++ b/.github/release_steps.md @@ -104,13 +104,13 @@ with a future release) from the `lammps-static` folder.
rm -rf release-packages mkdir release-packages cd release-packages -wget https://download.lammps.org/static/fedora41_musl.sif -apptainer shell fedora41_musl.sif +wget https://download.lammps.org/static/fedora41_musl_mingw.sif +apptainer shell fedora41_musl_mingw.sif git clone -b release --depth 10 https://github.com/lammps/lammps.git lammps-release cmake -S lammps-release/cmake -B build-release -G Ninja -D CMAKE_INSTALL_PREFIX=$PWD/lammps-static -D CMAKE_TOOLCHAIN_FILE=/usr/musl/share/cmake/linux-musl.cmake -C lammps-release/cmake/presets/most.cmake -C lammps-release/cmake/presets/kokkos-openmp.cmake -D DOWNLOAD_POTENTIALS=OFF -D BUILD_MPI=OFF -D BUILD_TESTING=OFF -D CMAKE_BUILD_TYPE=Release -D PKG_ATC=ON -D PKG_AWPMD=ON -D PKG_MANIFOLD=ON -D PKG_MESONT=ON -D PKG_MGPT=ON -D PKG_ML-PACE=ON -D PKG_ML-RANN=ON -D PKG_MOLFILE=ON -D PKG_PTM=ON -D PKG_QTB=ON -D PKG_SMTBQ=ON cmake --build build-release --target all cmake --build build-release --target install -/usr/musl/bin/x86_64-linux-musl-strip lammps-static/bin/* +/usr/musl/bin/x86_64-linux-musl-strip -g lammps-static/bin/* tar -czvvf ../lammps-linux-x86_64-4Feb2025.tar.gz lammps-static exit # fedora 41 container cd .. @@ -204,7 +204,7 @@ cd .. rm -r release-packages ``` -#### Build Multi-arch App-bundle for macOS +#### Build Multi-arch App-bundle with GUI for macOS Building app-bundles for macOS is not as easily automated and portable as some of the other steps. It requires a machine actually running @@ -251,7 +251,7 @@ attached to the GitHub release page. We are currently building the application images on macOS 12 (aka Monterey). 
-#### Build Linux x86_64 binary tarball on Ubuntu 20.04LTS +#### Build Linux x86_64 binary tarball with GUI on Ubuntu 20.04LTS While the flatpak Linux version uses portable runtime libraries provided by the flatpak environment, we also build regular Linux executables that diff --git a/.github/workflows/check-cpp23.yml b/.github/workflows/check-cpp23.yml index 2cd53f2208..dfda1a4da8 100644 --- a/.github/workflows/check-cpp23.yml +++ b/.github/workflows/check-cpp23.yml @@ -1,4 +1,4 @@ -# GitHub action to build LAMMPS on Linux with gcc and C++23 +# GitHub action to build LAMMPS on Linux with gcc or clang and C++23 name: "Check for C++23 Compatibility" on: @@ -11,17 +11,25 @@ on: workflow_dispatch: +concurrency: + group: ${{ github.event_name }}-${{ github.workflow }}-${{ github.ref }} + cancel-in-progress: ${{github.event_name == 'pull_request'}} + jobs: build: name: Build with C++23 support enabled if: ${{ github.repository == 'lammps/lammps' }} runs-on: ubuntu-latest + strategy: + max-parallel: 2 + matrix: + idx: [ gcc, clang ] env: CCACHE_DIR: ${{ github.workspace }}/.ccache steps: - name: Checkout repository - uses: actions/checkout@v4 + uses: actions/checkout@v5 with: fetch-depth: 2 @@ -29,8 +37,11 @@ jobs: run: | sudo apt-get update sudo apt-get install -y ccache \ - libeigen3-dev \ + clang \ libcurl4-openssl-dev \ + libeigen3-dev \ + libfftw3-dev \ + libomp-dev \ mold \ mpi-default-bin \ mpi-default-dev \ @@ -44,8 +55,8 @@ jobs: uses: actions/cache@v4 with: path: ${{ env.CCACHE_DIR }} - key: linux-cpp23-ccache-${{ github.sha }} - restore-keys: linux-cpp23-ccache- + key: linux-cpp23-ccache-${{ matrix.idx }}-${{ github.sha }} + restore-keys: linux-cpp23-ccache-${{ matrix.idx }} - name: Building LAMMPS via CMake shell: bash @@ -58,14 +69,14 @@ jobs: cmake -S cmake -B build \ -C cmake/presets/most.cmake \ -C cmake/presets/kokkos-openmp.cmake \ + -C cmake/presets/${{ matrix.idx }}.cmake \ -D CMAKE_CXX_STANDARD=23 \ - -D CMAKE_CXX_COMPILER=g++ \ - -D 
CMAKE_C_COMPILER=gcc \ -D CMAKE_CXX_COMPILER_LAUNCHER=ccache \ -D CMAKE_C_COMPILER_LAUNCHER=ccache \ -D CMAKE_BUILD_TYPE=Debug \ -D CMAKE_CXX_FLAGS_DEBUG="-Og -g" \ -D DOWNLOAD_POTENTIALS=off \ + -D FFT=KISS \ -D BUILD_MPI=on \ -D BUILD_SHARED_LIBS=on \ -D BUILD_TOOLS=off \ diff --git a/.github/workflows/check-vla.yml b/.github/workflows/check-vla.yml index 94e367be33..b08985442f 100644 --- a/.github/workflows/check-vla.yml +++ b/.github/workflows/check-vla.yml @@ -21,7 +21,7 @@ jobs: steps: - name: Checkout repository - uses: actions/checkout@v4 + uses: actions/checkout@v5 with: fetch-depth: 2 diff --git a/.github/workflows/codeql-analysis.yml b/.github/workflows/codeql-analysis.yml index c7dd945f5f..ce12e94158 100644 --- a/.github/workflows/codeql-analysis.yml +++ b/.github/workflows/codeql-analysis.yml @@ -25,7 +25,7 @@ jobs: steps: - name: Checkout repository - uses: actions/checkout@v4 + uses: actions/checkout@v5 with: fetch-depth: 2 diff --git a/.github/workflows/compile-msvc.yml b/.github/workflows/compile-msvc.yml index 7560bc0549..5e525678ac 100644 --- a/.github/workflows/compile-msvc.yml +++ b/.github/workflows/compile-msvc.yml @@ -25,7 +25,7 @@ jobs: steps: - name: Checkout repository - uses: actions/checkout@v4 + uses: actions/checkout@v5 with: fetch-depth: 2 diff --git a/.github/workflows/coverity.yml b/.github/workflows/coverity.yml index 2691b9e895..86a881094d 100644 --- a/.github/workflows/coverity.yml +++ b/.github/workflows/coverity.yml @@ -12,11 +12,11 @@ jobs: if: ${{ github.repository == 'lammps/lammps' }} runs-on: ubuntu-latest container: - image: lammps/buildenv:ubuntu20.04 + image: lammps/buildenv:ubuntu22.04 steps: - name: Checkout repository - uses: actions/checkout@v4 + uses: actions/checkout@v5 with: fetch-depth: 2 @@ -58,9 +58,7 @@ jobs: -D BUILD_OMP=on \ -D BUILD_SHARED_LIBS=on \ -D LAMMPS_SIZES=SMALLBIG \ - -D LAMMPS_EXCEPTIONS=off \ - -D PKG_ATC=on \ - -D PKG_AWPMD=on \ + -D DOWNLOAD_POTENTIALS=off \ -D PKG_H5MD=on \ -D PKG_INTEL=on 
\ -D PKG_LATBOLTZ=on \ diff --git a/.github/workflows/full-regression.yml b/.github/workflows/full-regression.yml index a6b5353b9b..317b716377 100644 --- a/.github/workflows/full-regression.yml +++ b/.github/workflows/full-regression.yml @@ -23,7 +23,7 @@ jobs: steps: - name: Checkout repository - uses: actions/checkout@v4 + uses: actions/checkout@v5 with: fetch-depth: 2 show-progress: false diff --git a/.github/workflows/lammps-gui-flatpak.yml b/.github/workflows/lammps-gui-flatpak.yml index d7dc602476..8fe50f170b 100644 --- a/.github/workflows/lammps-gui-flatpak.yml +++ b/.github/workflows/lammps-gui-flatpak.yml @@ -20,7 +20,7 @@ jobs: steps: - name: Checkout repository - uses: actions/checkout@v4 + uses: actions/checkout@v5 with: fetch-depth: 2 @@ -42,11 +42,11 @@ jobs: - name: Build flatpak run: | mkdir flatpack-state - sed -i -e 's/branch:.*/branch: develop/' tools/lammps-gui/org.lammps.lammps-gui.yml + sed -i -e 's/branch:.*/branch: develop/' cmake/packaging/org.lammps.lammps-gui.yml flatpak-builder --force-clean --verbose --repo=flatpak-repo \ --install-deps-from=flathub --state-dir=flatpak-state \ --user --ccache --default-branch=${{ github.ref_name }} \ - flatpak-build tools/lammps-gui/org.lammps.lammps-gui.yml + flatpak-build cmake/packaging/org.lammps.lammps-gui.yml flatpak build-bundle --runtime-repo=https://flathub.org/repo/flathub.flatpakrepo \ --verbose flatpak-repo LAMMPS-Linux-x86_64-GUI.flatpak \ org.lammps.lammps-gui ${{ github.ref_name }} diff --git a/.github/workflows/quick-regression.yml b/.github/workflows/quick-regression.yml index 88794bfa0a..a640b5d8c0 100644 --- a/.github/workflows/quick-regression.yml +++ b/.github/workflows/quick-regression.yml @@ -27,7 +27,7 @@ jobs: steps: - name: Checkout repository - uses: actions/checkout@v4 + uses: actions/checkout@v5 with: fetch-depth: 0 show-progress: false diff --git a/.github/workflows/style-check.yml b/.github/workflows/style-check.yml index e3567140fb..12b4c6e2ab 100644 --- 
a/.github/workflows/style-check.yml +++ b/.github/workflows/style-check.yml @@ -23,7 +23,7 @@ jobs: steps: - name: Checkout repository - uses: actions/checkout@v4 + uses: actions/checkout@v5 with: fetch-depth: 1 diff --git a/.github/workflows/unittest-arm64.yml b/.github/workflows/unittest-arm64.yml index 094c5fb0c1..d0038d73c8 100644 --- a/.github/workflows/unittest-arm64.yml +++ b/.github/workflows/unittest-arm64.yml @@ -21,7 +21,7 @@ jobs: steps: - name: Checkout repository - uses: actions/checkout@v4 + uses: actions/checkout@v5 with: fetch-depth: 2 diff --git a/.github/workflows/unittest-linux.yml b/.github/workflows/unittest-linux.yml index ce98fcea35..e79c715d13 100644 --- a/.github/workflows/unittest-linux.yml +++ b/.github/workflows/unittest-linux.yml @@ -25,7 +25,7 @@ jobs: steps: - name: Checkout repository - uses: actions/checkout@v4 + uses: actions/checkout@v5 with: fetch-depth: 2 diff --git a/.github/workflows/unittest-macos.yml b/.github/workflows/unittest-macos.yml index 0d478a9d6b..e11c4e6ba4 100644 --- a/.github/workflows/unittest-macos.yml +++ b/.github/workflows/unittest-macos.yml @@ -25,7 +25,7 @@ jobs: steps: - name: Checkout repository - uses: actions/checkout@v4 + uses: actions/checkout@v5 with: fetch-depth: 2 diff --git a/SECURITY.md b/SECURITY.md index 1664dde169..a5b4195230 100644 --- a/SECURITY.md +++ b/SECURITY.md @@ -50,3 +50,23 @@ and then merged to `stable` and published as "updates". For a new stable release the `stable` branch is updated to the corresponding state of the `release` branch and a new stable tag is applied in addition to the release tag. + +# Integrity of Downloaded Archives + +For *all* files that can be downloaded from the "lammps.org" web server +we provide SHA-256 checksum data in files named SHA256SUM. These +checksums can be used to validate the integrity of the downloaded +archives. 
Please note that we also use symbolic links to point to +the latest or stable releases; the checksums for those files +*will* change because the symbolic links +will be updated for new releases. + +# Immutable GitHub Releases + +Starting with LAMMPS version 10 Sep 2025, the LAMMPS releases published +on GitHub are configured as `immutable`. This means that after the +release is published, neither the release tag nor any of the uploaded +assets (i.e. the source tarball, the static Linux executable tarball, +and the pre-compiled packages of LAMMPS with LAMMPS-GUI included) can be changed. +GitHub will generate a release attestation JSON file which can be +used to verify the integrity of the files provided with the release. diff --git a/bench/log.15Jul25.chain.fixed.g++.1 b/bench/log.15Jul25.chain.fixed.g++.1 new file mode 100644 index 0000000000..e7abdcb14f --- /dev/null +++ b/bench/log.15Jul25.chain.fixed.g++.1 @@ -0,0 +1,95 @@ +LAMMPS (12 Jun 2025 - Development - patch_12Jun2025-808-g67067cbc80) + using 1 OpenMP thread(s) per MPI task +# FENE beadspring benchmark + +units lj +atom_style bond +special_bonds fene + +read_data data.chain +Reading data file ... + orthogonal box = (-16.796 -16.796 -16.796) to (16.796 16.796 16.796) + 1 by 1 by 1 MPI processor grid + reading atoms ... + 32000 atoms + reading velocities ... + 32000 velocities + scanning bonds ... + 1 = max bonds/atom + orthogonal box = (-16.796 -16.796 -16.796) to (16.796 16.796 16.796) + 1 by 1 by 1 MPI processor grid + reading bonds ... + 31680 bonds +Finding 1-2 1-3 1-4 neighbors ...
+ special bond factors lj: 0 1 1 + special bond factors coul: 0 1 1 + 2 = max # of 1-2 neighbors + 2 = max # of special neighbors + special bonds CPU = 0.002 seconds + read_data CPU = 0.098 seconds + +neighbor 0.4 bin +neigh_modify every 1 delay 1 + +bond_style fene +bond_coeff 1 30.0 1.5 1.0 1.0 + +pair_style lj/cut 1.12 +pair_modify shift yes +pair_coeff 1 1 1.0 1.0 1.12 + +fix 1 all nve +fix 2 all langevin 1.0 1.0 10.0 904297 + +thermo 100 +timestep 0.012 + +run 100 +Generated 0 of 0 mixed pair_coeff terms from geometric mixing rule +WARNING: Communication cutoff 1.52 is shorter than a bond length based estimate of 1.855. This may lead to errors. (src/comm.cpp:743) +Neighbor list info ... + update: every = 1 steps, delay = 1 steps, check = yes + max neighbors/atom: 2000, page size: 100000 + master list distance cutoff = 1.52 + ghost atom cutoff = 1.52 + binsize = 0.76, bins = 45 45 45 + 1 neighbor lists, perpetual/occasional/extra = 1 0 0 + (1) pair lj/cut, perpetual + attributes: half, newton on + pair build: half/bin/newton + stencil: half/bin/3d + bin: standard +WARNING: Communication cutoff 1.52 is shorter than a bond length based estimate of 1.855. This may lead to errors. 
(src/comm.cpp:743) +Per MPI rank memory allocation (min/avg/max) = 13.2 | 13.2 | 13.2 Mbytes + Step Temp E_pair E_mol TotEng Press + 0 0.97029772 0.44484087 20.494523 22.394765 4.6721833 + 100 0.9729966 0.4361122 20.507698 22.40326 4.6548819 +Loop time of 0.531 on 1 procs for 100 steps with 32000 atoms + +Performance: 195254.376 tau/day, 188.324 timesteps/s, 6.026 Matom-step/s +99.6% CPU use with 1 MPI tasks x 1 OpenMP threads + +MPI task timing breakdown: +Section | min time | avg time | max time |%varavg| %total +--------------------------------------------------------------- +Pair | 0.092788 | 0.092788 | 0.092788 | 0.0 | 17.47 +Bond | 0.021754 | 0.021754 | 0.021754 | 0.0 | 4.10 +Neigh | 0.27771 | 0.27771 | 0.27771 | 0.0 | 52.30 +Comm | 0.0088421 | 0.0088421 | 0.0088421 | 0.0 | 1.67 +Output | 9.1455e-05 | 9.1455e-05 | 9.1455e-05 | 0.0 | 0.02 +Modify | 0.12461 | 0.12461 | 0.12461 | 0.0 | 23.47 +Other | | 0.005205 | | | 0.98 + +Nlocal: 32000 ave 32000 max 32000 min +Histogram: 1 0 0 0 0 0 0 0 0 0 +Nghost: 9493 ave 9493 max 9493 min +Histogram: 1 0 0 0 0 0 0 0 0 0 +Neighs: 155873 ave 155873 max 155873 min +Histogram: 1 0 0 0 0 0 0 0 0 0 + +Total # of neighbors = 155873 +Ave neighs/atom = 4.8710312 +Ave special neighs/atom = 1.98 +Neighbor list builds = 25 +Dangerous builds = 0 +Total wall time: 0:00:00 diff --git a/bench/log.15Jul25.chain.fixed.g++.4 b/bench/log.15Jul25.chain.fixed.g++.4 new file mode 100644 index 0000000000..f412a6c883 --- /dev/null +++ b/bench/log.15Jul25.chain.fixed.g++.4 @@ -0,0 +1,95 @@ +LAMMPS (12 Jun 2025 - Development - patch_12Jun2025-808-g67067cbc80) + using 1 OpenMP thread(s) per MPI task +# FENE beadspring benchmark + +units lj +atom_style bond +special_bonds fene + +read_data data.chain +Reading data file ... + orthogonal box = (-16.796 -16.796 -16.796) to (16.796 16.796 16.796) + 1 by 2 by 2 MPI processor grid + reading atoms ... + 32000 atoms + reading velocities ... + 32000 velocities + scanning bonds ... 
+ 1 = max bonds/atom + orthogonal box = (-16.796 -16.796 -16.796) to (16.796 16.796 16.796) + 1 by 2 by 2 MPI processor grid + reading bonds ... + 31680 bonds +Finding 1-2 1-3 1-4 neighbors ... + special bond factors lj: 0 1 1 + special bond factors coul: 0 1 1 + 2 = max # of 1-2 neighbors + 2 = max # of special neighbors + special bonds CPU = 0.001 seconds + read_data CPU = 0.081 seconds + +neighbor 0.4 bin +neigh_modify every 1 delay 1 + +bond_style fene +bond_coeff 1 30.0 1.5 1.0 1.0 + +pair_style lj/cut 1.12 +pair_modify shift yes +pair_coeff 1 1 1.0 1.0 1.12 + +fix 1 all nve +fix 2 all langevin 1.0 1.0 10.0 904297 + +thermo 100 +timestep 0.012 + +run 100 +Generated 0 of 0 mixed pair_coeff terms from geometric mixing rule +WARNING: Communication cutoff 1.52 is shorter than a bond length based estimate of 1.855. This may lead to errors. (src/comm.cpp:743) +Neighbor list info ... + update: every = 1 steps, delay = 1 steps, check = yes + max neighbors/atom: 2000, page size: 100000 + master list distance cutoff = 1.52 + ghost atom cutoff = 1.52 + binsize = 0.76, bins = 45 45 45 + 1 neighbor lists, perpetual/occasional/extra = 1 0 0 + (1) pair lj/cut, perpetual + attributes: half, newton on + pair build: half/bin/newton + stencil: half/bin/3d + bin: standard +WARNING: Communication cutoff 1.52 is shorter than a bond length based estimate of 1.855. This may lead to errors. 
(src/comm.cpp:743) +Per MPI rank memory allocation (min/avg/max) = 4.779 | 4.78 | 4.78 Mbytes + Step Temp E_pair E_mol TotEng Press + 0 0.97029772 0.44484087 20.494523 22.394765 4.6721833 + 100 0.97145835 0.43803883 20.502691 22.397872 4.626988 +Loop time of 0.141838 on 4 procs for 100 steps with 32000 atoms + +Performance: 730973.705 tau/day, 705.029 timesteps/s, 22.561 Matom-step/s +99.6% CPU use with 4 MPI tasks x 1 OpenMP threads + +MPI task timing breakdown: +Section | min time | avg time | max time |%varavg| %total +--------------------------------------------------------------- +Pair | 0.023828 | 0.023917 | 0.024042 | 0.1 | 16.86 +Bond | 0.0054718 | 0.0055341 | 0.0056027 | 0.1 | 3.90 +Neigh | 0.072672 | 0.07269 | 0.072704 | 0.0 | 51.25 +Comm | 0.0057901 | 0.00599 | 0.006123 | 0.2 | 4.22 +Output | 3.2532e-05 | 4.1552e-05 | 5.0617e-05 | 0.0 | 0.03 +Modify | 0.031418 | 0.031534 | 0.031657 | 0.0 | 22.23 +Other | | 0.002133 | | | 1.50 + +Nlocal: 8000 ave 8030 max 7974 min +Histogram: 1 0 0 1 0 1 0 0 0 1 +Nghost: 4177 ave 4191 max 4160 min +Histogram: 1 0 0 0 1 0 0 1 0 1 +Neighs: 38995.8 ave 39169 max 38852 min +Histogram: 1 0 0 1 1 0 0 0 0 1 + +Total # of neighbors = 155983 +Ave neighs/atom = 4.8744688 +Ave special neighs/atom = 1.98 +Neighbor list builds = 25 +Dangerous builds = 0 +Total wall time: 0:00:00 diff --git a/bench/log.15Jul25.chain.scaled.g++.4 b/bench/log.15Jul25.chain.scaled.g++.4 new file mode 100644 index 0000000000..f465efa7f8 --- /dev/null +++ b/bench/log.15Jul25.chain.scaled.g++.4 @@ -0,0 +1,95 @@ +LAMMPS (12 Jun 2025 - Development - patch_12Jun2025-784-g8c564460e6-modified) + using 1 OpenMP thread(s) per MPI task +# FENE beadspring benchmark + +units lj +atom_style bond +special_bonds fene + +read_data data.chain +Reading data file ... + orthogonal box = (-16.796 -16.796 -16.796) to (16.796 16.796 16.796) + 1 by 2 by 2 MPI processor grid + reading atoms ... + 32000 atoms + reading velocities ... + 32000 velocities + scanning bonds ... 
+ 1 = max bonds/atom + orthogonal box = (-16.796 -16.796 -16.796) to (16.796 16.796 16.796) + 1 by 2 by 2 MPI processor grid + reading bonds ... + 31680 bonds +Finding 1-2 1-3 1-4 neighbors ... + special bond factors lj: 0 1 1 + special bond factors coul: 0 1 1 + 2 = max # of 1-2 neighbors + 2 = max # of special neighbors + special bonds CPU = 0.001 seconds + read_data CPU = 0.085 seconds + +neighbor 0.4 bin +neigh_modify every 1 delay 1 + +bond_style fene +bond_coeff 1 30.0 1.5 1.0 1.0 + +pair_style lj/cut 1.12 +pair_modify shift yes +pair_coeff 1 1 1.0 1.0 1.12 + +fix 1 all nve +fix 2 all langevin 1.0 1.0 10.0 904297 + +thermo 100 +timestep 0.012 + +run 100 +Generated 0 of 0 mixed pair_coeff terms from geometric mixing rule +WARNING: Communication cutoff 1.52 is shorter than a bond length based estimate of 1.855. This may lead to errors. (src/comm.cpp:743) +Neighbor list info ... + update: every = 1 steps, delay = 1 steps, check = yes + max neighbors/atom: 2000, page size: 100000 + master list distance cutoff = 1.52 + ghost atom cutoff = 1.52 + binsize = 0.76, bins = 45 45 45 + 1 neighbor lists, perpetual/occasional/extra = 1 0 0 + (1) pair lj/cut, perpetual + attributes: half, newton on + pair build: half/bin/newton + stencil: half/bin/3d + bin: standard +WARNING: Communication cutoff 1.52 is shorter than a bond length based estimate of 1.855. This may lead to errors. 
(src/comm.cpp:743) +Per MPI rank memory allocation (min/avg/max) = 4.779 | 4.78 | 4.78 Mbytes + Step Temp E_pair E_mol TotEng Press + 0 0.97029772 0.44484087 20.494523 22.394765 4.6721833 + 100 0.97145835 0.43803883 20.502691 22.397872 4.626988 +Loop time of 0.14297 on 4 procs for 100 steps with 32000 atoms + +Performance: 725185.191 tau/day, 699.446 timesteps/s, 22.382 Matom-step/s +99.5% CPU use with 4 MPI tasks x 1 OpenMP threads + +MPI task timing breakdown: +Section | min time | avg time | max time |%varavg| %total +--------------------------------------------------------------- +Pair | 0.023853 | 0.023965 | 0.02413 | 0.1 | 16.76 +Bond | 0.0054567 | 0.0055557 | 0.0056918 | 0.1 | 3.89 +Neigh | 0.073792 | 0.073802 | 0.073811 | 0.0 | 51.62 +Comm | 0.0055967 | 0.0059012 | 0.0060867 | 0.3 | 4.13 +Output | 3.2282e-05 | 3.7126e-05 | 4.2381e-05 | 0.0 | 0.03 +Modify | 0.031379 | 0.03148 | 0.031647 | 0.1 | 22.02 +Other | | 0.002229 | | | 1.56 + +Nlocal: 8000 ave 8030 max 7974 min +Histogram: 1 0 0 1 0 1 0 0 0 1 +Nghost: 4177 ave 4191 max 4160 min +Histogram: 1 0 0 0 1 0 0 1 0 1 +Neighs: 38995.8 ave 39169 max 38852 min +Histogram: 1 0 0 1 1 0 0 0 0 1 + +Total # of neighbors = 155983 +Ave neighs/atom = 4.8744688 +Ave special neighs/atom = 1.98 +Neighbor list builds = 25 +Dangerous builds = 0 +Total wall time: 0:00:00 diff --git a/bench/log.15Jul25.chute.fixed.g++.1 b/bench/log.15Jul25.chute.fixed.g++.1 new file mode 100644 index 0000000000..621d4624e1 --- /dev/null +++ b/bench/log.15Jul25.chute.fixed.g++.1 @@ -0,0 +1,89 @@ +LAMMPS (12 Jun 2025 - Development - patch_12Jun2025-808-g67067cbc80) + using 1 OpenMP thread(s) per MPI task +# LAMMPS benchmark of granular flow +# chute flow of 32000 atoms with frozen base at 26 degrees + +units lj +atom_style sphere +boundary p p fs +newton off +comm_modify vel yes + +read_data data.chute +Reading data file ... + orthogonal box = (0 0 0) to (40 20 37.2886) + 1 by 1 by 1 MPI processor grid + reading atoms ... 
+ 32000 atoms + reading velocities ... + 32000 velocities + read_data CPU = 0.102 seconds + +pair_style gran/hooke/history 200000.0 NULL 50.0 NULL 0.5 0 +pair_coeff * * + +neighbor 0.1 bin +neigh_modify every 1 delay 0 + +timestep 0.0001 + +group bottom type 2 +912 atoms in group bottom +group active subtract all bottom +31088 atoms in group active +neigh_modify exclude group bottom bottom + +fix 1 all gravity 1.0 chute 26.0 +fix 2 bottom freeze +fix 3 active nve/sphere + +compute 1 all erotate/sphere +thermo_style custom step atoms ke c_1 vol +thermo_modify norm no +thermo 100 + +run 100 +Generated 0 of 1 mixed pair_coeff terms from geometric mixing rule +Neighbor list info ... + update: every = 1 steps, delay = 0 steps, check = yes + max neighbors/atom: 2000, page size: 100000 + master list distance cutoff = 1.1 + ghost atom cutoff = 1.1 + binsize = 0.55, bins = 73 37 68 + 1 neighbor lists, perpetual/occasional/extra = 1 0 0 + (1) pair gran/hooke/history, perpetual + attributes: half, newton off, size, history + pair build: half/size/bin/atomonly/newtoff + stencil: full/bin/3d + bin: standard +Per MPI rank memory allocation (min/avg/max) = 23.37 | 23.37 | 23.37 Mbytes + Step Atoms KinEng c_1 Volume + 0 32000 784139.13 1601.1263 29833.783 + 100 32000 784292.08 1571.0968 29834.707 +Loop time of 0.155391 on 1 procs for 100 steps with 32000 atoms + +Performance: 5560.184 tau/day, 643.540 timesteps/s, 20.593 Matom-step/s +99.6% CPU use with 1 MPI tasks x 1 OpenMP threads + +MPI task timing breakdown: +Section | min time | avg time | max time |%varavg| %total +--------------------------------------------------------------- +Pair | 0.099082 | 0.099082 | 0.099082 | 0.0 | 63.76 +Neigh | 0.013743 | 0.013743 | 0.013743 | 0.0 | 8.84 +Comm | 0.0042572 | 0.0042572 | 0.0042572 | 0.0 | 2.74 +Output | 0.00017358 | 0.00017358 | 0.00017358 | 0.0 | 0.11 +Modify | 0.033446 | 0.033446 | 0.033446 | 0.0 | 21.52 +Other | | 0.00469 | | | 3.02 + +Nlocal: 32000 ave 32000 max 32000 min 
+Histogram: 1 0 0 0 0 0 0 0 0 0 +Nghost: 5463 ave 5463 max 5463 min +Histogram: 1 0 0 0 0 0 0 0 0 0 +Neighs: 115133 ave 115133 max 115133 min +Histogram: 1 0 0 0 0 0 0 0 0 0 + +Total # of neighbors = 115133 +Ave neighs/atom = 3.5979062 +Neighbor list builds = 2 +Dangerous builds = 0 +Total wall time: 0:00:00 diff --git a/bench/log.15Jul25.chute.fixed.g++.4 b/bench/log.15Jul25.chute.fixed.g++.4 new file mode 100644 index 0000000000..b3295c8bb1 --- /dev/null +++ b/bench/log.15Jul25.chute.fixed.g++.4 @@ -0,0 +1,89 @@ +LAMMPS (12 Jun 2025 - Development - patch_12Jun2025-808-g67067cbc80) + using 1 OpenMP thread(s) per MPI task +# LAMMPS benchmark of granular flow +# chute flow of 32000 atoms with frozen base at 26 degrees + +units lj +atom_style sphere +boundary p p fs +newton off +comm_modify vel yes + +read_data data.chute +Reading data file ... + orthogonal box = (0 0 0) to (40 20 37.2886) + 2 by 1 by 2 MPI processor grid + reading atoms ... + 32000 atoms + reading velocities ... + 32000 velocities + read_data CPU = 0.071 seconds + +pair_style gran/hooke/history 200000.0 NULL 50.0 NULL 0.5 0 +pair_coeff * * + +neighbor 0.1 bin +neigh_modify every 1 delay 0 + +timestep 0.0001 + +group bottom type 2 +912 atoms in group bottom +group active subtract all bottom +31088 atoms in group active +neigh_modify exclude group bottom bottom + +fix 1 all gravity 1.0 chute 26.0 +fix 2 bottom freeze +fix 3 active nve/sphere + +compute 1 all erotate/sphere +thermo_style custom step atoms ke c_1 vol +thermo_modify norm no +thermo 100 + +run 100 +Generated 0 of 1 mixed pair_coeff terms from geometric mixing rule +Neighbor list info ... 
+ update: every = 1 steps, delay = 0 steps, check = yes + max neighbors/atom: 2000, page size: 100000 + master list distance cutoff = 1.1 + ghost atom cutoff = 1.1 + binsize = 0.55, bins = 73 37 68 + 1 neighbor lists, perpetual/occasional/extra = 1 0 0 + (1) pair gran/hooke/history, perpetual + attributes: half, newton off, size, history + pair build: half/size/bin/atomonly/newtoff + stencil: full/bin/3d + bin: standard +Per MPI rank memory allocation (min/avg/max) = 10.59 | 10.59 | 10.6 Mbytes + Step Atoms KinEng c_1 Volume + 0 32000 784139.13 1601.1263 29833.783 + 100 32000 784292.08 1571.0968 29834.707 +Loop time of 0.0451259 on 4 procs for 100 steps with 32000 atoms + +Performance: 19146.451 tau/day, 2216.024 timesteps/s, 70.913 Matom-step/s +99.1% CPU use with 4 MPI tasks x 1 OpenMP threads + +MPI task timing breakdown: +Section | min time | avg time | max time |%varavg| %total +--------------------------------------------------------------- +Pair | 0.02312 | 0.023875 | 0.024935 | 0.5 | 52.91 +Neigh | 0.0047685 | 0.0048664 | 0.0049412 | 0.1 | 10.78 +Comm | 0.0039881 | 0.0041343 | 0.0042858 | 0.2 | 9.16 +Output | 5.0517e-05 | 6.8193e-05 | 8.5122e-05 | 0.0 | 0.15 +Modify | 0.0088343 | 0.0089082 | 0.0089893 | 0.1 | 19.74 +Other | | 0.003274 | | | 7.26 + +Nlocal: 8000 ave 8008 max 7992 min +Histogram: 2 0 0 0 0 0 0 0 0 2 +Nghost: 2439 ave 2450 max 2428 min +Histogram: 2 0 0 0 0 0 0 0 0 2 +Neighs: 29500.5 ave 30488 max 28513 min +Histogram: 2 0 0 0 0 0 0 0 0 2 + +Total # of neighbors = 118002 +Ave neighs/atom = 3.6875625 +Neighbor list builds = 2 +Dangerous builds = 0 +Total wall time: 0:00:00 diff --git a/bench/log.15Jul25.chute.scaled.g++.4 b/bench/log.15Jul25.chute.scaled.g++.4 new file mode 100644 index 0000000000..cb3472af18 --- /dev/null +++ b/bench/log.15Jul25.chute.scaled.g++.4 @@ -0,0 +1,89 @@ +LAMMPS (12 Jun 2025 - Development - patch_12Jun2025-784-g8c564460e6-modified) + using 1 OpenMP thread(s) per MPI task +# LAMMPS benchmark of granular flow +# chute 
flow of 32000 atoms with frozen base at 26 degrees + +units lj +atom_style sphere +boundary p p fs +newton off +comm_modify vel yes + +read_data data.chute +Reading data file ... + orthogonal box = (0 0 0) to (40 20 37.2886) + 2 by 1 by 2 MPI processor grid + reading atoms ... + 32000 atoms + reading velocities ... + 32000 velocities + read_data CPU = 0.076 seconds + +pair_style gran/hooke/history 200000.0 NULL 50.0 NULL 0.5 0 +pair_coeff * * + +neighbor 0.1 bin +neigh_modify every 1 delay 0 + +timestep 0.0001 + +group bottom type 2 +912 atoms in group bottom +group active subtract all bottom +31088 atoms in group active +neigh_modify exclude group bottom bottom + +fix 1 all gravity 1.0 chute 26.0 +fix 2 bottom freeze +fix 3 active nve/sphere + +compute 1 all erotate/sphere +thermo_style custom step atoms ke c_1 vol +thermo_modify norm no +thermo 100 + +run 100 +Generated 0 of 1 mixed pair_coeff terms from geometric mixing rule +Neighbor list info ... + update: every = 1 steps, delay = 0 steps, check = yes + max neighbors/atom: 2000, page size: 100000 + master list distance cutoff = 1.1 + ghost atom cutoff = 1.1 + binsize = 0.55, bins = 73 37 68 + 1 neighbor lists, perpetual/occasional/extra = 1 0 0 + (1) pair gran/hooke/history, perpetual + attributes: half, newton off, size, history + pair build: half/size/bin/atomonly/newtoff + stencil: full/bin/3d + bin: standard +Per MPI rank memory allocation (min/avg/max) = 10.59 | 10.59 | 10.6 Mbytes + Step Atoms KinEng c_1 Volume + 0 32000 784139.13 1601.1263 29833.783 + 100 32000 784292.08 1571.0968 29834.707 +Loop time of 0.0434391 on 4 procs for 100 steps with 32000 atoms + +Performance: 19889.900 tau/day, 2302.072 timesteps/s, 73.666 Matom-step/s +99.2% CPU use with 4 MPI tasks x 1 OpenMP threads + +MPI task timing breakdown: +Section | min time | avg time | max time |%varavg| %total +--------------------------------------------------------------- +Pair | 0.021299 | 0.022774 | 0.024215 | 0.8 | 52.43 +Neigh | 0.0046727 
| 0.0047926 | 0.0048429 | 0.1 | 11.03 +Comm | 0.0037461 | 0.003889 | 0.0040378 | 0.2 | 8.95 +Output | 5.0456e-05 | 7.1489e-05 | 9.1084e-05 | 0.0 | 0.16 +Modify | 0.0087177 | 0.0087334 | 0.008752 | 0.0 | 20.10 +Other | | 0.003179 | | | 7.32 + +Nlocal: 8000 ave 8008 max 7992 min +Histogram: 2 0 0 0 0 0 0 0 0 2 +Nghost: 2439 ave 2450 max 2428 min +Histogram: 2 0 0 0 0 0 0 0 0 2 +Neighs: 29500.5 ave 30488 max 28513 min +Histogram: 2 0 0 0 0 0 0 0 0 2 + +Total # of neighbors = 118002 +Ave neighs/atom = 3.6875625 +Neighbor list builds = 2 +Dangerous builds = 0 +Total wall time: 0:00:00 diff --git a/bench/log.15Jul25.eam.fixed.g++.1 b/bench/log.15Jul25.eam.fixed.g++.1 new file mode 100644 index 0000000000..db601ecfeb --- /dev/null +++ b/bench/log.15Jul25.eam.fixed.g++.1 @@ -0,0 +1,91 @@ +LAMMPS (12 Jun 2025 - Development - patch_12Jun2025-808-g67067cbc80) + using 1 OpenMP thread(s) per MPI task +# bulk Cu lattice + +variable x index 1 +variable y index 1 +variable z index 1 + +variable xx equal 20*$x +variable xx equal 20*1 +variable yy equal 20*$y +variable yy equal 20*1 +variable zz equal 20*$z +variable zz equal 20*1 + +units metal +atom_style atomic + +lattice fcc 3.615 +Lattice spacing in x,y,z = 3.615 3.615 3.615 +region box block 0 ${xx} 0 ${yy} 0 ${zz} +region box block 0 20 0 ${yy} 0 ${zz} +region box block 0 20 0 20 0 ${zz} +region box block 0 20 0 20 0 20 +create_box 1 box +Created orthogonal box = (0 0 0) to (72.3 72.3 72.3) + 1 by 1 by 1 MPI processor grid +create_atoms 1 box +Created 32000 atoms + using lattice units in orthogonal box = (0 0 0) to (72.3 72.3 72.3) + create_atoms CPU = 0.004 seconds + +pair_style eam +pair_coeff 1 1 Cu_u3.eam +Reading eam potential file Cu_u3.eam with DATE: 2007-06-11 + +velocity all create 1600.0 376847 loop geom + +neighbor 1.0 bin +neigh_modify every 1 delay 5 check yes + +fix 1 all nve + +timestep 0.005 +thermo 50 + +run 100 +Neighbor list info ... 
+ update: every = 1 steps, delay = 5 steps, check = yes + max neighbors/atom: 2000, page size: 100000 + master list distance cutoff = 5.95 + ghost atom cutoff = 5.95 + binsize = 2.975, bins = 25 25 25 + 1 neighbor lists, perpetual/occasional/extra = 1 0 0 + (1) pair eam, perpetual + attributes: half, newton on + pair build: half/bin/atomonly/newton + stencil: half/bin/3d + bin: standard +Per MPI rank memory allocation (min/avg/max) = 16.83 | 16.83 | 16.83 Mbytes + Step Temp E_pair E_mol TotEng Press + 0 1600 -113280 0 -106662.09 18703.573 + 50 781.69049 -109873.35 0 -106640.13 52273.088 + 100 801.832 -109957.3 0 -106640.77 51322.821 +Loop time of 2.34899 on 1 procs for 100 steps with 32000 atoms + +Performance: 18.391 ns/day, 1.305 hours/ns, 42.572 timesteps/s, 1.362 Matom-step/s +99.6% CPU use with 1 MPI tasks x 1 OpenMP threads + +MPI task timing breakdown: +Section | min time | avg time | max time |%varavg| %total +--------------------------------------------------------------- +Pair | 2.0045 | 2.0045 | 2.0045 | 0.0 | 85.33 +Neigh | 0.3191 | 0.3191 | 0.3191 | 0.0 | 13.58 +Comm | 0.0084135 | 0.0084135 | 0.0084135 | 0.0 | 0.36 +Output | 0.00019136 | 0.00019136 | 0.00019136 | 0.0 | 0.01 +Modify | 0.012899 | 0.012899 | 0.012899 | 0.0 | 0.55 +Other | | 0.003925 | | | 0.17 + +Nlocal: 32000 ave 32000 max 32000 min +Histogram: 1 0 0 0 0 0 0 0 0 0 +Nghost: 19909 ave 19909 max 19909 min +Histogram: 1 0 0 0 0 0 0 0 0 0 +Neighs: 1.20778e+06 ave 1.20778e+06 max 1.20778e+06 min +Histogram: 1 0 0 0 0 0 0 0 0 0 + +Total # of neighbors = 1207784 +Ave neighs/atom = 37.74325 +Neighbor list builds = 13 +Dangerous builds = 0 +Total wall time: 0:00:02 diff --git a/bench/log.15Jul25.eam.fixed.g++.4 b/bench/log.15Jul25.eam.fixed.g++.4 new file mode 100644 index 0000000000..c408513747 --- /dev/null +++ b/bench/log.15Jul25.eam.fixed.g++.4 @@ -0,0 +1,91 @@ +LAMMPS (12 Jun 2025 - Development - patch_12Jun2025-808-g67067cbc80) + using 1 OpenMP thread(s) per MPI task +# bulk Cu lattice + 
+variable x index 1 +variable y index 1 +variable z index 1 + +variable xx equal 20*$x +variable xx equal 20*1 +variable yy equal 20*$y +variable yy equal 20*1 +variable zz equal 20*$z +variable zz equal 20*1 + +units metal +atom_style atomic + +lattice fcc 3.615 +Lattice spacing in x,y,z = 3.615 3.615 3.615 +region box block 0 ${xx} 0 ${yy} 0 ${zz} +region box block 0 20 0 ${yy} 0 ${zz} +region box block 0 20 0 20 0 ${zz} +region box block 0 20 0 20 0 20 +create_box 1 box +Created orthogonal box = (0 0 0) to (72.3 72.3 72.3) + 1 by 2 by 2 MPI processor grid +create_atoms 1 box +Created 32000 atoms + using lattice units in orthogonal box = (0 0 0) to (72.3 72.3 72.3) + create_atoms CPU = 0.001 seconds + +pair_style eam +pair_coeff 1 1 Cu_u3.eam +Reading eam potential file Cu_u3.eam with DATE: 2007-06-11 + +velocity all create 1600.0 376847 loop geom + +neighbor 1.0 bin +neigh_modify every 1 delay 5 check yes + +fix 1 all nve + +timestep 0.005 +thermo 50 + +run 100 +Neighbor list info ... 
+ update: every = 1 steps, delay = 5 steps, check = yes + max neighbors/atom: 2000, page size: 100000 + master list distance cutoff = 5.95 + ghost atom cutoff = 5.95 + binsize = 2.975, bins = 25 25 25 + 1 neighbor lists, perpetual/occasional/extra = 1 0 0 + (1) pair eam, perpetual + attributes: half, newton on + pair build: half/bin/atomonly/newton + stencil: half/bin/3d + bin: standard +Per MPI rank memory allocation (min/avg/max) = 7.382 | 7.382 | 7.382 Mbytes + Step Temp E_pair E_mol TotEng Press + 0 1600 -113280 0 -106662.09 18703.573 + 50 781.69049 -109873.35 0 -106640.13 52273.088 + 100 801.832 -109957.3 0 -106640.77 51322.821 +Loop time of 0.632615 on 4 procs for 100 steps with 32000 atoms + +Performance: 68.288 ns/day, 0.351 hours/ns, 158.074 timesteps/s, 5.058 Matom-step/s +99.4% CPU use with 4 MPI tasks x 1 OpenMP threads + +MPI task timing breakdown: +Section | min time | avg time | max time |%varavg| %total +--------------------------------------------------------------- +Pair | 0.53032 | 0.53166 | 0.5323 | 0.1 | 84.04 +Neigh | 0.083688 | 0.083977 | 0.084342 | 0.1 | 13.27 +Comm | 0.010172 | 0.010735 | 0.011792 | 0.6 | 1.70 +Output | 6.7649e-05 | 7.0092e-05 | 7.305e-05 | 0.0 | 0.01 +Modify | 0.0040301 | 0.0041093 | 0.0042195 | 0.1 | 0.65 +Other | | 0.002059 | | | 0.33 + +Nlocal: 8000 ave 8008 max 7993 min +Histogram: 2 0 0 0 0 0 0 0 1 1 +Nghost: 9130.25 ave 9138 max 9122 min +Histogram: 2 0 0 0 0 0 0 0 0 2 +Neighs: 301946 ave 302392 max 301360 min +Histogram: 1 0 0 0 1 0 0 0 1 1 + +Total # of neighbors = 1207784 +Ave neighs/atom = 37.74325 +Neighbor list builds = 13 +Dangerous builds = 0 +Total wall time: 0:00:00 diff --git a/bench/log.15Jul25.eam.scaled.g++.4 b/bench/log.15Jul25.eam.scaled.g++.4 new file mode 100644 index 0000000000..f0862e055d --- /dev/null +++ b/bench/log.15Jul25.eam.scaled.g++.4 @@ -0,0 +1,91 @@ +LAMMPS (12 Jun 2025 - Development - patch_12Jun2025-784-g8c564460e6-modified) + using 1 OpenMP thread(s) per MPI task +# bulk Cu lattice + 
+variable x index 1 +variable y index 1 +variable z index 1 + +variable xx equal 20*$x +variable xx equal 20*2 +variable yy equal 20*$y +variable yy equal 20*2 +variable zz equal 20*$z +variable zz equal 20*1 + +units metal +atom_style atomic + +lattice fcc 3.615 +Lattice spacing in x,y,z = 3.615 3.615 3.615 +region box block 0 ${xx} 0 ${yy} 0 ${zz} +region box block 0 40 0 ${yy} 0 ${zz} +region box block 0 40 0 40 0 ${zz} +region box block 0 40 0 40 0 20 +create_box 1 box +Created orthogonal box = (0 0 0) to (144.6 144.6 72.3) + 2 by 2 by 1 MPI processor grid +create_atoms 1 box +Created 128000 atoms + using lattice units in orthogonal box = (0 0 0) to (144.6 144.6 72.3) + create_atoms CPU = 0.004 seconds + +pair_style eam +pair_coeff 1 1 Cu_u3.eam +Reading eam potential file Cu_u3.eam with DATE: 2007-06-11 + +velocity all create 1600.0 376847 loop geom + +neighbor 1.0 bin +neigh_modify every 1 delay 5 check yes + +fix 1 all nve + +timestep 0.005 +thermo 50 + +run 100 +Neighbor list info ... 
+ update: every = 1 steps, delay = 5 steps, check = yes + max neighbors/atom: 2000, page size: 100000 + master list distance cutoff = 5.95 + ghost atom cutoff = 5.95 + binsize = 2.975, bins = 49 49 25 + 1 neighbor lists, perpetual/occasional/extra = 1 0 0 + (1) pair eam, perpetual + attributes: half, newton on + pair build: half/bin/atomonly/newton + stencil: half/bin/3d + bin: standard +Per MPI rank memory allocation (min/avg/max) = 17.13 | 17.13 | 17.13 Mbytes + Step Temp E_pair E_mol TotEng Press + 0 1600 -453120 0 -426647.73 18704.012 + 50 779.50001 -439457.02 0 -426560.06 52355.276 + 100 797.97828 -439764.76 0 -426562.07 51474.74 +Loop time of 2.6471 on 4 procs for 100 steps with 128000 atoms + +Performance: 16.320 ns/day, 1.471 hours/ns, 37.777 timesteps/s, 4.835 Matom-step/s +99.5% CPU use with 4 MPI tasks x 1 OpenMP threads + +MPI task timing breakdown: +Section | min time | avg time | max time |%varavg| %total +--------------------------------------------------------------- +Pair | 2.1797 | 2.1945 | 2.2029 | 0.6 | 82.90 +Neigh | 0.36186 | 0.36355 | 0.36453 | 0.2 | 13.73 +Comm | 0.038468 | 0.049112 | 0.066931 | 5.0 | 1.86 +Output | 0.00019847 | 0.00021211 | 0.00023473 | 0.0 | 0.01 +Modify | 0.027395 | 0.028328 | 0.029054 | 0.4 | 1.07 +Other | | 0.01136 | | | 0.43 + +Nlocal: 32000 ave 32092 max 31914 min +Histogram: 1 0 0 1 0 1 0 0 0 1 +Nghost: 19910 ave 19997 max 19818 min +Histogram: 1 0 0 0 1 0 1 0 0 1 +Neighs: 1.20728e+06 ave 1.21142e+06 max 1.2036e+06 min +Histogram: 1 0 0 1 1 0 0 0 0 1 + +Total # of neighbors = 4829126 +Ave neighs/atom = 37.727547 +Neighbor list builds = 14 +Dangerous builds = 0 +Total wall time: 0:00:02 diff --git a/bench/log.15Jul25.lj.fixed.g++.1 b/bench/log.15Jul25.lj.fixed.g++.1 new file mode 100644 index 0000000000..31581533aa --- /dev/null +++ b/bench/log.15Jul25.lj.fixed.g++.1 @@ -0,0 +1,88 @@ +LAMMPS (12 Jun 2025 - Development - patch_12Jun2025-808-g67067cbc80) + using 1 OpenMP thread(s) per MPI task +# 3d Lennard-Jones melt + 
+variable x index 1 +variable y index 1 +variable z index 1 + +variable xx equal 20*$x +variable xx equal 20*1 +variable yy equal 20*$y +variable yy equal 20*1 +variable zz equal 20*$z +variable zz equal 20*1 + +units lj +atom_style atomic + +lattice fcc 0.8442 +Lattice spacing in x,y,z = 1.6795962 1.6795962 1.6795962 +region box block 0 ${xx} 0 ${yy} 0 ${zz} +region box block 0 20 0 ${yy} 0 ${zz} +region box block 0 20 0 20 0 ${zz} +region box block 0 20 0 20 0 20 +create_box 1 box +Created orthogonal box = (0 0 0) to (33.591924 33.591924 33.591924) + 1 by 1 by 1 MPI processor grid +create_atoms 1 box +Created 32000 atoms + using lattice units in orthogonal box = (0 0 0) to (33.591924 33.591924 33.591924) + create_atoms CPU = 0.004 seconds +mass 1 1.0 + +velocity all create 1.44 87287 loop geom + +pair_style lj/cut 2.5 +pair_coeff 1 1 1.0 1.0 2.5 + +neighbor 0.3 bin +neigh_modify delay 0 every 20 check no + +fix 1 all nve + +run 100 +Generated 0 of 0 mixed pair_coeff terms from geometric mixing rule +Neighbor list info ... 
+ update: every = 20 steps, delay = 0 steps, check = no + max neighbors/atom: 2000, page size: 100000 + master list distance cutoff = 2.8 + ghost atom cutoff = 2.8 + binsize = 1.4, bins = 24 24 24 + 1 neighbor lists, perpetual/occasional/extra = 1 0 0 + (1) pair lj/cut, perpetual + attributes: half, newton on + pair build: half/bin/atomonly/newton + stencil: half/bin/3d + bin: standard +Per MPI rank memory allocation (min/avg/max) = 13.82 | 13.82 | 13.82 Mbytes + Step Temp E_pair E_mol TotEng Press + 0 1.44 -6.7733681 0 -4.6134356 -5.0197073 + 100 0.7574531 -5.7585055 0 -4.6223613 0.20726105 +Loop time of 0.852877 on 1 procs for 100 steps with 32000 atoms + +Performance: 50652.077 tau/day, 117.250 timesteps/s, 3.752 Matom-step/s +99.6% CPU use with 1 MPI tasks x 1 OpenMP threads + +MPI task timing breakdown: +Section | min time | avg time | max time |%varavg| %total +--------------------------------------------------------------- +Pair | 0.70475 | 0.70475 | 0.70475 | 0.0 | 82.63 +Neigh | 0.12731 | 0.12731 | 0.12731 | 0.0 | 14.93 +Comm | 0.0062962 | 0.0062962 | 0.0062962 | 0.0 | 0.74 +Output | 9.7908e-05 | 9.7908e-05 | 9.7908e-05 | 0.0 | 0.01 +Modify | 0.012837 | 0.012837 | 0.012837 | 0.0 | 1.51 +Other | | 0.001579 | | | 0.19 + +Nlocal: 32000 ave 32000 max 32000 min +Histogram: 1 0 0 0 0 0 0 0 0 0 +Nghost: 19657 ave 19657 max 19657 min +Histogram: 1 0 0 0 0 0 0 0 0 0 +Neighs: 1.20283e+06 ave 1.20283e+06 max 1.20283e+06 min +Histogram: 1 0 0 0 0 0 0 0 0 0 + +Total # of neighbors = 1202833 +Ave neighs/atom = 37.588531 +Neighbor list builds = 5 +Dangerous builds not checked +Total wall time: 0:00:00 diff --git a/bench/log.15Jul25.lj.fixed.g++.4 b/bench/log.15Jul25.lj.fixed.g++.4 new file mode 100644 index 0000000000..9bf03cb43a --- /dev/null +++ b/bench/log.15Jul25.lj.fixed.g++.4 @@ -0,0 +1,88 @@ +LAMMPS (12 Jun 2025 - Development - patch_12Jun2025-808-g67067cbc80) + using 1 OpenMP thread(s) per MPI task +# 3d Lennard-Jones melt + +variable x index 1 +variable y index 
1 +variable z index 1 + +variable xx equal 20*$x +variable xx equal 20*1 +variable yy equal 20*$y +variable yy equal 20*1 +variable zz equal 20*$z +variable zz equal 20*1 + +units lj +atom_style atomic + +lattice fcc 0.8442 +Lattice spacing in x,y,z = 1.6795962 1.6795962 1.6795962 +region box block 0 ${xx} 0 ${yy} 0 ${zz} +region box block 0 20 0 ${yy} 0 ${zz} +region box block 0 20 0 20 0 ${zz} +region box block 0 20 0 20 0 20 +create_box 1 box +Created orthogonal box = (0 0 0) to (33.591924 33.591924 33.591924) + 1 by 2 by 2 MPI processor grid +create_atoms 1 box +Created 32000 atoms + using lattice units in orthogonal box = (0 0 0) to (33.591924 33.591924 33.591924) + create_atoms CPU = 0.001 seconds +mass 1 1.0 + +velocity all create 1.44 87287 loop geom + +pair_style lj/cut 2.5 +pair_coeff 1 1 1.0 1.0 2.5 + +neighbor 0.3 bin +neigh_modify delay 0 every 20 check no + +fix 1 all nve + +run 100 +Generated 0 of 0 mixed pair_coeff terms from geometric mixing rule +Neighbor list info ... 
+ update: every = 20 steps, delay = 0 steps, check = no + max neighbors/atom: 2000, page size: 100000 + master list distance cutoff = 2.8 + ghost atom cutoff = 2.8 + binsize = 1.4, bins = 24 24 24 + 1 neighbor lists, perpetual/occasional/extra = 1 0 0 + (1) pair lj/cut, perpetual + attributes: half, newton on + pair build: half/bin/atomonly/newton + stencil: half/bin/3d + bin: standard +Per MPI rank memory allocation (min/avg/max) = 5.881 | 5.881 | 5.881 Mbytes + Step Temp E_pair E_mol TotEng Press + 0 1.44 -6.7733681 0 -4.6134356 -5.0197073 + 100 0.7574531 -5.7585055 0 -4.6223613 0.20726105 +Loop time of 0.230839 on 4 procs for 100 steps with 32000 atoms + +Performance: 187143.043 tau/day, 433.201 timesteps/s, 13.862 Matom-step/s +99.6% CPU use with 4 MPI tasks x 1 OpenMP threads + +MPI task timing breakdown: +Section | min time | avg time | max time |%varavg| %total +--------------------------------------------------------------- +Pair | 0.18211 | 0.18376 | 0.18584 | 0.3 | 79.60 +Neigh | 0.033629 | 0.033787 | 0.03418 | 0.1 | 14.64 +Comm | 0.007259 | 0.0092728 | 0.01113 | 1.5 | 4.02 +Output | 3.5578e-05 | 3.778e-05 | 4.1359e-05 | 0.0 | 0.02 +Modify | 0.0033806 | 0.0034297 | 0.0034832 | 0.1 | 1.49 +Other | | 0.0005558 | | | 0.24 + +Nlocal: 8000 ave 8037 max 7964 min +Histogram: 2 0 0 0 0 0 0 0 1 1 +Nghost: 9007.5 ave 9050 max 8968 min +Histogram: 1 1 0 0 0 0 0 1 0 1 +Neighs: 300708 ave 305113 max 297203 min +Histogram: 1 0 0 1 1 0 0 0 0 1 + +Total # of neighbors = 1202833 +Ave neighs/atom = 37.588531 +Neighbor list builds = 5 +Dangerous builds not checked +Total wall time: 0:00:00 diff --git a/bench/log.15Jul25.lj.scaled.g++.4 b/bench/log.15Jul25.lj.scaled.g++.4 new file mode 100644 index 0000000000..3354c1882c --- /dev/null +++ b/bench/log.15Jul25.lj.scaled.g++.4 @@ -0,0 +1,88 @@ +LAMMPS (12 Jun 2025 - Development - patch_12Jun2025-784-g8c564460e6-modified) + using 1 OpenMP thread(s) per MPI task +# 3d Lennard-Jones melt + +variable x index 1 +variable y index 1 
+variable z index 1 + +variable xx equal 20*$x +variable xx equal 20*2 +variable yy equal 20*$y +variable yy equal 20*2 +variable zz equal 20*$z +variable zz equal 20*1 + +units lj +atom_style atomic + +lattice fcc 0.8442 +Lattice spacing in x,y,z = 1.6795962 1.6795962 1.6795962 +region box block 0 ${xx} 0 ${yy} 0 ${zz} +region box block 0 40 0 ${yy} 0 ${zz} +region box block 0 40 0 40 0 ${zz} +region box block 0 40 0 40 0 20 +create_box 1 box +Created orthogonal box = (0 0 0) to (67.183848 67.183848 33.591924) + 2 by 2 by 1 MPI processor grid +create_atoms 1 box +Created 128000 atoms + using lattice units in orthogonal box = (0 0 0) to (67.183848 67.183848 33.591924) + create_atoms CPU = 0.004 seconds +mass 1 1.0 + +velocity all create 1.44 87287 loop geom + +pair_style lj/cut 2.5 +pair_coeff 1 1 1.0 1.0 2.5 + +neighbor 0.3 bin +neigh_modify delay 0 every 20 check no + +fix 1 all nve + +run 100 +Generated 0 of 0 mixed pair_coeff terms from geometric mixing rule +Neighbor list info ... + update: every = 20 steps, delay = 0 steps, check = no + max neighbors/atom: 2000, page size: 100000 + master list distance cutoff = 2.8 + ghost atom cutoff = 2.8 + binsize = 1.4, bins = 48 48 24 + 1 neighbor lists, perpetual/occasional/extra = 1 0 0 + (1) pair lj/cut, perpetual + attributes: half, newton on + pair build: half/bin/atomonly/newton + stencil: half/bin/3d + bin: standard +Per MPI rank memory allocation (min/avg/max) = 14.12 | 14.12 | 14.12 Mbytes + Step Temp E_pair E_mol TotEng Press + 0 1.44 -6.7733681 0 -4.6133849 -5.0196788 + 100 0.75841891 -5.759957 0 -4.6223375 0.20008866 +Loop time of 0.961718 on 4 procs for 100 steps with 128000 atoms + +Performance: 44919.610 tau/day, 103.981 timesteps/s, 13.310 Matom-step/s +99.5% CPU use with 4 MPI tasks x 1 OpenMP threads + +MPI task timing breakdown: +Section | min time | avg time | max time |%varavg| %total +--------------------------------------------------------------- +Pair | 0.76816 | 0.7701 | 0.77277 | 0.2 | 80.08 
+Neigh | 0.13077 | 0.13132 | 0.13191 | 0.1 | 13.65 +Comm | 0.026232 | 0.02948 | 0.03169 | 1.2 | 3.07 +Output | 0.00011422 | 0.00012437 | 0.0001401 | 0.0 | 0.01 +Modify | 0.026175 | 0.026553 | 0.026857 | 0.2 | 2.76 +Other | | 0.004141 | | | 0.43 + +Nlocal: 32000 ave 32060 max 31939 min +Histogram: 1 0 1 0 0 0 0 1 0 1 +Nghost: 19630.8 ave 19681 max 19562 min +Histogram: 1 0 0 0 1 0 0 0 1 1 +Neighs: 1.20195e+06 ave 1.20354e+06 max 1.19931e+06 min +Histogram: 1 0 0 0 0 0 0 2 0 1 + +Total # of neighbors = 4807797 +Ave neighs/atom = 37.560914 +Neighbor list builds = 5 +Dangerous builds not checked +Total wall time: 0:00:01 diff --git a/bench/log.15Jul25.rhodo.fixed.g++.1 b/bench/log.15Jul25.rhodo.fixed.g++.1 new file mode 100644 index 0000000000..e3e7b29e37 --- /dev/null +++ b/bench/log.15Jul25.rhodo.fixed.g++.1 @@ -0,0 +1,139 @@ +LAMMPS (12 Jun 2025 - Development - patch_12Jun2025-808-g67067cbc80) + using 1 OpenMP thread(s) per MPI task +# Rhodopsin model + +units real +neigh_modify delay 5 every 1 + +atom_style full +bond_style harmonic +angle_style charmm +dihedral_style charmm +improper_style harmonic +pair_style lj/charmm/coul/long 8.0 10.0 +pair_modify mix arithmetic +kspace_style pppm 1e-4 + +read_data data.rhodo +Reading data file ... + orthogonal box = (-27.5 -38.5 -36.3646) to (27.5 38.5 36.3615) + 1 by 1 by 1 MPI processor grid + reading atoms ... + 32000 atoms + reading velocities ... + 32000 velocities + scanning bonds ... + 4 = max bonds/atom + scanning angles ... + 8 = max angles/atom + scanning dihedrals ... + 18 = max dihedrals/atom + scanning impropers ... + 2 = max impropers/atom + orthogonal box = (-27.5 -38.5 -36.3646) to (27.5 38.5 36.3615) + 1 by 1 by 1 MPI processor grid + reading bonds ... + 27723 bonds + reading angles ... + 40467 angles + reading dihedrals ... + 56829 dihedrals + reading impropers ... + 1034 impropers +Finding 1-2 1-3 1-4 neighbors ... 
+ special bond factors lj: 0 0 0 + special bond factors coul: 0 0 0 + 4 = max # of 1-2 neighbors + 12 = max # of 1-3 neighbors + 24 = max # of 1-4 neighbors + 26 = max # of special neighbors + special bonds CPU = 0.009 seconds + read_data CPU = 0.236 seconds + +fix 1 all shake 0.0001 5 0 m 1.0 a 232 +Finding SHAKE clusters ... + 1617 = # of size 2 clusters + 3633 = # of size 3 clusters + 747 = # of size 4 clusters + 4233 = # of frozen angles + find clusters CPU = 0.006 seconds +fix 2 all npt temp 300.0 300.0 100.0 z 0.0 0.0 1000.0 mtk no pchain 0 tchain 1 + +special_bonds charmm + +thermo 50 +thermo_style multi +timestep 2.0 + +run 100 +PPPM initialization ... + using 12-bit tables for long-range coulomb + G vector (1/distance) = 0.24883488 + grid = 25 32 32 + stencil order = 5 + estimated absolute RMS force accuracy = 0.035547797 + estimated relative force accuracy = 0.00010705113 + using double precision FFTW3 + 3d grid and FFT values/proc = 41070 25600 +Generated 2278 of 2278 mixed pair_coeff terms from arithmetic mixing rule +Neighbor list info ... 
+ update: every = 1 steps, delay = 5 steps, check = yes + max neighbors/atom: 2000, page size: 100000 + master list distance cutoff = 12 + ghost atom cutoff = 12 + binsize = 6, bins = 10 13 13 + 1 neighbor lists, perpetual/occasional/extra = 1 0 0 + (1) pair lj/charmm/coul/long, perpetual + attributes: half, newton on + pair build: half/bin/newton + stencil: half/bin/3d + bin: standard +Per MPI rank memory allocation (min/avg/max) = 140 | 140 | 140 Mbytes +------------ Step 0 ----- CPU = 0 (sec) ------------- +TotEng = -25356.2057 KinEng = 21444.8313 Temp = 299.0397 +PotEng = -46801.0370 E_bond = 2537.9940 E_angle = 10921.3742 +E_dihed = 5211.7865 E_impro = 213.5116 E_vdwl = -2307.8634 +E_coul = 207025.8934 E_long = -270403.7333 Press = -149.3300 +Volume = 307995.0335 +------------ Step 50 ----- CPU = 6.946771 (sec) ------------- +TotEng = -25330.0307 KinEng = 21501.0009 Temp = 299.8229 +PotEng = -46831.0316 E_bond = 2471.7035 E_angle = 10836.5102 +E_dihed = 5239.6319 E_impro = 227.1218 E_vdwl = -1993.2873 +E_coul = 206797.6807 E_long = -270410.3925 Press = 237.6572 +Volume = 308031.6762 +------------ Step 100 ----- CPU = 14.20402 (sec) ------------- +TotEng = -25290.7364 KinEng = 21591.9089 Temp = 301.0906 +PotEng = -46882.6454 E_bond = 2567.9807 E_angle = 10781.9571 +E_dihed = 5198.7492 E_impro = 216.7864 E_vdwl = -1902.6616 +E_coul = 206659.5159 E_long = -270404.9730 Press = 6.7352 +Volume = 308134.2286 +Loop time of 14.2041 on 1 procs for 100 steps with 32000 atoms + +Performance: 1.217 ns/day, 19.728 hours/ns, 7.040 timesteps/s, 225.288 katom-step/s +99.6% CPU use with 1 MPI tasks x 1 OpenMP threads + +MPI task timing breakdown: +Section | min time | avg time | max time |%varavg| %total +--------------------------------------------------------------- +Pair | 11.001 | 11.001 | 11.001 | 0.0 | 77.45 +Bond | 0.38231 | 0.38231 | 0.38231 | 0.0 | 2.69 +Kspace | 0.6111 | 0.6111 | 0.6111 | 0.0 | 4.30 +Neigh | 1.891 | 1.891 | 1.891 | 0.0 | 13.31 +Comm | 0.021749 | 
0.021749 | 0.021749 | 0.0 | 0.15 +Output | 0.00021602 | 0.00021602 | 0.00021602 | 0.0 | 0.00 +Modify | 0.29015 | 0.29015 | 0.29015 | 0.0 | 2.04 +Other | | 0.006949 | | | 0.05 + +Nlocal: 32000 ave 32000 max 32000 min +Histogram: 1 0 0 0 0 0 0 0 0 0 +Nghost: 47958 ave 47958 max 47958 min +Histogram: 1 0 0 0 0 0 0 0 0 0 +Neighs: 1.20281e+07 ave 1.20281e+07 max 1.20281e+07 min +Histogram: 1 0 0 0 0 0 0 0 0 0 + +Total # of neighbors = 12028093 +Ave neighs/atom = 375.87791 +Ave special neighs/atom = 7.431875 +Neighbor list builds = 11 +Dangerous builds = 0 +Total wall time: 0:00:14 diff --git a/bench/log.15Jul25.rhodo.fixed.g++.4 b/bench/log.15Jul25.rhodo.fixed.g++.4 new file mode 100644 index 0000000000..4defa96dbe --- /dev/null +++ b/bench/log.15Jul25.rhodo.fixed.g++.4 @@ -0,0 +1,139 @@ +LAMMPS (12 Jun 2025 - Development - patch_12Jun2025-808-g67067cbc80) + using 1 OpenMP thread(s) per MPI task +# Rhodopsin model + +units real +neigh_modify delay 5 every 1 + +atom_style full +bond_style harmonic +angle_style charmm +dihedral_style charmm +improper_style harmonic +pair_style lj/charmm/coul/long 8.0 10.0 +pair_modify mix arithmetic +kspace_style pppm 1e-4 + +read_data data.rhodo +Reading data file ... + orthogonal box = (-27.5 -38.5 -36.3646) to (27.5 38.5 36.3615) + 1 by 2 by 2 MPI processor grid + reading atoms ... + 32000 atoms + reading velocities ... + 32000 velocities + scanning bonds ... + 4 = max bonds/atom + scanning angles ... + 8 = max angles/atom + scanning dihedrals ... + 18 = max dihedrals/atom + scanning impropers ... + 2 = max impropers/atom + orthogonal box = (-27.5 -38.5 -36.3646) to (27.5 38.5 36.3615) + 1 by 2 by 2 MPI processor grid + reading bonds ... + 27723 bonds + reading angles ... + 40467 angles + reading dihedrals ... + 56829 dihedrals + reading impropers ... + 1034 impropers +Finding 1-2 1-3 1-4 neighbors ... 
+ special bond factors lj: 0 0 0 + special bond factors coul: 0 0 0 + 4 = max # of 1-2 neighbors + 12 = max # of 1-3 neighbors + 24 = max # of 1-4 neighbors + 26 = max # of special neighbors + special bonds CPU = 0.004 seconds + read_data CPU = 0.218 seconds + +fix 1 all shake 0.0001 5 0 m 1.0 a 232 +Finding SHAKE clusters ... + 1617 = # of size 2 clusters + 3633 = # of size 3 clusters + 747 = # of size 4 clusters + 4233 = # of frozen angles + find clusters CPU = 0.002 seconds +fix 2 all npt temp 300.0 300.0 100.0 z 0.0 0.0 1000.0 mtk no pchain 0 tchain 1 + +special_bonds charmm + +thermo 50 +thermo_style multi +timestep 2.0 + +run 100 +PPPM initialization ... + using 12-bit tables for long-range coulomb + G vector (1/distance) = 0.24883488 + grid = 25 32 32 + stencil order = 5 + estimated absolute RMS force accuracy = 0.035547797 + estimated relative force accuracy = 0.00010705113 + using double precision FFTW3 + 3d grid and FFT values/proc = 13230 6400 +Generated 2278 of 2278 mixed pair_coeff terms from arithmetic mixing rule +Neighbor list info ... 
+ update: every = 1 steps, delay = 5 steps, check = yes + max neighbors/atom: 2000, page size: 100000 + master list distance cutoff = 12 + ghost atom cutoff = 12 + binsize = 6, bins = 10 13 13 + 1 neighbor lists, perpetual/occasional/extra = 1 0 0 + (1) pair lj/charmm/coul/long, perpetual + attributes: half, newton on + pair build: half/bin/newton + stencil: half/bin/3d + bin: standard +Per MPI rank memory allocation (min/avg/max) = 49.25 | 49.35 | 49.64 Mbytes +------------ Step 0 ----- CPU = 0 (sec) ------------- +TotEng = -25356.2057 KinEng = 21444.8313 Temp = 299.0397 +PotEng = -46801.0370 E_bond = 2537.9940 E_angle = 10921.3742 +E_dihed = 5211.7865 E_impro = 213.5116 E_vdwl = -2307.8634 +E_coul = 207025.8934 E_long = -270403.7333 Press = -149.3300 +Volume = 307995.0335 +------------ Step 50 ----- CPU = 1.894152 (sec) ------------- +TotEng = -25330.0307 KinEng = 21501.0009 Temp = 299.8229 +PotEng = -46831.0316 E_bond = 2471.7035 E_angle = 10836.5102 +E_dihed = 5239.6319 E_impro = 227.1218 E_vdwl = -1993.2873 +E_coul = 206797.6807 E_long = -270410.3925 Press = 237.6572 +Volume = 308031.6762 +------------ Step 100 ----- CPU = 3.886163 (sec) ------------- +TotEng = -25290.7364 KinEng = 21591.9089 Temp = 301.0906 +PotEng = -46882.6453 E_bond = 2567.9807 E_angle = 10781.9571 +E_dihed = 5198.7492 E_impro = 216.7864 E_vdwl = -1902.6616 +E_coul = 206659.5159 E_long = -270404.9730 Press = 6.7352 +Volume = 308134.2286 +Loop time of 3.8862 on 4 procs for 100 steps with 32000 atoms + +Performance: 4.447 ns/day, 5.397 hours/ns, 25.732 timesteps/s, 823.427 katom-step/s +99.3% CPU use with 4 MPI tasks x 1 OpenMP threads + +MPI task timing breakdown: +Section | min time | avg time | max time |%varavg| %total +--------------------------------------------------------------- +Pair | 2.8294 | 2.8755 | 2.9708 | 3.3 | 73.99 +Bond | 0.092202 | 0.096409 | 0.10326 | 1.3 | 2.48 +Kspace | 0.18009 | 0.27454 | 0.32256 | 10.6 | 7.06 +Neigh | 0.5049 | 0.50495 | 0.50501 | 0.0 | 12.99 +Comm | 
0.029701 | 0.029904 | 0.030144 | 0.1 | 0.77 +Output | 0.00010524 | 0.00010923 | 0.00012092 | 0.0 | 0.00 +Modify | 0.098097 | 0.09846 | 0.098727 | 0.1 | 2.53 +Other | | 0.006314 | | | 0.16 + +Nlocal: 8000 ave 8143 max 7933 min +Histogram: 1 2 0 0 0 0 0 0 0 1 +Nghost: 22733.5 ave 22769 max 22693 min +Histogram: 1 0 0 0 0 2 0 0 0 1 +Neighs: 3.00702e+06 ave 3.0975e+06 max 2.96492e+06 min +Histogram: 1 2 0 0 0 0 0 0 0 1 + +Total # of neighbors = 12028093 +Ave neighs/atom = 375.87791 +Ave special neighs/atom = 7.431875 +Neighbor list builds = 11 +Dangerous builds = 0 +Total wall time: 0:00:04 diff --git a/bench/log.15Jul25.rhodo.scaled.g++.4 b/bench/log.15Jul25.rhodo.scaled.g++.4 new file mode 100644 index 0000000000..37fbce6468 --- /dev/null +++ b/bench/log.15Jul25.rhodo.scaled.g++.4 @@ -0,0 +1,139 @@ +LAMMPS (12 Jun 2025 - Development - patch_12Jun2025-784-g8c564460e6-modified) + using 1 OpenMP thread(s) per MPI task +# Rhodopsin model + +units real +neigh_modify delay 5 every 1 + +atom_style full +bond_style harmonic +angle_style charmm +dihedral_style charmm +improper_style harmonic +pair_style lj/charmm/coul/long 8.0 10.0 +pair_modify mix arithmetic +kspace_style pppm 1e-4 + +read_data data.rhodo +Reading data file ... + orthogonal box = (-27.5 -38.5 -36.3646) to (27.5 38.5 36.3615) + 1 by 2 by 2 MPI processor grid + reading atoms ... + 32000 atoms + reading velocities ... + 32000 velocities + scanning bonds ... + 4 = max bonds/atom + scanning angles ... + 8 = max angles/atom + scanning dihedrals ... + 18 = max dihedrals/atom + scanning impropers ... + 2 = max impropers/atom + orthogonal box = (-27.5 -38.5 -36.3646) to (27.5 38.5 36.3615) + 1 by 2 by 2 MPI processor grid + reading bonds ... + 27723 bonds + reading angles ... + 40467 angles + reading dihedrals ... + 56829 dihedrals + reading impropers ... + 1034 impropers +Finding 1-2 1-3 1-4 neighbors ... 
+ special bond factors lj: 0 0 0 + special bond factors coul: 0 0 0 + 4 = max # of 1-2 neighbors + 12 = max # of 1-3 neighbors + 24 = max # of 1-4 neighbors + 26 = max # of special neighbors + special bonds CPU = 0.003 seconds + read_data CPU = 0.221 seconds + +fix 1 all shake 0.0001 5 0 m 1.0 a 232 +Finding SHAKE clusters ... + 1617 = # of size 2 clusters + 3633 = # of size 3 clusters + 747 = # of size 4 clusters + 4233 = # of frozen angles + find clusters CPU = 0.002 seconds +fix 2 all npt temp 300.0 300.0 100.0 z 0.0 0.0 1000.0 mtk no pchain 0 tchain 1 + +special_bonds charmm + +thermo 50 +thermo_style multi +timestep 2.0 + +run 100 +PPPM initialization ... + using 12-bit tables for long-range coulomb + G vector (1/distance) = 0.24883488 + grid = 25 32 32 + stencil order = 5 + estimated absolute RMS force accuracy = 0.035547797 + estimated relative force accuracy = 0.00010705113 + using double precision FFTW3 + 3d grid and FFT values/proc = 13230 6400 +Generated 2278 of 2278 mixed pair_coeff terms from arithmetic mixing rule +Neighbor list info ... 
+ update: every = 1 steps, delay = 5 steps, check = yes + max neighbors/atom: 2000, page size: 100000 + master list distance cutoff = 12 + ghost atom cutoff = 12 + binsize = 6, bins = 10 13 13 + 1 neighbor lists, perpetual/occasional/extra = 1 0 0 + (1) pair lj/charmm/coul/long, perpetual + attributes: half, newton on + pair build: half/bin/newton + stencil: half/bin/3d + bin: standard +Per MPI rank memory allocation (min/avg/max) = 49.25 | 49.35 | 49.64 Mbytes +------------ Step 0 ----- CPU = 0 (sec) ------------- +TotEng = -25356.2057 KinEng = 21444.8313 Temp = 299.0397 +PotEng = -46801.0370 E_bond = 2537.9940 E_angle = 10921.3742 +E_dihed = 5211.7865 E_impro = 213.5116 E_vdwl = -2307.8634 +E_coul = 207025.8934 E_long = -270403.7333 Press = -149.3300 +Volume = 307995.0335 +------------ Step 50 ----- CPU = 1.889623 (sec) ------------- +TotEng = -25330.0307 KinEng = 21501.0009 Temp = 299.8229 +PotEng = -46831.0316 E_bond = 2471.7035 E_angle = 10836.5102 +E_dihed = 5239.6319 E_impro = 227.1218 E_vdwl = -1993.2873 +E_coul = 206797.6807 E_long = -270410.3925 Press = 237.6572 +Volume = 308031.6762 +------------ Step 100 ----- CPU = 3.870869 (sec) ------------- +TotEng = -25290.7364 KinEng = 21591.9089 Temp = 301.0906 +PotEng = -46882.6453 E_bond = 2567.9807 E_angle = 10781.9571 +E_dihed = 5198.7492 E_impro = 216.7864 E_vdwl = -1902.6616 +E_coul = 206659.5159 E_long = -270404.9730 Press = 6.7352 +Volume = 308134.2286 +Loop time of 3.8709 on 4 procs for 100 steps with 32000 atoms + +Performance: 4.464 ns/day, 5.376 hours/ns, 25.834 timesteps/s, 826.680 katom-step/s +99.3% CPU use with 4 MPI tasks x 1 OpenMP threads + +MPI task timing breakdown: +Section | min time | avg time | max time |%varavg| %total +--------------------------------------------------------------- +Pair | 2.8153 | 2.8576 | 2.9543 | 3.4 | 73.82 +Bond | 0.092732 | 0.096503 | 0.10317 | 1.3 | 2.49 +Kspace | 0.17705 | 0.27418 | 0.31978 | 10.9 | 7.08 +Neigh | 0.50993 | 0.51001 | 0.51006 | 0.0 | 13.18 +Comm | 
0.028631 | 0.028776 | 0.028899 | 0.1 | 0.74 +Output | 9.7056e-05 | 0.00010098 | 0.00011123 | 0.0 | 0.00 +Modify | 0.09746 | 0.097676 | 0.098001 | 0.1 | 2.52 +Other | | 0.006022 | | | 0.16 + +Nlocal: 8000 ave 8143 max 7933 min +Histogram: 1 2 0 0 0 0 0 0 0 1 +Nghost: 22733.5 ave 22769 max 22693 min +Histogram: 1 0 0 0 0 2 0 0 0 1 +Neighs: 3.00702e+06 ave 3.0975e+06 max 2.96492e+06 min +Histogram: 1 2 0 0 0 0 0 0 0 1 + +Total # of neighbors = 12028093 +Ave neighs/atom = 375.87791 +Ave special neighs/atom = 7.431875 +Neighbor list builds = 11 +Dangerous builds = 0 +Total wall time: 0:00:04 diff --git a/bench/log.6Oct16.chain.fixed.icc.1 b/bench/log.6Oct16.chain.fixed.icc.1 deleted file mode 100644 index d1279b8ca1..0000000000 --- a/bench/log.6Oct16.chain.fixed.icc.1 +++ /dev/null @@ -1,78 +0,0 @@ -LAMMPS (6 Oct 2016) -# FENE beadspring benchmark - -units lj -atom_style bond -special_bonds fene - -read_data data.chain - orthogonal box = (-16.796 -16.796 -16.796) to (16.796 16.796 16.796) - 1 by 1 by 1 MPI processor grid - reading atoms ... - 32000 atoms - reading velocities ... - 32000 velocities - scanning bonds ... - 1 = max bonds/atom - reading bonds ... - 31680 bonds - 2 = max # of 1-2 neighbors - 2 = max # of special neighbors - -neighbor 0.4 bin -neigh_modify every 1 delay 1 - -bond_style fene -bond_coeff 1 30.0 1.5 1.0 1.0 - -pair_style lj/cut 1.12 -pair_modify shift yes -pair_coeff 1 1 1.0 1.0 1.12 - -fix 1 all nve -fix 2 all langevin 1.0 1.0 10.0 904297 - -thermo 100 -timestep 0.012 - -run 100 -Neighbor list info ... 
- 1 neighbor list requests - update every 1 steps, delay 1 steps, check yes - max neighbors/atom: 2000, page size: 100000 - master list distance cutoff = 1.52 - ghost atom cutoff = 1.52 - binsize = 0.76 -> bins = 45 45 45 -Memory usage per processor = 12.0423 Mbytes -Step Temp E_pair E_mol TotEng Press - 0 0.97029772 0.44484087 20.494523 22.394765 4.6721833 - 100 0.9729966 0.4361122 20.507698 22.40326 4.6548819 -Loop time of 0.977647 on 1 procs for 100 steps with 32000 atoms - -Performance: 106050.541 tau/day, 102.286 timesteps/s -99.9% CPU use with 1 MPI tasks x no OpenMP threads - -MPI task timing breakdown: -Section | min time | avg time | max time |%varavg| %total ---------------------------------------------------------------- -Pair | 0.19421 | 0.19421 | 0.19421 | 0.0 | 19.86 -Bond | 0.08741 | 0.08741 | 0.08741 | 0.0 | 8.94 -Neigh | 0.45791 | 0.45791 | 0.45791 | 0.0 | 46.84 -Comm | 0.032649 | 0.032649 | 0.032649 | 0.0 | 3.34 -Output | 0.00012207 | 0.00012207 | 0.00012207 | 0.0 | 0.01 -Modify | 0.18071 | 0.18071 | 0.18071 | 0.0 | 18.48 -Other | | 0.02464 | | | 2.52 - -Nlocal: 32000 ave 32000 max 32000 min -Histogram: 1 0 0 0 0 0 0 0 0 0 -Nghost: 9493 ave 9493 max 9493 min -Histogram: 1 0 0 0 0 0 0 0 0 0 -Neighs: 155873 ave 155873 max 155873 min -Histogram: 1 0 0 0 0 0 0 0 0 0 - -Total # of neighbors = 155873 -Ave neighs/atom = 4.87103 -Ave special neighs/atom = 1.98 -Neighbor list builds = 25 -Dangerous builds = 0 -Total wall time: 0:00:01 diff --git a/bench/log.6Oct16.chain.fixed.icc.4 b/bench/log.6Oct16.chain.fixed.icc.4 deleted file mode 100644 index ce088d20a6..0000000000 --- a/bench/log.6Oct16.chain.fixed.icc.4 +++ /dev/null @@ -1,78 +0,0 @@ -LAMMPS (6 Oct 2016) -# FENE beadspring benchmark - -units lj -atom_style bond -special_bonds fene - -read_data data.chain - orthogonal box = (-16.796 -16.796 -16.796) to (16.796 16.796 16.796) - 1 by 2 by 2 MPI processor grid - reading atoms ... - 32000 atoms - reading velocities ... 
- 32000 velocities - scanning bonds ... - 1 = max bonds/atom - reading bonds ... - 31680 bonds - 2 = max # of 1-2 neighbors - 2 = max # of special neighbors - -neighbor 0.4 bin -neigh_modify every 1 delay 1 - -bond_style fene -bond_coeff 1 30.0 1.5 1.0 1.0 - -pair_style lj/cut 1.12 -pair_modify shift yes -pair_coeff 1 1 1.0 1.0 1.12 - -fix 1 all nve -fix 2 all langevin 1.0 1.0 10.0 904297 - -thermo 100 -timestep 0.012 - -run 100 -Neighbor list info ... - 1 neighbor list requests - update every 1 steps, delay 1 steps, check yes - max neighbors/atom: 2000, page size: 100000 - master list distance cutoff = 1.52 - ghost atom cutoff = 1.52 - binsize = 0.76 -> bins = 45 45 45 -Memory usage per processor = 4.14663 Mbytes -Step Temp E_pair E_mol TotEng Press - 0 0.97029772 0.44484087 20.494523 22.394765 4.6721833 - 100 0.97145835 0.43803883 20.502691 22.397872 4.626988 -Loop time of 0.269205 on 4 procs for 100 steps with 32000 atoms - -Performance: 385133.446 tau/day, 371.464 timesteps/s -99.8% CPU use with 4 MPI tasks x no OpenMP threads - -MPI task timing breakdown: -Section | min time | avg time | max time |%varavg| %total ---------------------------------------------------------------- -Pair | 0.049383 | 0.049756 | 0.049988 | 0.1 | 18.48 -Bond | 0.022701 | 0.022813 | 0.022872 | 0.0 | 8.47 -Neigh | 0.11982 | 0.12002 | 0.12018 | 0.0 | 44.58 -Comm | 0.020274 | 0.021077 | 0.022348 | 0.5 | 7.83 -Output | 5.3167e-05 | 5.6148e-05 | 6.3181e-05 | 0.1 | 0.02 -Modify | 0.046276 | 0.046809 | 0.047016 | 0.1 | 17.39 -Other | | 0.008669 | | | 3.22 - -Nlocal: 8000 ave 8030 max 7974 min -Histogram: 1 0 0 1 0 1 0 0 0 1 -Nghost: 4177 ave 4191 max 4160 min -Histogram: 1 0 0 0 1 0 0 1 0 1 -Neighs: 38995.8 ave 39169 max 38852 min -Histogram: 1 0 0 1 1 0 0 0 0 1 - -Total # of neighbors = 155983 -Ave neighs/atom = 4.87447 -Ave special neighs/atom = 1.98 -Neighbor list builds = 25 -Dangerous builds = 0 -Total wall time: 0:00:00 diff --git a/bench/log.6Oct16.chain.scaled.icc.4 
b/bench/log.6Oct16.chain.scaled.icc.4 deleted file mode 100644 index 2f2d47d78b..0000000000 --- a/bench/log.6Oct16.chain.scaled.icc.4 +++ /dev/null @@ -1,94 +0,0 @@ -LAMMPS (6 Oct 2016) -# FENE beadspring benchmark - -variable x index 1 -variable y index 1 -variable z index 1 - -units lj -atom_style bond -atom_modify map hash -special_bonds fene - -read_data data.chain - orthogonal box = (-16.796 -16.796 -16.796) to (16.796 16.796 16.796) - 1 by 2 by 2 MPI processor grid - reading atoms ... - 32000 atoms - reading velocities ... - 32000 velocities - scanning bonds ... - 1 = max bonds/atom - reading bonds ... - 31680 bonds - 2 = max # of 1-2 neighbors - 2 = max # of special neighbors - -replicate $x $y $z -replicate 2 $y $z -replicate 2 2 $z -replicate 2 2 1 - orthogonal box = (-16.796 -16.796 -16.796) to (50.388 50.388 16.796) - 2 by 2 by 1 MPI processor grid - 128000 atoms - 126720 bonds - 2 = max # of 1-2 neighbors - 2 = max # of special neighbors - -neighbor 0.4 bin -neigh_modify every 1 delay 1 - -bond_style fene -bond_coeff 1 30.0 1.5 1.0 1.0 - -pair_style lj/cut 1.12 -pair_modify shift yes -pair_coeff 1 1 1.0 1.0 1.12 - -fix 1 all nve -fix 2 all langevin 1.0 1.0 10.0 904297 - -thermo 100 -timestep 0.012 - -run 100 -Neighbor list info ... 
- 1 neighbor list requests - update every 1 steps, delay 1 steps, check yes - max neighbors/atom: 2000, page size: 100000 - master list distance cutoff = 1.52 - ghost atom cutoff = 1.52 - binsize = 0.76 -> bins = 89 89 45 -Memory usage per processor = 13.2993 Mbytes -Step Temp E_pair E_mol TotEng Press - 0 0.97027498 0.44484087 20.494523 22.394765 4.6721833 - 100 0.97682955 0.44239968 20.500229 22.407862 4.6527025 -Loop time of 1.14845 on 4 procs for 100 steps with 128000 atoms - -Performance: 90277.919 tau/day, 87.074 timesteps/s -99.9% CPU use with 4 MPI tasks x no OpenMP threads - -MPI task timing breakdown: -Section | min time | avg time | max time |%varavg| %total ---------------------------------------------------------------- -Pair | 0.2203 | 0.22207 | 0.22386 | 0.3 | 19.34 -Bond | 0.094861 | 0.095302 | 0.095988 | 0.1 | 8.30 -Neigh | 0.52127 | 0.5216 | 0.52189 | 0.0 | 45.42 -Comm | 0.079585 | 0.082159 | 0.084366 | 0.7 | 7.15 -Output | 0.00013304 | 0.00015306 | 0.00018501 | 0.2 | 0.01 -Modify | 0.18351 | 0.18419 | 0.1856 | 0.2 | 16.04 -Other | | 0.04298 | | | 3.74 - -Nlocal: 32000 ave 32015 max 31983 min -Histogram: 1 0 1 0 0 0 0 0 1 1 -Nghost: 9492 ave 9522 max 9432 min -Histogram: 1 0 0 0 0 0 1 0 0 2 -Neighs: 155837 ave 156079 max 155506 min -Histogram: 1 0 0 0 0 1 0 0 1 1 - -Total # of neighbors = 623349 -Ave neighs/atom = 4.86991 -Ave special neighs/atom = 1.98 -Neighbor list builds = 25 -Dangerous builds = 0 -Total wall time: 0:00:01 diff --git a/bench/log.6Oct16.chute.fixed.icc.1 b/bench/log.6Oct16.chute.fixed.icc.1 deleted file mode 100644 index 9f53d44092..0000000000 --- a/bench/log.6Oct16.chute.fixed.icc.1 +++ /dev/null @@ -1,80 +0,0 @@ -LAMMPS (6 Oct 2016) -# LAMMPS benchmark of granular flow -# chute flow of 32000 atoms with frozen base at 26 degrees - -units lj -atom_style sphere -boundary p p fs -newton off -comm_modify vel yes - -read_data data.chute - orthogonal box = (0 0 0) to (40 20 37.2886) - 1 by 1 by 1 MPI processor grid - reading atoms 
... - 32000 atoms - reading velocities ... - 32000 velocities - -pair_style gran/hooke/history 200000.0 NULL 50.0 NULL 0.5 0 -pair_coeff * * - -neighbor 0.1 bin -neigh_modify every 1 delay 0 - -timestep 0.0001 - -group bottom type 2 -912 atoms in group bottom -group active subtract all bottom -31088 atoms in group active -neigh_modify exclude group bottom bottom - -fix 1 all gravity 1.0 chute 26.0 -fix 2 bottom freeze -fix 3 active nve/sphere - -compute 1 all erotate/sphere -thermo_style custom step atoms ke c_1 vol -thermo_modify norm no -thermo 100 - -run 100 -Neighbor list info ... - 2 neighbor list requests - update every 1 steps, delay 0 steps, check yes - max neighbors/atom: 2000, page size: 100000 - master list distance cutoff = 1.1 - ghost atom cutoff = 1.1 - binsize = 0.55 -> bins = 73 37 68 -Memory usage per processor = 16.0904 Mbytes -Step Atoms KinEng c_1 Volume - 0 32000 784139.13 1601.1263 29833.783 - 100 32000 784292.08 1571.0968 29834.707 -Loop time of 0.534174 on 1 procs for 100 steps with 32000 atoms - -Performance: 1617.451 tau/day, 187.205 timesteps/s -99.8% CPU use with 1 MPI tasks x no OpenMP threads - -MPI task timing breakdown: -Section | min time | avg time | max time |%varavg| %total ---------------------------------------------------------------- -Pair | 0.33346 | 0.33346 | 0.33346 | 0.0 | 62.43 -Neigh | 0.043902 | 0.043902 | 0.043902 | 0.0 | 8.22 -Comm | 0.018391 | 0.018391 | 0.018391 | 0.0 | 3.44 -Output | 0.00022411 | 0.00022411 | 0.00022411 | 0.0 | 0.04 -Modify | 0.11666 | 0.11666 | 0.11666 | 0.0 | 21.84 -Other | | 0.02153 | | | 4.03 - -Nlocal: 32000 ave 32000 max 32000 min -Histogram: 1 0 0 0 0 0 0 0 0 0 -Nghost: 5463 ave 5463 max 5463 min -Histogram: 1 0 0 0 0 0 0 0 0 0 -Neighs: 115133 ave 115133 max 115133 min -Histogram: 1 0 0 0 0 0 0 0 0 0 - -Total # of neighbors = 115133 -Ave neighs/atom = 3.59791 -Neighbor list builds = 2 -Dangerous builds = 0 -Total wall time: 0:00:00 diff --git a/bench/log.6Oct16.chute.fixed.icc.4 
b/bench/log.6Oct16.chute.fixed.icc.4 deleted file mode 100644 index a75a7c1f01..0000000000 --- a/bench/log.6Oct16.chute.fixed.icc.4 +++ /dev/null @@ -1,80 +0,0 @@ -LAMMPS (6 Oct 2016) -# LAMMPS benchmark of granular flow -# chute flow of 32000 atoms with frozen base at 26 degrees - -units lj -atom_style sphere -boundary p p fs -newton off -comm_modify vel yes - -read_data data.chute - orthogonal box = (0 0 0) to (40 20 37.2886) - 2 by 1 by 2 MPI processor grid - reading atoms ... - 32000 atoms - reading velocities ... - 32000 velocities - -pair_style gran/hooke/history 200000.0 NULL 50.0 NULL 0.5 0 -pair_coeff * * - -neighbor 0.1 bin -neigh_modify every 1 delay 0 - -timestep 0.0001 - -group bottom type 2 -912 atoms in group bottom -group active subtract all bottom -31088 atoms in group active -neigh_modify exclude group bottom bottom - -fix 1 all gravity 1.0 chute 26.0 -fix 2 bottom freeze -fix 3 active nve/sphere - -compute 1 all erotate/sphere -thermo_style custom step atoms ke c_1 vol -thermo_modify norm no -thermo 100 - -run 100 -Neighbor list info ... 
- 2 neighbor list requests - update every 1 steps, delay 0 steps, check yes - max neighbors/atom: 2000, page size: 100000 - master list distance cutoff = 1.1 - ghost atom cutoff = 1.1 - binsize = 0.55 -> bins = 73 37 68 -Memory usage per processor = 7.04927 Mbytes -Step Atoms KinEng c_1 Volume - 0 32000 784139.13 1601.1263 29833.783 - 100 32000 784292.08 1571.0968 29834.707 -Loop time of 0.171815 on 4 procs for 100 steps with 32000 atoms - -Performance: 5028.653 tau/day, 582.020 timesteps/s -99.7% CPU use with 4 MPI tasks x no OpenMP threads - -MPI task timing breakdown: -Section | min time | avg time | max time |%varavg| %total ---------------------------------------------------------------- -Pair | 0.093691 | 0.096898 | 0.10005 | 0.8 | 56.40 -Neigh | 0.011976 | 0.012059 | 0.012146 | 0.1 | 7.02 -Comm | 0.016384 | 0.017418 | 0.018465 | 0.8 | 10.14 -Output | 7.7963e-05 | 0.00010747 | 0.00013304 | 0.2 | 0.06 -Modify | 0.031744 | 0.031943 | 0.032167 | 0.1 | 18.59 -Other | | 0.01339 | | | 7.79 - -Nlocal: 8000 ave 8008 max 7992 min -Histogram: 2 0 0 0 0 0 0 0 0 2 -Nghost: 2439 ave 2450 max 2428 min -Histogram: 2 0 0 0 0 0 0 0 0 2 -Neighs: 29500.5 ave 30488 max 28513 min -Histogram: 2 0 0 0 0 0 0 0 0 2 - -Total # of neighbors = 118002 -Ave neighs/atom = 3.68756 -Neighbor list builds = 2 -Dangerous builds = 0 -Total wall time: 0:00:00 diff --git a/bench/log.6Oct16.chute.scaled.icc.4 b/bench/log.6Oct16.chute.scaled.icc.4 deleted file mode 100644 index 0538e9fbe5..0000000000 --- a/bench/log.6Oct16.chute.scaled.icc.4 +++ /dev/null @@ -1,90 +0,0 @@ -LAMMPS (6 Oct 2016) -# LAMMPS benchmark of granular flow -# chute flow of 32000 atoms with frozen base at 26 degrees - -variable x index 1 -variable y index 1 - -units lj -atom_style sphere -boundary p p fs -newton off -comm_modify vel yes - -read_data data.chute - orthogonal box = (0 0 0) to (40 20 37.2886) - 2 by 1 by 2 MPI processor grid - reading atoms ... - 32000 atoms - reading velocities ... 
- 32000 velocities - -replicate $x $y 1 -replicate 2 $y 1 -replicate 2 2 1 - orthogonal box = (0 0 0) to (80 40 37.2922) - 2 by 2 by 1 MPI processor grid - 128000 atoms - -pair_style gran/hooke/history 200000.0 NULL 50.0 NULL 0.5 0 -pair_coeff * * - -neighbor 0.1 bin -neigh_modify every 1 delay 0 - -timestep 0.0001 - -group bottom type 2 -3648 atoms in group bottom -group active subtract all bottom -124352 atoms in group active -neigh_modify exclude group bottom bottom - -fix 1 all gravity 1.0 chute 26.0 -fix 2 bottom freeze -fix 3 active nve/sphere - -compute 1 all erotate/sphere -thermo_style custom step atoms ke c_1 vol -thermo_modify norm no -thermo 100 - -run 100 -Neighbor list info ... - 2 neighbor list requests - update every 1 steps, delay 0 steps, check yes - max neighbors/atom: 2000, page size: 100000 - master list distance cutoff = 1.1 - ghost atom cutoff = 1.1 - binsize = 0.55 -> bins = 146 73 68 -Memory usage per processor = 16.1265 Mbytes -Step Atoms KinEng c_1 Volume - 0 128000 3136556.5 6404.5051 119335.13 - 100 128000 3137168.3 6284.3873 119338.83 -Loop time of 0.832365 on 4 procs for 100 steps with 128000 atoms - -Performance: 1038.006 tau/day, 120.140 timesteps/s -99.8% CPU use with 4 MPI tasks x no OpenMP threads - -MPI task timing breakdown: -Section | min time | avg time | max time |%varavg| %total ---------------------------------------------------------------- -Pair | 0.5178 | 0.52208 | 0.52793 | 0.5 | 62.72 -Neigh | 0.047003 | 0.047113 | 0.047224 | 0.0 | 5.66 -Comm | 0.05233 | 0.052988 | 0.053722 | 0.2 | 6.37 -Output | 0.00024986 | 0.00032717 | 0.00036693 | 0.3 | 0.04 -Modify | 0.15517 | 0.15627 | 0.15808 | 0.3 | 18.77 -Other | | 0.0536 | | | 6.44 - -Nlocal: 32000 ave 32000 max 32000 min -Histogram: 4 0 0 0 0 0 0 0 0 0 -Nghost: 5463 ave 5463 max 5463 min -Histogram: 4 0 0 0 0 0 0 0 0 0 -Neighs: 115133 ave 115133 max 115133 min -Histogram: 4 0 0 0 0 0 0 0 0 0 - -Total # of neighbors = 460532 -Ave neighs/atom = 3.59791 -Neighbor list builds = 
2 -Dangerous builds = 0 -Total wall time: 0:00:00 diff --git a/bench/log.6Oct16.eam.fixed.icc.1 b/bench/log.6Oct16.eam.fixed.icc.1 deleted file mode 100644 index f5ddfcde0d..0000000000 --- a/bench/log.6Oct16.eam.fixed.icc.1 +++ /dev/null @@ -1,83 +0,0 @@ -LAMMPS (6 Oct 2016) -# bulk Cu lattice - -variable x index 1 -variable y index 1 -variable z index 1 - -variable xx equal 20*$x -variable xx equal 20*1 -variable yy equal 20*$y -variable yy equal 20*1 -variable zz equal 20*$z -variable zz equal 20*1 - -units metal -atom_style atomic - -lattice fcc 3.615 -Lattice spacing in x,y,z = 3.615 3.615 3.615 -region box block 0 ${xx} 0 ${yy} 0 ${zz} -region box block 0 20 0 ${yy} 0 ${zz} -region box block 0 20 0 20 0 ${zz} -region box block 0 20 0 20 0 20 -create_box 1 box -Created orthogonal box = (0 0 0) to (72.3 72.3 72.3) - 1 by 1 by 1 MPI processor grid -create_atoms 1 box -Created 32000 atoms - -pair_style eam -pair_coeff 1 1 Cu_u3.eam -Reading potential file Cu_u3.eam with DATE: 2007-06-11 - -velocity all create 1600.0 376847 loop geom - -neighbor 1.0 bin -neigh_modify every 1 delay 5 check yes - -fix 1 all nve - -timestep 0.005 -thermo 50 - -run 100 -Neighbor list info ... 
- 1 neighbor list requests - update every 1 steps, delay 5 steps, check yes - max neighbors/atom: 2000, page size: 100000 - master list distance cutoff = 5.95 - ghost atom cutoff = 5.95 - binsize = 2.975 -> bins = 25 25 25 -Memory usage per processor = 11.2238 Mbytes -Step Temp E_pair E_mol TotEng Press - 0 1600 -113280 0 -106662.09 18703.573 - 50 781.69049 -109873.35 0 -106640.13 52273.088 - 100 801.832 -109957.3 0 -106640.77 51322.821 -Loop time of 5.96529 on 1 procs for 100 steps with 32000 atoms - -Performance: 7.242 ns/day, 3.314 hours/ns, 16.764 timesteps/s -99.9% CPU use with 1 MPI tasks x no OpenMP threads - -MPI task timing breakdown: -Section | min time | avg time | max time |%varavg| %total ---------------------------------------------------------------- -Pair | 5.2743 | 5.2743 | 5.2743 | 0.0 | 88.42 -Neigh | 0.59212 | 0.59212 | 0.59212 | 0.0 | 9.93 -Comm | 0.030399 | 0.030399 | 0.030399 | 0.0 | 0.51 -Output | 0.00026202 | 0.00026202 | 0.00026202 | 0.0 | 0.00 -Modify | 0.050487 | 0.050487 | 0.050487 | 0.0 | 0.85 -Other | | 0.01776 | | | 0.30 - -Nlocal: 32000 ave 32000 max 32000 min -Histogram: 1 0 0 0 0 0 0 0 0 0 -Nghost: 19909 ave 19909 max 19909 min -Histogram: 1 0 0 0 0 0 0 0 0 0 -Neighs: 1.20778e+06 ave 1.20778e+06 max 1.20778e+06 min -Histogram: 1 0 0 0 0 0 0 0 0 0 - -Total # of neighbors = 1207784 -Ave neighs/atom = 37.7433 -Neighbor list builds = 13 -Dangerous builds = 0 -Total wall time: 0:00:06 diff --git a/bench/log.6Oct16.eam.fixed.icc.4 b/bench/log.6Oct16.eam.fixed.icc.4 deleted file mode 100644 index 3414210acf..0000000000 --- a/bench/log.6Oct16.eam.fixed.icc.4 +++ /dev/null @@ -1,83 +0,0 @@ -LAMMPS (6 Oct 2016) -# bulk Cu lattice - -variable x index 1 -variable y index 1 -variable z index 1 - -variable xx equal 20*$x -variable xx equal 20*1 -variable yy equal 20*$y -variable yy equal 20*1 -variable zz equal 20*$z -variable zz equal 20*1 - -units metal -atom_style atomic - -lattice fcc 3.615 -Lattice spacing in x,y,z = 3.615 3.615 3.615 
-region box block 0 ${xx} 0 ${yy} 0 ${zz} -region box block 0 20 0 ${yy} 0 ${zz} -region box block 0 20 0 20 0 ${zz} -region box block 0 20 0 20 0 20 -create_box 1 box -Created orthogonal box = (0 0 0) to (72.3 72.3 72.3) - 1 by 2 by 2 MPI processor grid -create_atoms 1 box -Created 32000 atoms - -pair_style eam -pair_coeff 1 1 Cu_u3.eam -Reading potential file Cu_u3.eam with DATE: 2007-06-11 - -velocity all create 1600.0 376847 loop geom - -neighbor 1.0 bin -neigh_modify every 1 delay 5 check yes - -fix 1 all nve - -timestep 0.005 -thermo 50 - -run 100 -Neighbor list info ... - 1 neighbor list requests - update every 1 steps, delay 5 steps, check yes - max neighbors/atom: 2000, page size: 100000 - master list distance cutoff = 5.95 - ghost atom cutoff = 5.95 - binsize = 2.975 -> bins = 25 25 25 -Memory usage per processor = 5.59629 Mbytes -Step Temp E_pair E_mol TotEng Press - 0 1600 -113280 0 -106662.09 18703.573 - 50 781.69049 -109873.35 0 -106640.13 52273.088 - 100 801.832 -109957.3 0 -106640.77 51322.821 -Loop time of 1.64562 on 4 procs for 100 steps with 32000 atoms - -Performance: 26.252 ns/day, 0.914 hours/ns, 60.767 timesteps/s -99.8% CPU use with 4 MPI tasks x no OpenMP threads - -MPI task timing breakdown: -Section | min time | avg time | max time |%varavg| %total ---------------------------------------------------------------- -Pair | 1.408 | 1.4175 | 1.4341 | 0.9 | 86.14 -Neigh | 0.15512 | 0.15722 | 0.16112 | 0.6 | 9.55 -Comm | 0.029105 | 0.049986 | 0.061822 | 5.8 | 3.04 -Output | 0.00010991 | 0.00011539 | 0.00012302 | 0.0 | 0.01 -Modify | 0.013383 | 0.013573 | 0.013883 | 0.2 | 0.82 -Other | | 0.007264 | | | 0.44 - -Nlocal: 8000 ave 8008 max 7993 min -Histogram: 2 0 0 0 0 0 0 0 1 1 -Nghost: 9130.25 ave 9138 max 9122 min -Histogram: 2 0 0 0 0 0 0 0 0 2 -Neighs: 301946 ave 302392 max 301360 min -Histogram: 1 0 0 0 1 0 0 0 1 1 - -Total # of neighbors = 1207784 -Ave neighs/atom = 37.7433 -Neighbor list builds = 13 -Dangerous builds = 0 -Total wall time: 
0:00:01 diff --git a/bench/log.6Oct16.eam.scaled.icc.4 b/bench/log.6Oct16.eam.scaled.icc.4 deleted file mode 100644 index 8a2ec90b78..0000000000 --- a/bench/log.6Oct16.eam.scaled.icc.4 +++ /dev/null @@ -1,83 +0,0 @@ -LAMMPS (6 Oct 2016) -# bulk Cu lattice - -variable x index 1 -variable y index 1 -variable z index 1 - -variable xx equal 20*$x -variable xx equal 20*2 -variable yy equal 20*$y -variable yy equal 20*2 -variable zz equal 20*$z -variable zz equal 20*1 - -units metal -atom_style atomic - -lattice fcc 3.615 -Lattice spacing in x,y,z = 3.615 3.615 3.615 -region box block 0 ${xx} 0 ${yy} 0 ${zz} -region box block 0 40 0 ${yy} 0 ${zz} -region box block 0 40 0 40 0 ${zz} -region box block 0 40 0 40 0 20 -create_box 1 box -Created orthogonal box = (0 0 0) to (144.6 144.6 72.3) - 2 by 2 by 1 MPI processor grid -create_atoms 1 box -Created 128000 atoms - -pair_style eam -pair_coeff 1 1 Cu_u3.eam -Reading potential file Cu_u3.eam with DATE: 2007-06-11 - -velocity all create 1600.0 376847 loop geom - -neighbor 1.0 bin -neigh_modify every 1 delay 5 check yes - -fix 1 all nve - -timestep 0.005 -thermo 50 - -run 100 -Neighbor list info ... 
- 1 neighbor list requests - update every 1 steps, delay 5 steps, check yes - max neighbors/atom: 2000, page size: 100000 - master list distance cutoff = 5.95 - ghost atom cutoff = 5.95 - binsize = 2.975 -> bins = 49 49 25 -Memory usage per processor = 11.1402 Mbytes -Step Temp E_pair E_mol TotEng Press - 0 1600 -453120 0 -426647.73 18704.012 - 50 779.50001 -439457.02 0 -426560.06 52355.276 - 100 797.97828 -439764.76 0 -426562.07 51474.74 -Loop time of 6.60121 on 4 procs for 100 steps with 128000 atoms - -Performance: 6.544 ns/day, 3.667 hours/ns, 15.149 timesteps/s -99.9% CPU use with 4 MPI tasks x no OpenMP threads - -MPI task timing breakdown: -Section | min time | avg time | max time |%varavg| %total ---------------------------------------------------------------- -Pair | 5.6676 | 5.7011 | 5.7469 | 1.3 | 86.36 -Neigh | 0.66423 | 0.67119 | 0.68082 | 0.7 | 10.17 -Comm | 0.079367 | 0.13668 | 0.1791 | 10.5 | 2.07 -Output | 0.00026989 | 0.00028622 | 0.00031209 | 0.1 | 0.00 -Modify | 0.060046 | 0.062203 | 0.065009 | 0.9 | 0.94 -Other | | 0.02974 | | | 0.45 - -Nlocal: 32000 ave 32092 max 31914 min -Histogram: 1 0 0 1 0 1 0 0 0 1 -Nghost: 19910 ave 19997 max 19818 min -Histogram: 1 0 0 0 1 0 1 0 0 1 -Neighs: 1.20728e+06 ave 1.21142e+06 max 1.2036e+06 min -Histogram: 1 0 0 1 1 0 0 0 0 1 - -Total # of neighbors = 4829126 -Ave neighs/atom = 37.7275 -Neighbor list builds = 14 -Dangerous builds = 0 -Total wall time: 0:00:06 diff --git a/bench/log.6Oct16.lj.fixed.icc.1 b/bench/log.6Oct16.lj.fixed.icc.1 deleted file mode 100644 index b08ca3b6b8..0000000000 --- a/bench/log.6Oct16.lj.fixed.icc.1 +++ /dev/null @@ -1,79 +0,0 @@ -LAMMPS (6 Oct 2016) -# 3d Lennard-Jones melt - -variable x index 1 -variable y index 1 -variable z index 1 - -variable xx equal 20*$x -variable xx equal 20*1 -variable yy equal 20*$y -variable yy equal 20*1 -variable zz equal 20*$z -variable zz equal 20*1 - -units lj -atom_style atomic - -lattice fcc 0.8442 -Lattice spacing in x,y,z = 1.6796 1.6796 1.6796 
-region box block 0 ${xx} 0 ${yy} 0 ${zz} -region box block 0 20 0 ${yy} 0 ${zz} -region box block 0 20 0 20 0 ${zz} -region box block 0 20 0 20 0 20 -create_box 1 box -Created orthogonal box = (0 0 0) to (33.5919 33.5919 33.5919) - 1 by 1 by 1 MPI processor grid -create_atoms 1 box -Created 32000 atoms -mass 1 1.0 - -velocity all create 1.44 87287 loop geom - -pair_style lj/cut 2.5 -pair_coeff 1 1 1.0 1.0 2.5 - -neighbor 0.3 bin -neigh_modify delay 0 every 20 check no - -fix 1 all nve - -run 100 -Neighbor list info ... - 1 neighbor list requests - update every 20 steps, delay 0 steps, check no - max neighbors/atom: 2000, page size: 100000 - master list distance cutoff = 2.8 - ghost atom cutoff = 2.8 - binsize = 1.4 -> bins = 24 24 24 -Memory usage per processor = 8.21387 Mbytes -Step Temp E_pair E_mol TotEng Press - 0 1.44 -6.7733681 0 -4.6134356 -5.0197073 - 100 0.7574531 -5.7585055 0 -4.6223613 0.20726105 -Loop time of 2.26185 on 1 procs for 100 steps with 32000 atoms - -Performance: 19099.377 tau/day, 44.212 timesteps/s -99.9% CPU use with 1 MPI tasks x no OpenMP threads - -MPI task timing breakdown: -Section | min time | avg time | max time |%varavg| %total ---------------------------------------------------------------- -Pair | 1.9328 | 1.9328 | 1.9328 | 0.0 | 85.45 -Neigh | 0.2558 | 0.2558 | 0.2558 | 0.0 | 11.31 -Comm | 0.024061 | 0.024061 | 0.024061 | 0.0 | 1.06 -Output | 0.00012612 | 0.00012612 | 0.00012612 | 0.0 | 0.01 -Modify | 0.040887 | 0.040887 | 0.040887 | 0.0 | 1.81 -Other | | 0.008214 | | | 0.36 - -Nlocal: 32000 ave 32000 max 32000 min -Histogram: 1 0 0 0 0 0 0 0 0 0 -Nghost: 19657 ave 19657 max 19657 min -Histogram: 1 0 0 0 0 0 0 0 0 0 -Neighs: 1.20283e+06 ave 1.20283e+06 max 1.20283e+06 min -Histogram: 1 0 0 0 0 0 0 0 0 0 - -Total # of neighbors = 1202833 -Ave neighs/atom = 37.5885 -Neighbor list builds = 5 -Dangerous builds not checked -Total wall time: 0:00:02 diff --git a/bench/log.6Oct16.lj.fixed.icc.4 b/bench/log.6Oct16.lj.fixed.icc.4 
deleted file mode 100644 index 9eee300a94..0000000000 --- a/bench/log.6Oct16.lj.fixed.icc.4 +++ /dev/null @@ -1,79 +0,0 @@ -LAMMPS (6 Oct 2016) -# 3d Lennard-Jones melt - -variable x index 1 -variable y index 1 -variable z index 1 - -variable xx equal 20*$x -variable xx equal 20*1 -variable yy equal 20*$y -variable yy equal 20*1 -variable zz equal 20*$z -variable zz equal 20*1 - -units lj -atom_style atomic - -lattice fcc 0.8442 -Lattice spacing in x,y,z = 1.6796 1.6796 1.6796 -region box block 0 ${xx} 0 ${yy} 0 ${zz} -region box block 0 20 0 ${yy} 0 ${zz} -region box block 0 20 0 20 0 ${zz} -region box block 0 20 0 20 0 20 -create_box 1 box -Created orthogonal box = (0 0 0) to (33.5919 33.5919 33.5919) - 1 by 2 by 2 MPI processor grid -create_atoms 1 box -Created 32000 atoms -mass 1 1.0 - -velocity all create 1.44 87287 loop geom - -pair_style lj/cut 2.5 -pair_coeff 1 1 1.0 1.0 2.5 - -neighbor 0.3 bin -neigh_modify delay 0 every 20 check no - -fix 1 all nve - -run 100 -Neighbor list info ... 
- 1 neighbor list requests - update every 20 steps, delay 0 steps, check no - max neighbors/atom: 2000, page size: 100000 - master list distance cutoff = 2.8 - ghost atom cutoff = 2.8 - binsize = 1.4 -> bins = 24 24 24 -Memory usage per processor = 4.09506 Mbytes -Step Temp E_pair E_mol TotEng Press - 0 1.44 -6.7733681 0 -4.6134356 -5.0197073 - 100 0.7574531 -5.7585055 0 -4.6223613 0.20726105 -Loop time of 0.635957 on 4 procs for 100 steps with 32000 atoms - -Performance: 67929.172 tau/day, 157.243 timesteps/s -99.9% CPU use with 4 MPI tasks x no OpenMP threads - -MPI task timing breakdown: -Section | min time | avg time | max time |%varavg| %total ---------------------------------------------------------------- -Pair | 0.51335 | 0.51822 | 0.52569 | 0.7 | 81.49 -Neigh | 0.063695 | 0.064309 | 0.065397 | 0.3 | 10.11 -Comm | 0.027525 | 0.03629 | 0.041959 | 3.1 | 5.71 -Output | 6.3896e-05 | 6.6698e-05 | 7.081e-05 | 0.0 | 0.01 -Modify | 0.012472 | 0.01254 | 0.012618 | 0.1 | 1.97 -Other | | 0.004529 | | | 0.71 - -Nlocal: 8000 ave 8037 max 7964 min -Histogram: 2 0 0 0 0 0 0 0 1 1 -Nghost: 9007.5 ave 9050 max 8968 min -Histogram: 1 1 0 0 0 0 0 1 0 1 -Neighs: 300708 ave 305113 max 297203 min -Histogram: 1 0 0 1 1 0 0 0 0 1 - -Total # of neighbors = 1202833 -Ave neighs/atom = 37.5885 -Neighbor list builds = 5 -Dangerous builds not checked -Total wall time: 0:00:00 diff --git a/bench/log.6Oct16.lj.scaled.icc.4 b/bench/log.6Oct16.lj.scaled.icc.4 deleted file mode 100644 index 4599879e59..0000000000 --- a/bench/log.6Oct16.lj.scaled.icc.4 +++ /dev/null @@ -1,79 +0,0 @@ -LAMMPS (6 Oct 2016) -# 3d Lennard-Jones melt - -variable x index 1 -variable y index 1 -variable z index 1 - -variable xx equal 20*$x -variable xx equal 20*2 -variable yy equal 20*$y -variable yy equal 20*2 -variable zz equal 20*$z -variable zz equal 20*1 - -units lj -atom_style atomic - -lattice fcc 0.8442 -Lattice spacing in x,y,z = 1.6796 1.6796 1.6796 -region box block 0 ${xx} 0 ${yy} 0 ${zz} -region box 
block 0 40 0 ${yy} 0 ${zz} -region box block 0 40 0 40 0 ${zz} -region box block 0 40 0 40 0 20 -create_box 1 box -Created orthogonal box = (0 0 0) to (67.1838 67.1838 33.5919) - 2 by 2 by 1 MPI processor grid -create_atoms 1 box -Created 128000 atoms -mass 1 1.0 - -velocity all create 1.44 87287 loop geom - -pair_style lj/cut 2.5 -pair_coeff 1 1 1.0 1.0 2.5 - -neighbor 0.3 bin -neigh_modify delay 0 every 20 check no - -fix 1 all nve - -run 100 -Neighbor list info ... - 1 neighbor list requests - update every 20 steps, delay 0 steps, check no - max neighbors/atom: 2000, page size: 100000 - master list distance cutoff = 2.8 - ghost atom cutoff = 2.8 - binsize = 1.4 -> bins = 48 48 24 -Memory usage per processor = 8.13678 Mbytes -Step Temp E_pair E_mol TotEng Press - 0 1.44 -6.7733681 0 -4.6133849 -5.0196788 - 100 0.75841891 -5.759957 0 -4.6223375 0.20008866 -Loop time of 2.55762 on 4 procs for 100 steps with 128000 atoms - -Performance: 16890.677 tau/day, 39.099 timesteps/s -99.8% CPU use with 4 MPI tasks x no OpenMP threads - -MPI task timing breakdown: -Section | min time | avg time | max time |%varavg| %total ---------------------------------------------------------------- -Pair | 2.0583 | 2.0988 | 2.1594 | 2.6 | 82.06 -Neigh | 0.24411 | 0.24838 | 0.25585 | 0.9 | 9.71 -Comm | 0.066397 | 0.13872 | 0.1863 | 11.9 | 5.42 -Output | 0.00012994 | 0.00021023 | 0.00025702 | 0.3 | 0.01 -Modify | 0.055533 | 0.058343 | 0.061791 | 1.2 | 2.28 -Other | | 0.0132 | | | 0.52 - -Nlocal: 32000 ave 32060 max 31939 min -Histogram: 1 0 1 0 0 0 0 1 0 1 -Nghost: 19630.8 ave 19681 max 19562 min -Histogram: 1 0 0 0 1 0 0 0 1 1 -Neighs: 1.20195e+06 ave 1.20354e+06 max 1.19931e+06 min -Histogram: 1 0 0 0 0 0 0 2 0 1 - -Total # of neighbors = 4807797 -Ave neighs/atom = 37.5609 -Neighbor list builds = 5 -Dangerous builds not checked -Total wall time: 0:00:02 diff --git a/bench/log.6Oct16.rhodo.fixed.icc.1 b/bench/log.6Oct16.rhodo.fixed.icc.1 deleted file mode 100644 index 
65596d3285..0000000000 --- a/bench/log.6Oct16.rhodo.fixed.icc.1 +++ /dev/null @@ -1,122 +0,0 @@ -LAMMPS (6 Oct 2016) -# Rhodopsin model - -units real -neigh_modify delay 5 every 1 - -atom_style full -bond_style harmonic -angle_style charmm -dihedral_style charmm -improper_style harmonic -pair_style lj/charmm/coul/long 8.0 10.0 -pair_modify mix arithmetic -kspace_style pppm 1e-4 - -read_data data.rhodo - orthogonal box = (-27.5 -38.5 -36.3646) to (27.5 38.5 36.3615) - 1 by 1 by 1 MPI processor grid - reading atoms ... - 32000 atoms - reading velocities ... - 32000 velocities - scanning bonds ... - 4 = max bonds/atom - scanning angles ... - 8 = max angles/atom - scanning dihedrals ... - 18 = max dihedrals/atom - scanning impropers ... - 2 = max impropers/atom - reading bonds ... - 27723 bonds - reading angles ... - 40467 angles - reading dihedrals ... - 56829 dihedrals - reading impropers ... - 1034 impropers - 4 = max # of 1-2 neighbors - 12 = max # of 1-3 neighbors - 24 = max # of 1-4 neighbors - 26 = max # of special neighbors - -fix 1 all shake 0.0001 5 0 m 1.0 a 232 - 1617 = # of size 2 clusters - 3633 = # of size 3 clusters - 747 = # of size 4 clusters - 4233 = # of frozen angles -fix 2 all npt temp 300.0 300.0 100.0 z 0.0 0.0 1000.0 mtk no pchain 0 tchain 1 - -special_bonds charmm - -thermo 50 -thermo_style multi -timestep 2.0 - -run 100 -PPPM initialization ... -WARNING: Using 12-bit tables for long-range coulomb (../kspace.cpp:316) - G vector (1/distance) = 0.248835 - grid = 25 32 32 - stencil order = 5 - estimated absolute RMS force accuracy = 0.0355478 - estimated relative force accuracy = 0.000107051 - using double precision FFTs - 3d grid and FFT values/proc = 41070 25600 -Neighbor list info ... 
- 1 neighbor list requests - update every 1 steps, delay 5 steps, check yes - max neighbors/atom: 2000, page size: 100000 - master list distance cutoff = 12 - ghost atom cutoff = 12 - binsize = 6 -> bins = 10 13 13 -Memory usage per processor = 93.2721 Mbytes ----------------- Step 0 ----- CPU = 0.0000 (sec) ---------------- -TotEng = -25356.2064 KinEng = 21444.8313 Temp = 299.0397 -PotEng = -46801.0377 E_bond = 2537.9940 E_angle = 10921.3742 -E_dihed = 5211.7865 E_impro = 213.5116 E_vdwl = -2307.8634 -E_coul = 207025.8927 E_long = -270403.7333 Press = -149.3301 -Volume = 307995.0335 ----------------- Step 50 ----- CPU = 17.2007 (sec) ---------------- -TotEng = -25330.0321 KinEng = 21501.0036 Temp = 299.8230 -PotEng = -46831.0357 E_bond = 2471.7033 E_angle = 10836.5108 -E_dihed = 5239.6316 E_impro = 227.1219 E_vdwl = -1993.2763 -E_coul = 206797.6655 E_long = -270410.3927 Press = 237.6866 -Volume = 308031.5640 ----------------- Step 100 ----- CPU = 35.0315 (sec) ---------------- -TotEng = -25290.7387 KinEng = 21591.9096 Temp = 301.0906 -PotEng = -46882.6484 E_bond = 2567.9789 E_angle = 10781.9556 -E_dihed = 5198.7493 E_impro = 216.7863 E_vdwl = -1902.6458 -E_coul = 206659.5006 E_long = -270404.9733 Press = 6.7898 -Volume = 308133.9933 -Loop time of 35.0316 on 1 procs for 100 steps with 32000 atoms - -Performance: 0.493 ns/day, 48.655 hours/ns, 2.855 timesteps/s -99.9% CPU use with 1 MPI tasks x no OpenMP threads - -MPI task timing breakdown: -Section | min time | avg time | max time |%varavg| %total ---------------------------------------------------------------- -Pair | 25.021 | 25.021 | 25.021 | 0.0 | 71.42 -Bond | 1.2834 | 1.2834 | 1.2834 | 0.0 | 3.66 -Kspace | 3.2116 | 3.2116 | 3.2116 | 0.0 | 9.17 -Neigh | 4.2767 | 4.2767 | 4.2767 | 0.0 | 12.21 -Comm | 0.069283 | 0.069283 | 0.069283 | 0.0 | 0.20 -Output | 0.00028205 | 0.00028205 | 0.00028205 | 0.0 | 0.00 -Modify | 1.14 | 1.14 | 1.14 | 0.0 | 3.25 -Other | | 0.02938 | | | 0.08 - -Nlocal: 32000 ave 32000 max 32000 
min -Histogram: 1 0 0 0 0 0 0 0 0 0 -Nghost: 47958 ave 47958 max 47958 min -Histogram: 1 0 0 0 0 0 0 0 0 0 -Neighs: 1.20281e+07 ave 1.20281e+07 max 1.20281e+07 min -Histogram: 1 0 0 0 0 0 0 0 0 0 - -Total # of neighbors = 12028098 -Ave neighs/atom = 375.878 -Ave special neighs/atom = 7.43187 -Neighbor list builds = 11 -Dangerous builds = 0 -Total wall time: 0:00:36 diff --git a/bench/log.6Oct16.rhodo.fixed.icc.4 b/bench/log.6Oct16.rhodo.fixed.icc.4 deleted file mode 100644 index 50526063f1..0000000000 --- a/bench/log.6Oct16.rhodo.fixed.icc.4 +++ /dev/null @@ -1,122 +0,0 @@ -LAMMPS (6 Oct 2016) -# Rhodopsin model - -units real -neigh_modify delay 5 every 1 - -atom_style full -bond_style harmonic -angle_style charmm -dihedral_style charmm -improper_style harmonic -pair_style lj/charmm/coul/long 8.0 10.0 -pair_modify mix arithmetic -kspace_style pppm 1e-4 - -read_data data.rhodo - orthogonal box = (-27.5 -38.5 -36.3646) to (27.5 38.5 36.3615) - 1 by 2 by 2 MPI processor grid - reading atoms ... - 32000 atoms - reading velocities ... - 32000 velocities - scanning bonds ... - 4 = max bonds/atom - scanning angles ... - 8 = max angles/atom - scanning dihedrals ... - 18 = max dihedrals/atom - scanning impropers ... - 2 = max impropers/atom - reading bonds ... - 27723 bonds - reading angles ... - 40467 angles - reading dihedrals ... - 56829 dihedrals - reading impropers ... - 1034 impropers - 4 = max # of 1-2 neighbors - 12 = max # of 1-3 neighbors - 24 = max # of 1-4 neighbors - 26 = max # of special neighbors - -fix 1 all shake 0.0001 5 0 m 1.0 a 232 - 1617 = # of size 2 clusters - 3633 = # of size 3 clusters - 747 = # of size 4 clusters - 4233 = # of frozen angles -fix 2 all npt temp 300.0 300.0 100.0 z 0.0 0.0 1000.0 mtk no pchain 0 tchain 1 - -special_bonds charmm - -thermo 50 -thermo_style multi -timestep 2.0 - -run 100 -PPPM initialization ... 
-WARNING: Using 12-bit tables for long-range coulomb (../kspace.cpp:316) - G vector (1/distance) = 0.248835 - grid = 25 32 32 - stencil order = 5 - estimated absolute RMS force accuracy = 0.0355478 - estimated relative force accuracy = 0.000107051 - using double precision FFTs - 3d grid and FFT values/proc = 13230 6400 -Neighbor list info ... - 1 neighbor list requests - update every 1 steps, delay 5 steps, check yes - max neighbors/atom: 2000, page size: 100000 - master list distance cutoff = 12 - ghost atom cutoff = 12 - binsize = 6 -> bins = 10 13 13 -Memory usage per processor = 37.3604 Mbytes ----------------- Step 0 ----- CPU = 0.0000 (sec) ---------------- -TotEng = -25356.2064 KinEng = 21444.8313 Temp = 299.0397 -PotEng = -46801.0377 E_bond = 2537.9940 E_angle = 10921.3742 -E_dihed = 5211.7865 E_impro = 213.5116 E_vdwl = -2307.8634 -E_coul = 207025.8927 E_long = -270403.7333 Press = -149.3301 -Volume = 307995.0335 ----------------- Step 50 ----- CPU = 4.6056 (sec) ---------------- -TotEng = -25330.0321 KinEng = 21501.0036 Temp = 299.8230 -PotEng = -46831.0357 E_bond = 2471.7033 E_angle = 10836.5108 -E_dihed = 5239.6316 E_impro = 227.1219 E_vdwl = -1993.2763 -E_coul = 206797.6655 E_long = -270410.3927 Press = 237.6866 -Volume = 308031.5640 ----------------- Step 100 ----- CPU = 9.3910 (sec) ---------------- -TotEng = -25290.7386 KinEng = 21591.9096 Temp = 301.0906 -PotEng = -46882.6482 E_bond = 2567.9789 E_angle = 10781.9556 -E_dihed = 5198.7493 E_impro = 216.7863 E_vdwl = -1902.6458 -E_coul = 206659.5007 E_long = -270404.9733 Press = 6.7898 -Volume = 308133.9933 -Loop time of 9.39107 on 4 procs for 100 steps with 32000 atoms - -Performance: 1.840 ns/day, 13.043 hours/ns, 10.648 timesteps/s -99.8% CPU use with 4 MPI tasks x no OpenMP threads - -MPI task timing breakdown: -Section | min time | avg time | max time |%varavg| %total ---------------------------------------------------------------- -Pair | 6.2189 | 6.3266 | 6.6072 | 6.5 | 67.37 -Bond | 0.30793 | 
0.32122 | 0.3414 | 2.4 | 3.42 -Kspace | 0.87994 | 1.1644 | 1.2855 | 15.3 | 12.40 -Neigh | 1.1358 | 1.136 | 1.1362 | 0.0 | 12.10 -Comm | 0.08292 | 0.084935 | 0.087077 | 0.5 | 0.90 -Output | 0.00015712 | 0.00016558 | 0.00018501 | 0.1 | 0.00 -Modify | 0.33717 | 0.34246 | 0.34794 | 0.7 | 3.65 -Other | | 0.01526 | | | 0.16 - -Nlocal: 8000 ave 8143 max 7933 min -Histogram: 1 2 0 0 0 0 0 0 0 1 -Nghost: 22733.5 ave 22769 max 22693 min -Histogram: 1 0 0 0 0 2 0 0 0 1 -Neighs: 3.00702e+06 ave 3.0975e+06 max 2.96492e+06 min -Histogram: 1 2 0 0 0 0 0 0 0 1 - -Total # of neighbors = 12028098 -Ave neighs/atom = 375.878 -Ave special neighs/atom = 7.43187 -Neighbor list builds = 11 -Dangerous builds = 0 -Total wall time: 0:00:09 diff --git a/bench/log.6Oct16.rhodo.scaled.icc.4 b/bench/log.6Oct16.rhodo.scaled.icc.4 deleted file mode 100644 index db445ca72c..0000000000 --- a/bench/log.6Oct16.rhodo.scaled.icc.4 +++ /dev/null @@ -1,143 +0,0 @@ -LAMMPS (6 Oct 2016) -# Rhodopsin model - -variable x index 1 -variable y index 1 -variable z index 1 - -units real -neigh_modify delay 5 every 1 - -atom_style full -atom_modify map hash -bond_style harmonic -angle_style charmm -dihedral_style charmm -improper_style harmonic -pair_style lj/charmm/coul/long 8.0 10.0 -pair_modify mix arithmetic -kspace_style pppm 1e-4 - -read_data data.rhodo - orthogonal box = (-27.5 -38.5 -36.3646) to (27.5 38.5 36.3615) - 1 by 2 by 2 MPI processor grid - reading atoms ... - 32000 atoms - reading velocities ... - 32000 velocities - scanning bonds ... - 4 = max bonds/atom - scanning angles ... - 8 = max angles/atom - scanning dihedrals ... - 18 = max dihedrals/atom - scanning impropers ... - 2 = max impropers/atom - reading bonds ... - 27723 bonds - reading angles ... - 40467 angles - reading dihedrals ... - 56829 dihedrals - reading impropers ... 
- 1034 impropers - 4 = max # of 1-2 neighbors - 12 = max # of 1-3 neighbors - 24 = max # of 1-4 neighbors - 26 = max # of special neighbors - -replicate $x $y $z -replicate 2 $y $z -replicate 2 2 $z -replicate 2 2 1 - orthogonal box = (-27.5 -38.5 -36.3646) to (82.5 115.5 36.3615) - 2 by 2 by 1 MPI processor grid - 128000 atoms - 110892 bonds - 161868 angles - 227316 dihedrals - 4136 impropers - 4 = max # of 1-2 neighbors - 12 = max # of 1-3 neighbors - 24 = max # of 1-4 neighbors - 26 = max # of special neighbors - -fix 1 all shake 0.0001 5 0 m 1.0 a 232 - 6468 = # of size 2 clusters - 14532 = # of size 3 clusters - 2988 = # of size 4 clusters - 16932 = # of frozen angles -fix 2 all npt temp 300.0 300.0 100.0 z 0.0 0.0 1000.0 mtk no pchain 0 tchain 1 - -special_bonds charmm - -thermo 50 -thermo_style multi -timestep 2.0 - -run 100 -PPPM initialization ... -WARNING: Using 12-bit tables for long-range coulomb (../kspace.cpp:316) - G vector (1/distance) = 0.248593 - grid = 48 60 36 - stencil order = 5 - estimated absolute RMS force accuracy = 0.0359793 - estimated relative force accuracy = 0.00010835 - using double precision FFTs - 3d grid and FFT values/proc = 41615 25920 -Neighbor list info ... 
- 1 neighbor list requests - update every 1 steps, delay 5 steps, check yes - max neighbors/atom: 2000, page size: 100000 - master list distance cutoff = 12 - ghost atom cutoff = 12 - binsize = 6 -> bins = 19 26 13 -Memory usage per processor = 96.9597 Mbytes ----------------- Step 0 ----- CPU = 0.0000 (sec) ---------------- -TotEng = -101425.4887 KinEng = 85779.3251 Temp = 299.0304 -PotEng = -187204.8138 E_bond = 10151.9760 E_angle = 43685.4968 -E_dihed = 20847.1460 E_impro = 854.0463 E_vdwl = -9231.4537 -E_coul = 827053.5824 E_long = -1080565.6077 Press = -149.0358 -Volume = 1231980.1340 ----------------- Step 50 ----- CPU = 18.1689 (sec) ---------------- -TotEng = -101320.0211 KinEng = 86003.4933 Temp = 299.8118 -PotEng = -187323.5144 E_bond = 9887.1189 E_angle = 43346.8448 -E_dihed = 20958.7108 E_impro = 908.4721 E_vdwl = -7973.4486 -E_coul = 826141.5493 E_long = -1080592.7617 Press = 238.0404 -Volume = 1232126.1814 ----------------- Step 100 ----- CPU = 37.2027 (sec) ---------------- -TotEng = -101157.9546 KinEng = 86355.7413 Temp = 301.0398 -PotEng = -187513.6959 E_bond = 10272.0456 E_angle = 43128.7018 -E_dihed = 20794.0107 E_impro = 867.0928 E_vdwl = -7587.2409 -E_coul = 825584.2416 E_long = -1080572.5474 Press = 15.1729 -Volume = 1232535.8440 -Loop time of 37.2028 on 4 procs for 100 steps with 128000 atoms - -Performance: 0.464 ns/day, 51.671 hours/ns, 2.688 timesteps/s -99.9% CPU use with 4 MPI tasks x no OpenMP threads - -MPI task timing breakdown: -Section | min time | avg time | max time |%varavg| %total ---------------------------------------------------------------- -Pair | 25.431 | 25.738 | 25.984 | 4.0 | 69.18 -Bond | 1.2966 | 1.3131 | 1.3226 | 0.9 | 3.53 -Kspace | 3.7563 | 4.0123 | 4.3127 | 10.0 | 10.79 -Neigh | 4.3778 | 4.378 | 4.3782 | 0.0 | 11.77 -Comm | 0.1903 | 0.19549 | 0.20485 | 1.3 | 0.53 -Output | 0.00031805 | 0.00037521 | 0.00039601 | 0.2 | 0.00 -Modify | 1.4861 | 1.5051 | 1.5122 | 0.9 | 4.05 -Other | | 0.05992 | | | 0.16 - -Nlocal: 
32000 ave 32000 max 32000 min -Histogram: 4 0 0 0 0 0 0 0 0 0 -Nghost: 47957 ave 47957 max 47957 min -Histogram: 4 0 0 0 0 0 0 0 0 0 -Neighs: 1.20281e+07 ave 1.20572e+07 max 1.19991e+07 min -Histogram: 2 0 0 0 0 0 0 0 0 2 - -Total # of neighbors = 48112540 -Ave neighs/atom = 375.879 -Ave special neighs/atom = 7.43187 -Neighbor list builds = 11 -Dangerous builds = 0 -Total wall time: 0:00:38 diff --git a/cmake/CMakeLists.txt b/cmake/CMakeLists.txt index 04ec037184..c721487ea6 100644 --- a/cmake/CMakeLists.txt +++ b/cmake/CMakeLists.txt @@ -2,11 +2,12 @@ ######################################## # CMake build system # This file is part of LAMMPS -cmake_minimum_required(VERSION 3.16) -if(CMAKE_VERSION VERSION_LESS 3.20) - message(WARNING "LAMMPS is planning to require at least CMake version 3.20 by Summer 2025. Please upgrade!") -endif() +cmake_minimum_required(VERSION 3.20) ######################################## +# initialize version variables with project command +if(POLICY CMP0048) + cmake_policy(SET CMP0048 NEW) +endif() # set policy to silence warnings about ignoring _ROOT but use it if(POLICY CMP0074) cmake_policy(SET CMP0074 NEW) @@ -27,7 +28,10 @@ endif() ######################################## -project(lammps CXX) +project(lammps + DESCRIPTION "The LAMMPS Molecular Dynamics Simulator" + HOMEPAGE_URL "https://www.lammps.org" + LANGUAGES CXX C) set(SOVERSION 0) get_property(BUILD_IS_MULTI_CONFIG GLOBAL PROPERTY GENERATOR_IS_MULTI_CONFIG) @@ -130,33 +134,32 @@ if((CMAKE_CXX_COMPILER_ID STREQUAL "NVHPC") OR (CMAKE_CXX_COMPILER_ID STREQUAL " set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Minform=severe") endif() -# silence nvcc warnings -if((PKG_KOKKOS) AND (Kokkos_ENABLE_CUDA) AND NOT (CMAKE_CXX_COMPILER_ID STREQUAL "Clang")) - set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Xcudafe --diag_suppress=unrecognized_pragma,--diag_suppress=128") +# silence nvcc warnings when using nvcc_wrapper +get_filename_component(LAMMPS_CXX_COMPILER_NAME "${CMAKE_CXX_COMPILER}" NAME CACHE) 
+if((PKG_KOKKOS) AND (Kokkos_ENABLE_CUDA) AND (LAMMPS_CXX_COMPILER_NAME STREQUAL "nvcc_wrapper")) + set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Xcudafe --diag_suppress=unrecognized_pragma,--diag_suppress=128,--diag_suppress=186") endif() -# we *require* C++11 without extensions but prefer C++17. -# Kokkos requires at least C++17 (currently) +# We *require* C++17 without extensions +# Kokkos also requires at least C++17 (currently) if(NOT CMAKE_CXX_STANDARD) - if(cxx_std_17 IN_LIST CMAKE_CXX_COMPILE_FEATURES) +# uncomment in case we plan to switch to C++20 as minimum standard +# if(cxx_std_20 IN_LIST CMAKE_CXX_COMPILE_FEATURES) +# set(CMAKE_CXX_STANDARD 20) +# else() set(CMAKE_CXX_STANDARD 17) - else() - set(CMAKE_CXX_STANDARD 11) - endif() -endif() -if(CMAKE_CXX_STANDARD LESS 11) - message(FATAL_ERROR "C++ standard must be set to at least 11") +# endif() endif() if(CMAKE_CXX_STANDARD LESS 17) - message(WARNING "Selecting C++17 standard is preferred over C++${CMAKE_CXX_STANDARD}") + message(FATAL_ERROR "C++ standard must be set to at least 17") endif() if(PKG_KOKKOS AND (CMAKE_CXX_STANDARD LESS 17)) set(CMAKE_CXX_STANDARD 17) endif() -# turn off C++17 check in lmptype.h -if(LAMMPS_CXX11) - add_compile_definitions(LAMMPS_CXX11) -endif() +# turn off C++20 check in lmptype.h +#if(LAMMPS_CXX17) +# add_compile_definitions(LAMMPS_CXX17) +#endif() set(CMAKE_CXX_STANDARD_REQUIRED ON) set(CMAKE_CXX_EXTENSIONS OFF CACHE BOOL "Use compiler extensions") # ugly hacks for MSVC which by default always reports an old C++ standard in the __cplusplus macro @@ -176,15 +179,9 @@ endif() # warn about potentially problematic GCC compiler versions if(CMAKE_CXX_COMPILER_ID STREQUAL "GNU") if (CMAKE_CXX_STANDARD GREATER_EQUAL 17) - if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 9.0) - message(WARNING "Using ${CMAKE_CXX_COMPILER_ID} compiler version ${CMAKE_CXX_COMPILER_VERSION} " - "with C++17 is not recommended. 
Please use ${CMAKE_CXX_COMPILER_ID} compiler version 9.x or later") - endif() - endif() - if (CMAKE_CXX_STANDARD GREATER_EQUAL 11) - if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 5.0) - message(WARNING "Using ${CMAKE_CXX_COMPILER_ID} compiler version ${CMAKE_CXX_COMPILER_VERSION} " - "with C++11 is not recommended. Please use ${CMAKE_CXX_COMPILER_ID} compiler version 5.x or later") + if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 9.3) + message(WARNING "Using the GNU compilers version ${CMAKE_CXX_COMPILER_VERSION} with C++17 " + "or later is not recommended. Please use the GNU compilers version 9.3 or later") endif() endif() endif() @@ -194,6 +191,10 @@ if((CMAKE_SYSTEM_NAME STREQUAL "Windows") AND BUILD_SHARED_LIBS) set(CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS ON) endif() +# do not include the (obsolete) MPI C++ bindings which makes for leaner object files +# and avoids namespace conflicts. Put this early to increase its visibility. +set(MPI_CXX_SKIP_MPICXX TRUE CACHE BOOL "Skip MPI C++ Bindings" FORCE) + ######################################################################## # User input options # ######################################################################## @@ -231,20 +232,18 @@ option(CMAKE_POSITION_INDEPENDENT_CODE "Create object compatible with shared lib option(BUILD_TOOLS "Build and install LAMMPS tools (msi2lmp, binary2txt, chain)" OFF) option(BUILD_LAMMPS_GUI "Build and install the LAMMPS GUI" OFF) -# Support using clang-tidy for C++ files with selected options -set(ENABLE_CLANG_TIDY OFF CACHE BOOL "Include clang-tidy processing when compiling") -if(ENABLE_CLANG_TIDY) - set(CMAKE_CXX_CLANG_TIDY
"clang-tidy;-checks=-*,performance-trivially-destructible,performance-unnecessary-copy-initialization,performance-unnecessary-value-param,readability-redundant-control-flow,readability-redundant-declaration,readability-redundant-function-ptr-dereference,readability-redundant-member-init,readability-redundant-string-cstr,readability-redundant-string-init,readability-simplify-boolean-expr,readability-static-accessed-through-instance,readability-static-definition-in-anonymous-namespace,readability-qualified-auto,misc-unused-parameters,modernize-deprecated-ios-base-aliases,modernize-loop-convert,modernize-shrink-to-fit,modernize-use-auto,modernize-use-using,modernize-use-override,modernize-use-bool-literals,modernize-use-emplace,modernize-return-braced-init-list,modernize-use-equals-default,modernize-use-equals-delete,modernize-replace-random-shuffle,modernize-deprecated-headers,modernize-use-nullptr,modernize-use-noexcept,modernize-redundant-void-arg;-fix;-header-filter=.*,header-filter=library.h,header-filter=fmt/*.h" CACHE STRING "clang-tidy settings") -else() - unset(CMAKE_CXX_CLANG_TIDY CACHE) -endif() - - file(GLOB ALL_SOURCES CONFIGURE_DEPENDS ${LAMMPS_SOURCE_DIR}/[^.]*.cpp) file(GLOB MAIN_SOURCES CONFIGURE_DEPENDS ${LAMMPS_SOURCE_DIR}/main.cpp) list(REMOVE_ITEM ALL_SOURCES ${MAIN_SOURCES}) add_library(lammps ${ALL_SOURCES}) +# add extra libraries for std::filesystem with older compilers +if(CMAKE_CXX_COMPILER_ID STREQUAL "GNU" AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS 9.1) + target_link_libraries(lammps PRIVATE stdc++fs) +elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Clang" AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS 9.0) + target_link_libraries(lammps PRIVATE c++fs) +endif() + # tell CMake to export all symbols to a .dll on Windows with MinGW cross-compilers if(BUILD_SHARED_LIBS AND (CMAKE_SYSTEM_NAME STREQUAL "Windows") AND CMAKE_CROSSCOMPILING) set_target_properties(lammps PROPERTIES LINK_FLAGS "-Wl,--export-all-symbols") @@ -264,9 +263,8 @@ 
option(CMAKE_VERBOSE_MAKEFILE "Generate verbose Makefiles" OFF) set(STANDARD_PACKAGES ADIOS AMOEBA + APIP ASPHERE - ATC - AWPMD BOCS BODY BPM @@ -327,7 +325,6 @@ set(STANDARD_PACKAGES PHONON PLUGIN PLUMED - POEMS PTM PYTHON QEQ @@ -356,17 +353,6 @@ foreach(PKG ${STANDARD_PACKAGES} ${SUFFIX_PACKAGES}) option(PKG_${PKG} "Build ${PKG} Package" OFF) endforeach() -set(DEPRECATED_PACKAGES AWPMD ATC POEMS) -foreach(PKG ${DEPRECATED_PACKAGES}) - if(PKG_${PKG}) - message(WARNING - "The ${PKG} package will be removed from LAMMPS in Summer 2025 due to lack of " - "maintenance and use of code constructs that conflict with modern C++ compilers " - "and standards. Please contact developers@lammps.org if you have any concerns " - "about this step.") - endif() -endforeach() - ###################################################### # packages with special compiler needs or external libs ###################################################### @@ -377,7 +363,6 @@ if(PKG_ADIOS) # The search for ADIOS2 must come before MPI because # it includes its own MPI search with the latest FindMPI.cmake # script that defines the MPI::MPI_C target - enable_language(C) find_package(ADIOS2 REQUIRED) if(BUILD_MPI) if(NOT ADIOS2_HAVE_MPI) @@ -392,21 +377,18 @@ if(PKG_ADIOS) endif() if(NOT CMAKE_CROSSCOMPILING) - find_package(MPI QUIET) + find_package(MPI QUIET COMPONENTS CXX) option(BUILD_MPI "Build MPI version" ${MPI_FOUND}) else() option(BUILD_MPI "Build MPI version" OFF) endif() if(BUILD_MPI) - # do not include the (obsolete) MPI C++ bindings which makes - # for leaner object files and avoids namespace conflicts - set(MPI_CXX_SKIP_MPICXX TRUE) # We use a non-standard procedure to cross-compile with MPI on Windows if((CMAKE_SYSTEM_NAME STREQUAL "Windows") AND CMAKE_CROSSCOMPILING) include(MPI4WIN) else() - find_package(MPI REQUIRED) + find_package(MPI REQUIRED COMPONENTS CXX) option(LAMMPS_LONGLONG_TO_LONG "Workaround if your system or MPI version does not recognize 'long long' data types" OFF) 
if(LAMMPS_LONGLONG_TO_LONG) target_compile_definitions(lammps PRIVATE -DLAMMPS_LONGLONG_TO_LONG) @@ -455,7 +437,6 @@ endif() # "hard" dependencies between packages resulting # in an error instead of skipping over files pkg_depends(ML-IAP ML-SNAP) -pkg_depends(ATC MANYBODY) pkg_depends(LATBOLTZ MPI) pkg_depends(SCAFACOS MPI) pkg_depends(AMOEBA KSPACE) @@ -467,6 +448,7 @@ pkg_depends(ELECTRODE KSPACE) pkg_depends(EXTRA-MOLECULE MOLECULE) pkg_depends(MESONT MOLECULE) pkg_depends(RHEO BPM) +pkg_depends(APIP ML-PACE) # detect if we may enable OpenMP support by default set(BUILD_OMP_DEFAULT OFF) @@ -532,8 +514,7 @@ if((CMAKE_CXX_COMPILER_ID STREQUAL "Intel") AND (CMAKE_CXX_STANDARD GREATER_EQUA PROPERTIES COMPILE_OPTIONS "-std=c++14") endif() -if(PKG_ATC OR PKG_AWPMD OR PKG_ML-QUIP OR PKG_ML-POD OR PKG_ELECTRODE OR PKG_RHEO OR BUILD_TOOLS) - enable_language(C) +if(PKG_ML-QUIP OR PKG_ML-POD OR PKG_ELECTRODE OR PKG_RHEO OR BUILD_TOOLS) if (NOT USE_INTERNAL_LINALG) find_package(LAPACK) find_package(BLAS) @@ -631,15 +612,6 @@ if(WITH_SWIG) endif() ######################################################################## -# Basic system tests (standard libraries, headers, functions, types) # -######################################################################## -if (NOT ((CMAKE_CXX_COMPILER_ID STREQUAL "Intel") OR (CMAKE_CXX_COMPILER_ID STREQUAL "IntelLLVM"))) - check_include_file_cxx(cmath FOUND_CMATH) - if(NOT FOUND_CMATH) - message(FATAL_ERROR "Could not find the required 'cmath' header") - endif(NOT FOUND_CMATH) -endif() - # make the standard math library overrideable and autodetected (for systems that don't have it) find_library(STANDARD_MATH_LIB m DOC "Standard Math library") mark_as_advanced(STANDARD_MATH_LIB) @@ -703,7 +675,7 @@ endforeach() ############################################## # add lib sources of (simple) enabled packages ############################################ -foreach(PKG_LIB POEMS ATC AWPMD H5MD) +foreach(PKG_LIB H5MD) if(PKG_${PKG_LIB}) 
string(TOLOWER "${PKG_LIB}" PKG_LIB) file(GLOB_RECURSE ${PKG_LIB}_SOURCES CONFIGURE_DEPENDS @@ -711,9 +683,7 @@ foreach(PKG_LIB POEMS ATC AWPMD H5MD) add_library(${PKG_LIB} STATIC ${${PKG_LIB}_SOURCES}) set_target_properties(${PKG_LIB} PROPERTIES OUTPUT_NAME lammps_${PKG_LIB}${LAMMPS_MACHINE}) target_link_libraries(lammps PRIVATE ${PKG_LIB}) - if(PKG_LIB STREQUAL "awpmd") - target_include_directories(awpmd PUBLIC ${LAMMPS_LIB_SOURCE_DIR}/awpmd/systems/interact ${LAMMPS_LIB_SOURCE_DIR}/awpmd/ivutils/include) - elseif(PKG_LIB STREQUAL "h5md") + if(PKG_LIB STREQUAL "h5md") target_include_directories(h5md PUBLIC ${LAMMPS_LIB_SOURCE_DIR}/h5md/include ${HDF5_INCLUDE_DIRS}) else() target_include_directories(${PKG_LIB} PUBLIC ${LAMMPS_LIB_SOURCE_DIR}/${PKG_LIB}) @@ -725,23 +695,6 @@ if(PKG_ELECTRODE OR PKG_ML-POD) target_link_libraries(lammps PRIVATE ${LAPACK_LIBRARIES}) endif() -if(PKG_AWPMD) - target_link_libraries(awpmd PRIVATE ${LAPACK_LIBRARIES}) -endif() - -if(PKG_ATC) - if(LAMMPS_SIZES STREQUAL "BIGBIG") - message(FATAL_ERROR "The ATC Package is not compatible with -DLAMMPS_BIGBIG") - endif() - if(BUILD_MPI) - target_link_libraries(atc PRIVATE ${LAPACK_LIBRARIES} MPI::MPI_CXX) - else() - target_link_libraries(atc PRIVATE ${LAPACK_LIBRARIES} mpi_stubs) - endif() - target_include_directories(atc PRIVATE ${LAMMPS_SOURCE_DIR}) - target_compile_definitions(atc PRIVATE -DLAMMPS_${LAMMPS_SIZES}) -endif() - if(PKG_H5MD) include(Packages/H5MD) endif() diff --git a/cmake/Modules/CodeCoverage.cmake b/cmake/Modules/CodeCoverage.cmake index 885b5cba6d..530a3c6366 100644 --- a/cmake/Modules/CodeCoverage.cmake +++ b/cmake/Modules/CodeCoverage.cmake @@ -30,7 +30,7 @@ if(ENABLE_COVERAGE) add_custom_target( gen_coverage_html - COMMAND ${GCOVR_BINARY} -s --html --html-details -r ${ABSOLUTE_LAMMPS_SOURCE_DIR} --object-directory=${CMAKE_BINARY_DIR} -o ${COVERAGE_HTML_DIR}/index.html + COMMAND ${GCOVR_BINARY} -s --html --html-nested --html-self-contained -r ${ABSOLUTE_LAMMPS_SOURCE_DIR} 
--object-directory=${CMAKE_BINARY_DIR} -o ${COVERAGE_HTML_DIR}/index.html WORKING_DIRECTORY ${CMAKE_BINARY_DIR} COMMENT "Generating HTML coverage report..." ) diff --git a/cmake/Modules/Documentation.cmake b/cmake/Modules/Documentation.cmake index 511d54114c..c0ca3e2778 100644 --- a/cmake/Modules/Documentation.cmake +++ b/cmake/Modules/Documentation.cmake @@ -73,7 +73,7 @@ if(BUILD_DOC) # download mathjax distribution and unpack to folder "mathjax" if(NOT EXISTS ${DOC_BUILD_STATIC_DIR}/mathjax/es5) if(EXISTS ${CMAKE_CURRENT_BINARY_DIR}/mathjax.tar.gz) - file(MD5 ${CMAKE_CURRENT_BINARY_DIR}/mathjax.tar.gz) + file(MD5 ${CMAKE_CURRENT_BINARY_DIR}/mathjax.tar.gz DL_MD5) endif() if(NOT "${DL_MD5}" STREQUAL "${MATHJAX_MD5}") file(DOWNLOAD ${MATHJAX_URL} "${CMAKE_CURRENT_BINARY_DIR}/mathjax.tar.gz" STATUS DL_STATUS SHOW_PROGRESS) diff --git a/cmake/Modules/LAMMPSInterfacePlugin.cmake b/cmake/Modules/LAMMPSInterfacePlugin.cmake index 5b7444f62c..418396fa4d 100644 --- a/cmake/Modules/LAMMPSInterfacePlugin.cmake +++ b/cmake/Modules/LAMMPSInterfacePlugin.cmake @@ -34,26 +34,26 @@ if(MSVC) add_compile_definitions(_CRT_SECURE_NO_WARNINGS) endif() +# We *require* C++17 without extensions +# Kokkos also requires at least C++17 (currently) if(NOT CMAKE_CXX_STANDARD) - if(cxx_std_17 IN_LIST CMAKE_CXX_COMPILE_FEATURES) +# uncomment in case we plan to switch to C++20 as minimum standard +# if(cxx_std_20 IN_LIST CMAKE_CXX_COMPILE_FEATURES) +# set(CMAKE_CXX_STANDARD 20) +# else() set(CMAKE_CXX_STANDARD 17) - else() - set(CMAKE_CXX_STANDARD 11) - endif() -endif() -if(CMAKE_CXX_STANDARD LESS 11) - message(FATAL_ERROR "C++ standard must be set to at least 11") +# endif() endif() if(CMAKE_CXX_STANDARD LESS 17) - message(WARNING "Selecting C++17 standard is preferred over C++${CMAKE_CXX_STANDARD}") + message(FATAL_ERROR "C++ standard must be set to at least 17") endif() if(PKG_KOKKOS AND (CMAKE_CXX_STANDARD LESS 17)) set(CMAKE_CXX_STANDARD 17) endif() -# turn off C++17 check in lmptype.h 
-if(LAMMPS_CXX11) - add_compile_definitions(LAMMPS_CXX11) -endif() +# turn off C++20 check in lmptype.h +#if(LAMMPS_CXX17) +# add_compile_definitions(LAMMPS_CXX17) +#endif() set(CMAKE_CXX_STANDARD_REQUIRED ON) # Need -restrict with Intel compilers @@ -62,6 +62,9 @@ if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel") endif() set(CMAKE_POSITION_INDEPENDENT_CODE TRUE) +# skip over obsolete MPI-2 C++ bindings +set(MPI_CXX_SKIP_MPICXX TRUE) + ####### # helper functions from LAMMPSUtils.cmake function(validate_option name values) @@ -128,8 +131,7 @@ endif() ################################################################################ # MPI configuration if(NOT CMAKE_CROSSCOMPILING) - set(MPI_CXX_SKIP_MPICXX TRUE) - find_package(MPI QUIET) + find_package(MPI QUIET COMPONENTS CXX) option(BUILD_MPI "Build MPI version" ${MPI_FOUND}) else() option(BUILD_MPI "Build MPI version" OFF) @@ -141,78 +143,38 @@ if(BUILD_MPI) set(MPI_CXX_SKIP_MPICXX TRUE) # We use a non-standard procedure to cross-compile with MPI on Windows if((CMAKE_SYSTEM_NAME STREQUAL "Windows") AND CMAKE_CROSSCOMPILING) - # Download and configure MinGW compatible MPICH development files for Windows - option(USE_MSMPI "Use Microsoft's MS-MPI SDK instead of MPICH2-1.4.1" OFF) - if(USE_MSMPI) - message(STATUS "Downloading and configuring MS-MPI 10.1 for Windows cross-compilation") - set(MPICH2_WIN64_DEVEL_URL "${LAMMPS_THIRDPARTY_URL}/msmpi-win64-devel.tar.gz" CACHE STRING "URL for MS-MPI (win64) tarball") - set(MPICH2_WIN64_DEVEL_MD5 "86314daf1bffb809f1fcbefb8a547490" CACHE STRING "MD5 checksum of MS-MPI (win64) tarball") - mark_as_advanced(MPICH2_WIN64_DEVEL_URL) - mark_as_advanced(MPICH2_WIN64_DEVEL_MD5) - - include(ExternalProject) - if(CMAKE_SYSTEM_PROCESSOR STREQUAL "x86_64") - ExternalProject_Add(mpi4win_build - URL ${MPICH2_WIN64_DEVEL_URL} - URL_MD5 ${MPICH2_WIN64_DEVEL_MD5} - CONFIGURE_COMMAND "" BUILD_COMMAND "" INSTALL_COMMAND "" - BUILD_BYPRODUCTS /lib/libmsmpi.a) - else() - message(FATAL_ERROR "Only x86 
64-bit builds are supported with MS-MPI") - endif() - - ExternalProject_get_property(mpi4win_build SOURCE_DIR) - file(MAKE_DIRECTORY "${SOURCE_DIR}/include") - add_library(MPI::MPI_CXX UNKNOWN IMPORTED) - set_target_properties(MPI::MPI_CXX PROPERTIES - IMPORTED_LOCATION "${SOURCE_DIR}/lib/libmsmpi.a" - INTERFACE_INCLUDE_DIRECTORIES "${SOURCE_DIR}/include" - INTERFACE_COMPILE_DEFINITIONS "MPICH_SKIP_MPICXX") - add_dependencies(MPI::MPI_CXX mpi4win_build) - - # set variables for status reporting at the end of CMake run - set(MPI_CXX_INCLUDE_PATH "${SOURCE_DIR}/include") - set(MPI_CXX_COMPILE_DEFINITIONS "MPICH_SKIP_MPICXX") - set(MPI_CXX_LIBRARIES "${SOURCE_DIR}/lib/libmsmpi.a") + message(STATUS "Downloading and configuring MS-MPI 10.1 for Windows cross-compilation") + set(MPICH2_WIN64_DEVEL_URL "${LAMMPS_THIRDPARTY_URL}/msmpi-win64-devel.tar.gz" CACHE STRING "URL for MS-MPI (win64) tarball") + set(MPICH2_WIN64_DEVEL_MD5 "86314daf1bffb809f1fcbefb8a547490" CACHE STRING "MD5 checksum of MS-MPI (win64) tarball") + mark_as_advanced(MPICH2_WIN64_DEVEL_URL) + mark_as_advanced(MPICH2_WIN64_DEVEL_MD5) + + include(ExternalProject) + if(CMAKE_SYSTEM_PROCESSOR STREQUAL "x86_64") + ExternalProject_Add(mpi4win_build + URL ${MPICH2_WIN64_DEVEL_URL} + URL_MD5 ${MPICH2_WIN64_DEVEL_MD5} + CONFIGURE_COMMAND "" BUILD_COMMAND "" INSTALL_COMMAND "" + BUILD_BYPRODUCTS /lib/libmsmpi.a) else() - # Download and configure custom MPICH files for Windows - message(STATUS "Downloading and configuring MPICH-1.4.1 for Windows") - set(MPICH2_WIN64_DEVEL_URL "${LAMMPS_THIRDPARTY_URL}/mpich2-win64-devel.tar.gz" CACHE STRING "URL for MPICH2 (win64) tarball") - set(MPICH2_WIN64_DEVEL_MD5 "4939fdb59d13182fd5dd65211e469f14" CACHE STRING "MD5 checksum of MPICH2 (win64) tarball") - mark_as_advanced(MPICH2_WIN64_DEVEL_URL) - mark_as_advanced(MPICH2_WIN64_DEVEL_MD5) - - include(ExternalProject) - if(CMAKE_SYSTEM_PROCESSOR STREQUAL "x86_64") - ExternalProject_Add(mpi4win_build - URL ${MPICH2_WIN64_DEVEL_URL} 
- URL_MD5 ${MPICH2_WIN64_DEVEL_MD5} - CONFIGURE_COMMAND "" BUILD_COMMAND "" INSTALL_COMMAND "" - BUILD_BYPRODUCTS /lib/libmpi.a) - else() - ExternalProject_Add(mpi4win_build - URL ${MPICH2_WIN32_DEVEL_URL} - URL_MD5 ${MPICH2_WIN32_DEVEL_MD5} - CONFIGURE_COMMAND "" BUILD_COMMAND "" INSTALL_COMMAND "" - BUILD_BYPRODUCTS /lib/libmpi.a) - endif() - - ExternalProject_get_property(mpi4win_build SOURCE_DIR) - file(MAKE_DIRECTORY "${SOURCE_DIR}/include") - add_library(MPI::MPI_CXX UNKNOWN IMPORTED) - set_target_properties(MPI::MPI_CXX PROPERTIES - IMPORTED_LOCATION "${SOURCE_DIR}/lib/libmpi.a" - INTERFACE_INCLUDE_DIRECTORIES "${SOURCE_DIR}/include" - INTERFACE_COMPILE_DEFINITIONS "MPICH_SKIP_MPICXX") - add_dependencies(MPI::MPI_CXX mpi4win_build) - - # set variables for status reporting at the end of CMake run - set(MPI_CXX_INCLUDE_PATH "${SOURCE_DIR}/include") - set(MPI_CXX_COMPILE_DEFINITIONS "MPICH_SKIP_MPICXX") - set(MPI_CXX_LIBRARIES "${SOURCE_DIR}/lib/libmpi.a") + message(FATAL_ERROR "Only x86 64-bit builds are supported with MS-MPI") endif() + + ExternalProject_get_property(mpi4win_build SOURCE_DIR) + file(MAKE_DIRECTORY "${SOURCE_DIR}/include") + add_library(MPI::MPI_CXX UNKNOWN IMPORTED) + set_target_properties(MPI::MPI_CXX PROPERTIES + IMPORTED_LOCATION "${SOURCE_DIR}/lib/libmsmpi.a" + INTERFACE_INCLUDE_DIRECTORIES "${SOURCE_DIR}/include" + INTERFACE_COMPILE_DEFINITIONS "MPICH_SKIP_MPICXX=1") + add_dependencies(MPI::MPI_CXX mpi4win_build) + + # set variables for status reporting at the end of CMake run + set(MPI_CXX_INCLUDE_PATH "${SOURCE_DIR}/include") + set(MPI_CXX_COMPILE_DEFINITIONS "MPICH_SKIP_MPICXX=1") + set(MPI_CXX_LIBRARIES "${SOURCE_DIR}/lib/libmsmpi.a") else() - find_package(MPI REQUIRED) + find_package(MPI REQUIRED COMPONENTS CXX) option(LAMMPS_LONGLONG_TO_LONG "Workaround if your system or MPI version does not recognize 'long long' data types" OFF) if(LAMMPS_LONGLONG_TO_LONG) target_compile_definitions(lammps INTERFACE -DLAMMPS_LONGLONG_TO_LONG) diff 
--git a/cmake/Modules/LAMMPSUtils.cmake b/cmake/Modules/LAMMPSUtils.cmake index 4675788647..93f541f921 100644 --- a/cmake/Modules/LAMMPSUtils.cmake +++ b/cmake/Modules/LAMMPSUtils.cmake @@ -75,13 +75,25 @@ function(get_lammps_version version_header variable) list(FIND MONTHS "${month}" month) string(LENGTH ${day} day_length) string(LENGTH ${month} month_length) - if(day_length EQUAL 1) - set(day "0${day}") + # no leading zero needed for new version string with dots + # if(day_length EQUAL 1) + # set(day "0${day}") + # endif() + # if(month_length EQUAL 1) + # set(month "0${month}") + #endif() + file(STRINGS ${version_header} line REGEX LAMMPS_UPDATE) + string(REGEX REPLACE "#define LAMMPS_UPDATE \"Update ([0-9]+)\"" "\\1" tweak "${line}") + if (line MATCHES "#define LAMMPS_UPDATE \"(Maintenance|Development)\"") + set(tweak "99") endif() - if(month_length EQUAL 1) - set(month "0${month}") + if(NOT tweak) + set(tweak "0") endif() - set(${variable} "${year}${month}${day}" PARENT_SCOPE) + # new version string with dots + set(${variable} "${year}.${month}.${day}.${tweak}" PARENT_SCOPE) + # old version string without dots + # set(${variable} "${year}${month}${day}" PARENT_SCOPE) endfunction() function(check_for_autogen_files source_dir) diff --git a/cmake/Modules/MPI4WIN.cmake b/cmake/Modules/MPI4WIN.cmake index 02db6d4744..cd48ab279e 100644 --- a/cmake/Modules/MPI4WIN.cmake +++ b/cmake/Modules/MPI4WIN.cmake @@ -1,74 +1,31 @@ -# Download and configure MinGW compatible MPICH development files for Windows -option(USE_MSMPI "Use Microsoft's MS-MPI SDK instead of MPICH2-1.4.1" OFF) +# set-up MS-MPI library for Windows with MinGW compatibility +message(STATUS "Downloading and configuring MS-MPI 10.1 for Windows cross-compilation") +set(MPICH2_WIN64_DEVEL_URL "${LAMMPS_THIRDPARTY_URL}/msmpi-win64-devel.tar.gz" CACHE STRING "URL for MS-MPI (win64) tarball") +set(MPICH2_WIN64_DEVEL_MD5 "86314daf1bffb809f1fcbefb8a547490" CACHE STRING "MD5 checksum of MS-MPI (win64) tarball") 
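The LAMMPSUtils.cmake hunk above switches `get_lammps_version()` from the old zero-padded `YYYYMMDD` string to a dotted `year.month.day.tweak` form, with the tweak taken from `LAMMPS_UPDATE`. A standalone sketch of that tweak extraction (the `line` value here is an assumed example, not read from a real `version.h`):

```cmake
# Sketch only: 'line' is a hard-coded sample of a LAMMPS_UPDATE define.
set(line "#define LAMMPS_UPDATE \"Update 2\"")
string(REGEX REPLACE "#define LAMMPS_UPDATE \"Update ([0-9]+)\"" "\\1" tweak "${line}")
if(line MATCHES "#define LAMMPS_UPDATE \"(Maintenance|Development)\"")
  set(tweak "99")   # development/maintenance snapshots sort after all updates
endif()
if(NOT tweak)
  set(tweak "0")    # a stable release without an update suffix
endif()
# dotted form allows component-wise VERSION_GREATER/LESS comparisons in CMake
set(version "${year}.${month}.${day}.${tweak}")
```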
+mark_as_advanced(MPICH2_WIN64_DEVEL_URL) +mark_as_advanced(MPICH2_WIN64_DEVEL_MD5) -if(USE_MSMPI) - message(STATUS "Downloading and configuring MS-MPI 10.1 for Windows cross-compilation") - set(MPICH2_WIN64_DEVEL_URL "${LAMMPS_THIRDPARTY_URL}/msmpi-win64-devel.tar.gz" CACHE STRING "URL for MS-MPI (win64) tarball") - set(MPICH2_WIN64_DEVEL_MD5 "86314daf1bffb809f1fcbefb8a547490" CACHE STRING "MD5 checksum of MS-MPI (win64) tarball") - mark_as_advanced(MPICH2_WIN64_DEVEL_URL) - mark_as_advanced(MPICH2_WIN64_DEVEL_MD5) - - include(ExternalProject) - if(CMAKE_SYSTEM_PROCESSOR STREQUAL "x86_64") - ExternalProject_Add(mpi4win_build - URL ${MPICH2_WIN64_DEVEL_URL} - URL_MD5 ${MPICH2_WIN64_DEVEL_MD5} - CONFIGURE_COMMAND "" BUILD_COMMAND "" INSTALL_COMMAND "" - BUILD_BYPRODUCTS /lib/libmsmpi.a) - else() - message(FATAL_ERROR "Only x86 64-bit builds are supported with MS-MPI") - endif() - - ExternalProject_get_property(mpi4win_build SOURCE_DIR) - file(MAKE_DIRECTORY "${SOURCE_DIR}/include") - add_library(MPI::MPI_CXX UNKNOWN IMPORTED) - set_target_properties(MPI::MPI_CXX PROPERTIES - IMPORTED_LOCATION "${SOURCE_DIR}/lib/libmsmpi.a" - INTERFACE_INCLUDE_DIRECTORIES "${SOURCE_DIR}/include" - INTERFACE_COMPILE_DEFINITIONS "MPICH_SKIP_MPICXX") - add_dependencies(MPI::MPI_CXX mpi4win_build) - - # set variables for status reporting at the end of CMake run - set(MPI_CXX_INCLUDE_PATH "${SOURCE_DIR}/include") - set(MPI_CXX_COMPILE_DEFINITIONS "MPICH_SKIP_MPICXX") - set(MPI_CXX_LIBRARIES "${SOURCE_DIR}/lib/libmsmpi.a") +include(ExternalProject) +if(CMAKE_SYSTEM_PROCESSOR STREQUAL "x86_64") + ExternalProject_Add(mpi4win_build + URL ${MPICH2_WIN64_DEVEL_URL} + URL_MD5 ${MPICH2_WIN64_DEVEL_MD5} + CONFIGURE_COMMAND "" BUILD_COMMAND "" INSTALL_COMMAND "" + BUILD_BYPRODUCTS /lib/libmsmpi.a) else() - message(STATUS "Downloading and configuring MPICH2-1.4.1 for Windows cross-compilation") - set(MPICH2_WIN64_DEVEL_URL "${LAMMPS_THIRDPARTY_URL}/mpich2-win64-devel.tar.gz" CACHE STRING "URL for 
MPICH2 (win64) tarball") - set(MPICH2_WIN32_DEVEL_URL "${LAMMPS_THIRDPARTY_URL}/mpich2-win32-devel.tar.gz" CACHE STRING "URL for MPICH2 (win32) tarball") - set(MPICH2_WIN64_DEVEL_MD5 "4939fdb59d13182fd5dd65211e469f14" CACHE STRING "MD5 checksum of MPICH2 (win64) tarball") - set(MPICH2_WIN32_DEVEL_MD5 "a61d153500dce44e21b755ee7257e031" CACHE STRING "MD5 checksum of MPICH2 (win32) tarball") - mark_as_advanced(MPICH2_WIN64_DEVEL_URL) - mark_as_advanced(MPICH2_WIN32_DEVEL_URL) - mark_as_advanced(MPICH2_WIN64_DEVEL_MD5) - mark_as_advanced(MPICH2_WIN32_DEVEL_MD5) - - include(ExternalProject) - if(CMAKE_SYSTEM_PROCESSOR STREQUAL "x86_64") - ExternalProject_Add(mpi4win_build - URL ${MPICH2_WIN64_DEVEL_URL} - URL_MD5 ${MPICH2_WIN64_DEVEL_MD5} - CONFIGURE_COMMAND "" BUILD_COMMAND "" INSTALL_COMMAND "" - BUILD_BYPRODUCTS /lib/libmpi.a) - else() - ExternalProject_Add(mpi4win_build - URL ${MPICH2_WIN32_DEVEL_URL} - URL_MD5 ${MPICH2_WIN32_DEVEL_MD5} - CONFIGURE_COMMAND "" BUILD_COMMAND "" INSTALL_COMMAND "" - BUILD_BYPRODUCTS /lib/libmpi.a) - endif() + message(FATAL_ERROR "Only x86 64-bit builds are supported with MS-MPI") +endif() - ExternalProject_get_property(mpi4win_build SOURCE_DIR) - file(MAKE_DIRECTORY "${SOURCE_DIR}/include") - add_library(MPI::MPI_CXX UNKNOWN IMPORTED) - set_target_properties(MPI::MPI_CXX PROPERTIES - IMPORTED_LOCATION "${SOURCE_DIR}/lib/libmpi.a" - INTERFACE_INCLUDE_DIRECTORIES "${SOURCE_DIR}/include" - INTERFACE_COMPILE_DEFINITIONS "MPICH_SKIP_MPICXX") - add_dependencies(MPI::MPI_CXX mpi4win_build) +ExternalProject_get_property(mpi4win_build SOURCE_DIR) +file(MAKE_DIRECTORY "${SOURCE_DIR}/include") +add_library(MPI::MPI_CXX UNKNOWN IMPORTED) +set_target_properties(MPI::MPI_CXX PROPERTIES + IMPORTED_LOCATION "${SOURCE_DIR}/lib/libmsmpi.a" + INTERFACE_INCLUDE_DIRECTORIES "${SOURCE_DIR}/include" + INTERFACE_COMPILE_DEFINITIONS "MPICH_SKIP_MPICXX=1") +add_dependencies(MPI::MPI_CXX mpi4win_build) - # set variables for status reporting at the end of CMake 
run - set(MPI_CXX_INCLUDE_PATH "${SOURCE_DIR}/include") - set(MPI_CXX_COMPILE_DEFINITIONS "MPICH_SKIP_MPICXX") - set(MPI_CXX_LIBRARIES "${SOURCE_DIR}/lib/libmpi.a") -endif() +# set variables for status reporting at the end of CMake run +set(MPI_CXX_INCLUDE_PATH "${SOURCE_DIR}/include") +set(MPI_CXX_COMPILE_DEFINITIONS "MPICH_SKIP_MPICXX=1") +set(MPI_CXX_LIBRARIES "${SOURCE_DIR}/lib/libmsmpi.a") diff --git a/cmake/Modules/OpenCLUtils.cmake b/cmake/Modules/OpenCLUtils.cmake index eb17da0b3d..352c8c50b0 100644 --- a/cmake/Modules/OpenCLUtils.cmake +++ b/cmake/Modules/OpenCLUtils.cmake @@ -3,6 +3,8 @@ function(WriteOpenCLHeader varname outfile files) separate_arguments(files) foreach(filename ${files}) + # In case ${filename} would have blanks, CMake will have replaced them with ';'. Revert: + string(REPLACE ";" " " filename "${filename}") file(READ ${filename} content) string(REGEX REPLACE "\\s*//[^\n]*\n" "\n" content "${content}") string(REGEX REPLACE "\\\\" "\\\\\\\\" content "${content}") diff --git a/cmake/Modules/Packages/COLVARS.cmake b/cmake/Modules/Packages/COLVARS.cmake index 8fa0d84f01..b4dc738626 100644 --- a/cmake/Modules/Packages/COLVARS.cmake +++ b/cmake/Modules/Packages/COLVARS.cmake @@ -26,6 +26,11 @@ if(BUILD_OMP) target_link_libraries(colvars PRIVATE OpenMP::OpenMP_CXX) endif() +if(BUILD_MPI) + target_compile_definitions(colvars PUBLIC -DCOLVARS_MPI) + target_link_libraries(colvars PUBLIC MPI::MPI_CXX) +endif() + if(COLVARS_DEBUG) # Need to export the define publicly to be valid in interface code target_compile_definitions(colvars PUBLIC -DCOLVARS_DEBUG) diff --git a/cmake/Modules/Packages/GPU.cmake b/cmake/Modules/Packages/GPU.cmake index 6d0ce303a5..592b7eff2a 100644 --- a/cmake/Modules/Packages/GPU.cmake +++ b/cmake/Modules/Packages/GPU.cmake @@ -74,7 +74,7 @@ if(GPU_API STREQUAL "CUDA") option(CUDA_BUILD_MULTIARCH "Enable building CUDA kernels for all supported GPU architectures" ON) mark_as_advanced(GPU_BUILD_MULTIARCH) - set(GPU_ARCH "sm_50"
CACHE STRING "LAMMPS GPU CUDA SM primary architecture (e.g. sm_60)") + set(GPU_ARCH "sm_75" CACHE STRING "LAMMPS GPU CUDA SM primary architecture (e.g. sm_80)") # ensure that no *cubin.h files exist from a compile in the lib/gpu folder file(GLOB GPU_LIB_OLD_CUBIN_HEADERS CONFIGURE_DEPENDS ${LAMMPS_LIB_SOURCE_DIR}/gpu/*_cubin.h) @@ -150,10 +150,7 @@ if(GPU_API STREQUAL "CUDA") if(CUDA_VERSION VERSION_GREATER_EQUAL "11.8") string(APPEND GPU_CUDA_GENCODE " -gencode arch=compute_90,code=[sm_90,compute_90]") endif() - # Hopper (GPU Arch 9.0) is supported by CUDA 12.0 and later - if(CUDA_VERSION VERSION_GREATER_EQUAL "12.0") - string(APPEND GPU_CUDA_GENCODE " -gencode arch=compute_90,code=[sm_90,compute_90]") - endif() + # newer GPU Arch versions require CUDA 12.0 or later which is handled above endif() endif() @@ -189,7 +186,7 @@ if(GPU_API STREQUAL "CUDA") endif() add_executable(nvc_get_devices ${LAMMPS_LIB_SOURCE_DIR}/gpu/geryon/ucl_get_devices.cpp) - target_compile_definitions(nvc_get_devices PRIVATE -DUCL_CUDADR) + target_compile_definitions(nvc_get_devices PRIVATE -DUCL_CUDADR -DLAMMPS_${LAMMPS_SIZES}) target_link_libraries(nvc_get_devices PRIVATE ${CUDA_LIBRARIES} ${CUDA_CUDA_LIBRARY}) target_include_directories(nvc_get_devices PRIVATE ${CUDA_INCLUDE_DIRS}) @@ -287,7 +284,7 @@ elseif(GPU_API STREQUAL "HIP") set(HIP_ARCH "spirv" CACHE STRING "HIP target architecture") elseif(HIP_PLATFORM STREQUAL "nvcc") find_package(CUDA REQUIRED) - set(HIP_ARCH "sm_50" CACHE STRING "HIP primary CUDA architecture (e.g. sm_60)") + set(HIP_ARCH "sm_75" CACHE STRING "HIP primary CUDA architecture (e.g. 
sm_75)") if(CUDA_VERSION VERSION_LESS 8.0) message(FATAL_ERROR "CUDA Toolkit version 8.0 or later is required") @@ -335,10 +332,7 @@ elseif(GPU_API STREQUAL "HIP") if(CUDA_VERSION VERSION_GREATER_EQUAL "11.8") string(APPEND HIP_CUDA_GENCODE " -gencode arch=compute_90,code=[sm_90,compute_90]") endif() - # Hopper (GPU Arch 9.0) is supported by CUDA 12.0 and later - if(CUDA_VERSION VERSION_GREATER_EQUAL "12.0") - string(APPEND HIP_CUDA_GENCODE " -gencode arch=compute_90,code=[sm_90,compute_90]") - endif() + # newer GPU Arch versions require CUDA 12.0 or later which is handled above endif() endif() @@ -489,7 +483,7 @@ else() target_link_libraries(gpu PRIVATE mpi_stubs) endif() -target_compile_definitions(gpu PRIVATE -DLAMMPS_${LAMMPS_SIZES}) set_target_properties(gpu PROPERTIES OUTPUT_NAME lammps_gpu${LAMMPS_MACHINE}) +target_compile_definitions(gpu PRIVATE -DLAMMPS_${LAMMPS_SIZES}) target_sources(lammps PRIVATE ${GPU_SOURCES}) target_include_directories(lammps PRIVATE ${GPU_SOURCES_DIR}) diff --git a/cmake/Modules/Packages/INTEL.cmake b/cmake/Modules/Packages/INTEL.cmake index 6fb1c57e8a..467c183107 100644 --- a/cmake/Modules/Packages/INTEL.cmake +++ b/cmake/Modules/Packages/INTEL.cmake @@ -13,11 +13,11 @@ string(TOUPPER ${INTEL_ARCH} INTEL_ARCH) find_package(Threads QUIET) if(Threads_FOUND) - set(INTEL_LRT_MODE "threads" CACHE STRING "Long-range threads mode (none, threads, or c++11)") + set(INTEL_LRT_MODE "threads" CACHE STRING "Long-range threads mode (none, threads, or c++17)") else() - set(INTEL_LRT_MODE "none" CACHE STRING "Long-range threads mode (none, threads, or c++11)") + set(INTEL_LRT_MODE "none" CACHE STRING "Long-range threads mode (none, threads, or c++17)") endif() -set(INTEL_LRT_VALUES none threads c++11) +set(INTEL_LRT_VALUES none threads c++17) set_property(CACHE INTEL_LRT_MODE PROPERTY STRINGS ${INTEL_LRT_VALUES}) validate_option(INTEL_LRT_MODE INTEL_LRT_VALUES) string(TOUPPER ${INTEL_LRT_MODE} INTEL_LRT_MODE) @@ -29,9 +29,9 @@ if(INTEL_LRT_MODE 
STREQUAL "THREADS") message(FATAL_ERROR "Must have working threads library for Long-range thread support") endif() endif() -if(INTEL_LRT_MODE STREQUAL "C++11") +if(INTEL_LRT_MODE STREQUAL "C++17") if(Threads_FOUND) - target_compile_definitions(lammps PRIVATE -DLMP_INTEL_USELRT -DLMP_INTEL_LRT11) + target_compile_definitions(lammps PRIVATE -DLMP_INTEL_USELRT -DLMP_INTEL_LRT17) target_link_libraries(lammps PRIVATE Threads::Threads) else() message(FATAL_ERROR "Must have working threads library for Long-range thread support") diff --git a/cmake/Modules/Packages/KOKKOS.cmake b/cmake/Modules/Packages/KOKKOS.cmake index f878db654c..293a6b2e0e 100644 --- a/cmake/Modules/Packages/KOKKOS.cmake +++ b/cmake/Modules/Packages/KOKKOS.cmake @@ -5,6 +5,37 @@ if(CMAKE_CXX_STANDARD LESS 17) be set to at least C++17") endif() +# Set Kokkos Precision +set(KOKKOS_PREC "double" CACHE STRING "LAMMPS KOKKOS precision") +set(KOKKOS_PREC_VALUES double mixed single) +set_property(CACHE KOKKOS_PREC PROPERTY STRINGS ${KOKKOS_PREC_VALUES}) +validate_option(KOKKOS_PREC KOKKOS_PREC_VALUES) +string(TOLOWER ${KOKKOS_PREC} KOKKOS_PREC_LOWER) +string(TOUPPER ${KOKKOS_PREC} KOKKOS_PREC) + +if(KOKKOS_PREC STREQUAL "DOUBLE") + set(KOKKOS_PREC_SETTING "DOUBLE_DOUBLE") +elseif(KOKKOS_PREC STREQUAL "MIXED") + set(KOKKOS_PREC_SETTING "SINGLE_DOUBLE") +elseif(KOKKOS_PREC STREQUAL "SINGLE") + set(KOKKOS_PREC_SETTING "SINGLE_SINGLE") +endif() + +target_compile_definitions(lammps PRIVATE -DLMP_KOKKOS_${KOKKOS_PREC_SETTING}) + +# Set Kokkos View Layout +set(KOKKOS_LAYOUT "legacy" CACHE STRING "LAMMPS KOKKOS view layout") +set(KOKKOS_LAYOUT_VALUES legacy default) +set_property(CACHE KOKKOS_LAYOUT PROPERTY STRINGS ${KOKKOS_LAYOUT_VALUES}) +validate_option(KOKKOS_LAYOUT KOKKOS_LAYOUT_VALUES) +string(TOLOWER ${KOKKOS_LAYOUT} KOKKOS_LAYOUT_LOWER) +string(TOUPPER ${KOKKOS_LAYOUT} KOKKOS_LAYOUT) + +target_compile_definitions(lammps PRIVATE -DLMP_KOKKOS_LAYOUT_${KOKKOS_LAYOUT}) + +message(STATUS "Using " 
${KOKKOS_PREC_LOWER} " precision for KOKKOS package") +message(STATUS "Using " ${KOKKOS_LAYOUT_LOWER} " view layout for KOKKOS package") + ######################################################################## # consistency checks and Kokkos options/settings required by LAMMPS if(Kokkos_ENABLE_HIP) @@ -57,8 +88,8 @@ if(DOWNLOAD_KOKKOS) list(APPEND KOKKOS_LIB_BUILD_ARGS "-DCMAKE_CXX_EXTENSIONS=${CMAKE_CXX_EXTENSIONS}") list(APPEND KOKKOS_LIB_BUILD_ARGS "-DCMAKE_TOOLCHAIN_FILE=${CMAKE_TOOLCHAIN_FILE}") include(ExternalProject) - set(KOKKOS_URL "https://github.com/kokkos/kokkos/archive/4.6.00.tar.gz" CACHE STRING "URL for KOKKOS tarball") - set(KOKKOS_MD5 "61b2b69ae50d83eedcc7d47a3fa3d6cb" CACHE STRING "MD5 checksum of KOKKOS tarball") + set(KOKKOS_URL "https://github.com/kokkos/kokkos/archive/4.6.02.tar.gz" CACHE STRING "URL for KOKKOS tarball") + set(KOKKOS_MD5 "14c02fac07bfcec48a1654f88ddee9c6" CACHE STRING "MD5 checksum of KOKKOS tarball") mark_as_advanced(KOKKOS_URL) mark_as_advanced(KOKKOS_MD5) GetFallbackURL(KOKKOS_URL KOKKOS_FALLBACK) @@ -83,7 +114,7 @@ if(DOWNLOAD_KOKKOS) add_dependencies(LAMMPS::KOKKOSCORE kokkos_build) add_dependencies(LAMMPS::KOKKOSCONTAINERS kokkos_build) elseif(EXTERNAL_KOKKOS) - find_package(Kokkos 4.6.00 REQUIRED CONFIG) + find_package(Kokkos 4.6.02 REQUIRED CONFIG) target_link_libraries(lammps PRIVATE Kokkos::kokkos) else() set(LAMMPS_LIB_KOKKOS_SRC_DIR ${LAMMPS_LIB_SOURCE_DIR}/kokkos) @@ -180,6 +211,14 @@ if(PKG_KSPACE) endif() if(PKG_ML-IAP) + if(NOT (KOKKOS_PREC STREQUAL "DOUBLE")) + message(FATAL_ERROR "Must use KOKKOS_PREC=double with package ML-IAP") + endif() + + if(NOT (KOKKOS_LAYOUT STREQUAL "LEGACY")) + message(FATAL_ERROR "Must use KOKKOS_LAYOUT=legacy with package ML-IAP") + endif() + list(APPEND KOKKOS_PKG_SOURCES ${KOKKOS_PKG_SOURCES_DIR}/mliap_data_kokkos.cpp ${KOKKOS_PKG_SOURCES_DIR}/mliap_descriptor_so3_kokkos.cpp ${KOKKOS_PKG_SOURCES_DIR}/mliap_model_linear_kokkos.cpp diff --git a/cmake/Modules/Packages/MC.cmake 
b/cmake/Modules/Packages/MC.cmake index f162254558..a39a630da3 100644 --- a/cmake/Modules/Packages/MC.cmake +++ b/cmake/Modules/Packages/MC.cmake @@ -7,3 +7,23 @@ if(NOT PKG_MANYBODY) list(REMOVE_ITEM LAMMPS_SOURCES ${LAMMPS_SOURCE_DIR}/MC/fix_sgcmc.cpp) set_property(TARGET lammps PROPERTY SOURCES "${LAMMPS_SOURCES}") endif() + +# fix hmc may only be installed if also fix rigid/small from RIGID is installed +if(NOT PKG_RIGID) + get_property(LAMMPS_FIX_HEADERS GLOBAL PROPERTY FIX) + list(REMOVE_ITEM LAMMPS_FIX_HEADERS ${LAMMPS_SOURCE_DIR}/MC/fix_hmc.h) + set_property(GLOBAL PROPERTY FIX "${LAMMPS_FIX_HEADERS}") + get_target_property(LAMMPS_SOURCES lammps SOURCES) + list(REMOVE_ITEM LAMMPS_SOURCES ${LAMMPS_SOURCE_DIR}/MC/fix_hmc.cpp) + set_property(TARGET lammps PROPERTY SOURCES "${LAMMPS_SOURCES}") +endif() + +# fix neighbor/swap may only be installed if also the VORONOI package is installed +if(NOT PKG_VORONOI) + get_property(LAMMPS_FIX_HEADERS GLOBAL PROPERTY FIX) + list(REMOVE_ITEM LAMMPS_FIX_HEADERS ${LAMMPS_SOURCE_DIR}/MC/fix_neighbor_swap.h) + set_property(GLOBAL PROPERTY FIX "${LAMMPS_FIX_HEADERS}") + get_target_property(LAMMPS_SOURCES lammps SOURCES) + list(REMOVE_ITEM LAMMPS_SOURCES ${LAMMPS_SOURCE_DIR}/MC/fix_neighbor_swap.cpp) + set_property(TARGET lammps PROPERTY SOURCES "${LAMMPS_SOURCES}") +endif() diff --git a/cmake/Modules/Packages/ML-PACE.cmake b/cmake/Modules/Packages/ML-PACE.cmake index b30c61b8e4..7d3d1a452e 100644 --- a/cmake/Modules/Packages/ML-PACE.cmake +++ b/cmake/Modules/Packages/ML-PACE.cmake @@ -53,7 +53,13 @@ else() add_library(yaml-cpp::yaml-cpp ALIAS yaml-cpp) endif() - add_subdirectory(${lib-pace} build-pace) + # fixup yaml-cpp/emitterutils.cpp for GCC 15+ until patch is applied + file(READ ${lib-pace}/yaml-cpp/src/emitterutils.cpp yaml_emitterutils) + string(REPLACE "#include " "#include \n#include " yaml_tmp_emitterutils "${yaml_emitterutils}") + string(REPLACE "#include \n#include " "#include " yaml_emitterutils 
"${yaml_tmp_emitterutils}") + file(WRITE ${lib-pace}/yaml-cpp/src/emitterutils.cpp "${yaml_emitterutils}") + + add_subdirectory(${lib-pace} build-pace EXCLUDE_FROM_ALL) set_target_properties(pace PROPERTIES CXX_EXTENSIONS ON OUTPUT_NAME lammps_pace${LAMMPS_MACHINE}) if(CMAKE_PROJECT_NAME STREQUAL "lammps") diff --git a/cmake/Modules/Packages/PLUMED.cmake b/cmake/Modules/Packages/PLUMED.cmake index 1b4845d259..606fe2174b 100644 --- a/cmake/Modules/Packages/PLUMED.cmake +++ b/cmake/Modules/Packages/PLUMED.cmake @@ -32,9 +32,9 @@ endif() # Note: must also adjust check for supported API versions in # fix_plumed.cpp when version changes from v2.n.x to v2.n+1.y -set(PLUMED_URL "https://github.com/plumed/plumed2/releases/download/v2.9.3/plumed-src-2.9.3.tgz" +set(PLUMED_URL "https://github.com/plumed/plumed2/releases/download/v2.9.4/plumed-src-2.9.4.tgz" CACHE STRING "URL for PLUMED tarball") -set(PLUMED_MD5 "ee1249805fe94bccee17d10610d3f6f1" CACHE STRING "MD5 checksum of PLUMED tarball") +set(PLUMED_MD5 "e540bf5132e3270e843398a6080d00c7" CACHE STRING "MD5 checksum of PLUMED tarball") mark_as_advanced(PLUMED_URL) mark_as_advanced(PLUMED_MD5) diff --git a/cmake/Modules/Packages/SCAFACOS.cmake b/cmake/Modules/Packages/SCAFACOS.cmake index 9a5580163f..2905a207b0 100644 --- a/cmake/Modules/Packages/SCAFACOS.cmake +++ b/cmake/Modules/Packages/SCAFACOS.cmake @@ -14,27 +14,16 @@ endif() option(DOWNLOAD_SCAFACOS "Download ScaFaCoS library instead of using an already installed one" ${DOWNLOAD_SCAFACOS_DEFAULT}) if(DOWNLOAD_SCAFACOS) message(STATUS "ScaFaCoS download requested - we will build our own") - set(SCAFACOS_URL "https://github.com/scafacos/scafacos/releases/download/v1.0.1/scafacos-1.0.1.tar.gz" CACHE STRING "URL for SCAFACOS tarball") - set(SCAFACOS_MD5 "bd46d74e3296bd8a444d731bb10c1738" CACHE STRING "MD5 checksum of SCAFACOS tarball") + set(SCAFACOS_URL "https://github.com/scafacos/scafacos/releases/download/v1.0.4/scafacos-1.0.4.tar.gz" CACHE STRING "URL for SCAFACOS 
tarball") + set(SCAFACOS_MD5 "23867540ec32e63ce71d6ecc105278d2" CACHE STRING "MD5 checksum of SCAFACOS tarball") mark_as_advanced(SCAFACOS_URL) mark_as_advanced(SCAFACOS_MD5) GetFallbackURL(SCAFACOS_URL SCAFACOS_FALLBACK) - - # version 1.0.1 needs a patch to compile and linke cleanly with GCC 10 and later. - file(DOWNLOAD ${LAMMPS_THIRDPARTY_URL}/scafacos-1.0.1-fix.diff ${CMAKE_CURRENT_BINARY_DIR}/scafacos-1.0.1.fix.diff - EXPECTED_HASH MD5=4baa1333bb28fcce102d505e1992d032) - - find_program(HAVE_PATCH patch) - if(NOT HAVE_PATCH) - message(FATAL_ERROR "The 'patch' program is required to build the ScaFaCoS library") - endif() - include(ExternalProject) ExternalProject_Add(scafacos_build URL ${SCAFACOS_URL} ${SCAFACOS_FALLBACK} URL_MD5 ${SCAFACOS_MD5} - PATCH_COMMAND patch -p1 < ${CMAKE_CURRENT_BINARY_DIR}/scafacos-1.0.1.fix.diff CONFIGURE_COMMAND /configure --prefix= --disable-doc --enable-fcs-solvers=fmm,p2nfft,direct,ewald,p3m --with-internal-fftw --with-internal-pfft diff --git a/cmake/Modules/Packages/VORONOI.cmake b/cmake/Modules/Packages/VORONOI.cmake index cbc350340f..3a03a1b826 100644 --- a/cmake/Modules/Packages/VORONOI.cmake +++ b/cmake/Modules/Packages/VORONOI.cmake @@ -34,7 +34,7 @@ if(DOWNLOAD_VORO) ExternalProject_Add(voro_build URL ${VORO_URL} URL_MD5 ${VORO_MD5} - PATCH_COMMAND patch -b -p0 < ${LAMMPS_LIB_SOURCE_DIR}/voronoi/voro-make.patch + PATCH_COMMAND patch -b -p0 < ${LAMMPS_DIR}/cmake/patches/voro-make.patch CONFIGURE_COMMAND "" BUILD_COMMAND make ${VORO_BUILD_OPTIONS} BUILD_IN_SOURCE 1 diff --git a/cmake/Modules/Testing.cmake b/cmake/Modules/Testing.cmake index a72ce17e1b..7aa3506642 100644 --- a/cmake/Modules/Testing.cmake +++ b/cmake/Modules/Testing.cmake @@ -21,11 +21,11 @@ if(ENABLE_TESTING) # also only verified with Fedora Linux > 30 and Ubuntu 18.04 or 22.04+(Ubuntu 20.04 fails) if((CMAKE_SYSTEM_NAME STREQUAL "Linux") AND ((CMAKE_CXX_COMPILER_ID STREQUAL "GNU") OR (CMAKE_CXX_COMPILER_ID STREQUAL "Clang"))) - if(((CMAKE_LINUX_DISTRO 
STREQUAL "Ubuntu") AND - ((CMAKE_DISTRO_VERSION VERSION_LESS_EQUAL 18.04) OR (CMAKE_DISTRO_VERSION VERSION_GREATER_EQUAL 22.04))) + if(((CMAKE_LINUX_DISTRO STREQUAL "Ubuntu") AND (CMAKE_DISTRO_VERSION VERSION_GREATER_EQUAL 22.04)) OR ((CMAKE_LINUX_DISTRO STREQUAL "Fedora") AND (CMAKE_DISTRO_VERSION VERSION_GREATER 30))) include(CheckCXXCompilerFlag) set(CMAKE_CUSTOM_LINKER_DEFAULT default) + check_cxx_compiler_flag(--ld-path=${CMAKE_LINKER} HAVE_LD_PATH_FLAG) check_cxx_compiler_flag(-fuse-ld=mold HAVE_MOLD_LINKER_FLAG) check_cxx_compiler_flag(-fuse-ld=lld HAVE_LLD_LINKER_FLAG) check_cxx_compiler_flag(-fuse-ld=gold HAVE_GOLD_LINKER_FLAG) @@ -50,6 +50,17 @@ if(ENABLE_TESTING) if(NOT "${CMAKE_CUSTOM_LINKER}" STREQUAL "default") target_link_options(lammps PUBLIC -fuse-ld=${CMAKE_CUSTOM_LINKER}) endif() + if(HAVE_LD_PATH_FLAG) + if("${CMAKE_CUSTOM_LINKER}" STREQUAL "mold") + target_link_options(lammps PUBLIC --ld-path=${HAVE_MOLD_LINKER_BIN}) + elseif("${CMAKE_CUSTOM_LINKER}" STREQUAL "lld") + target_link_options(lammps PUBLIC --ld-path=${HAVE_LLD_LINKER_BIN}) + elseif("${CMAKE_CUSTOM_LINKER}" STREQUAL "gold") + target_link_options(lammps PUBLIC --ld-path=${HAVE_GOLD_LINKER_BIN}) + elseif("${CMAKE_CUSTOM_LINKER}" STREQUAL "bfd") + target_link_options(lammps PUBLIC --ld-path=${HAVE_BFD_LINKER_BIN}) + endif() + endif() endif() endif() diff --git a/cmake/Modules/Tools.cmake b/cmake/Modules/Tools.cmake index 94e077d51e..4dfa09c6f5 100644 --- a/cmake/Modules/Tools.cmake +++ b/cmake/Modules/Tools.cmake @@ -6,6 +6,10 @@ if(BUILD_TOOLS) add_executable(stl_bin2txt ${LAMMPS_TOOLS_DIR}/stl_bin2txt.cpp) install(TARGETS stl_bin2txt DESTINATION ${CMAKE_INSTALL_BINDIR}) + add_executable(reformat-json ${LAMMPS_TOOLS_DIR}/json/reformat-json.cpp) + target_include_directories(reformat-json PRIVATE ${LAMMPS_SOURCE_DIR}) + install(TARGETS reformat-json DESTINATION ${CMAKE_INSTALL_BINDIR}) + include(CheckGeneratorSupport) if(CMAKE_GENERATOR_SUPPORT_FORTRAN) include(CheckLanguage) @@ -38,7 
+42,235 @@ if(BUILD_TOOLS) endif() if(BUILD_LAMMPS_GUI) - get_filename_component(LAMMPS_GUI_DIR ${LAMMPS_SOURCE_DIR}/../tools/lammps-gui ABSOLUTE) - get_filename_component(LAMMPS_GUI_BIN ${CMAKE_BINARY_DIR}/lammps-gui-build ABSOLUTE) - add_subdirectory(${LAMMPS_GUI_DIR} ${LAMMPS_GUI_BIN}) + include(ExternalProject) + # When building LAMMPS-GUI with LAMMPS we don't support plugin mode and don't include docs. + ExternalProject_Add(lammps-gui_build + GIT_REPOSITORY https://github.com/akohlmey/lammps-gui.git + GIT_TAG main + GIT_SHALLOW TRUE + GIT_PROGRESS TRUE + CMAKE_ARGS -D BUILD_DOC=OFF + -D LAMMPS_GUI_USE_PLUGIN=OFF + -D LAMMPS_SOURCE_DIR=${LAMMPS_SOURCE_DIR} + -D LAMMPS_LIBRARY=$ + -D CMAKE_C_COMPILER=${CMAKE_C_COMPILER} + -D CMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER} + -D CMAKE_INSTALL_PREFIX= + -D CMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE} + -D CMAKE_MAKE_PROGRAM=${CMAKE_MAKE_PROGRAM} + -D CMAKE_TOOLCHAIN_FILE=${CMAKE_TOOLCHAIN_FILE} + DEPENDS lammps + BUILD_BYPRODUCTS /bin/lammps-gui + ) + add_custom_target(lammps-gui ALL + ${CMAKE_COMMAND} -E copy_if_different lammps-gui_build-prefix/bin/lammps-gui* ${CMAKE_BINARY_DIR} + DEPENDS lammps-gui_build + ) + + # packaging support for LAMMPS-GUI when compiled with LAMMPS + option(BUILD_WHAM "Download and compile WHAM executable from Grossfield Lab" YES) + if(BUILD_WHAM) + set(WHAM_URL "http://membrane.urmc.rochester.edu/sites/default/files/wham/wham-release-2.1.0.tgz" CACHE STRING "URL for WHAM tarball") + set(WHAM_MD5 "4ed6e24254925ec124f44bb381c3b87f" CACHE STRING "MD5 checksum of WHAM tarball") + mark_as_advanced(WHAM_URL) + mark_as_advanced(WHAM_MD5) + + get_filename_component(archive ${WHAM_URL} NAME) + file(MAKE_DIRECTORY ${CMAKE_BINARY_DIR}/_deps/src) + if(EXISTS ${CMAKE_BINARY_DIR}/_deps/${archive}) + file(MD5 ${CMAKE_BINARY_DIR}/_deps/${archive} DL_MD5) + endif() + if(NOT "${DL_MD5}" STREQUAL "${WHAM_MD5}") + message(STATUS "Downloading ${WHAM_URL}") + file(DOWNLOAD ${WHAM_URL} ${CMAKE_BINARY_DIR}/_deps/${archive} 
STATUS DL_STATUS SHOW_PROGRESS) + file(MD5 ${CMAKE_BINARY_DIR}/_deps/${archive} DL_MD5) + if((NOT DL_STATUS EQUAL 0) OR (NOT "${DL_MD5}" STREQUAL "${WHAM_MD5}")) + message(ERROR "Download of WHAM sources from ${WHAM_URL} failed") + endif() + else() + message(STATUS "Using already downloaded archive ${CMAKE_BINARY_DIR}/_deps/${archive}") + endif() + message(STATUS "Unpacking and configuring ${archive}") + + execute_process(COMMAND ${CMAKE_COMMAND} -E tar xf ${CMAKE_BINARY_DIR}/_deps/${archive} + WORKING_DIRECTORY ${CMAKE_BINARY_DIR}/_deps/src) + find_package(Patch) + if(PATCH_FOUND) + message(STATUS "Apply patch to customize WHAM using ${Patch_EXECUTABLE}") + execute_process( + COMMAND ${Patch_EXECUTABLE} -p1 -i ${CMAKE_SOURCE_DIR}/cmake/packaging/update-wham-2.1.0.patch + WORKING_DIRECTORY ${CMAKE_BINARY_DIR}/_deps/src/wham/ + ) + endif() + file(REMOVE_RECURSE ${CMAKE_BINARY_DIR}/_deps/wham-src) + file(RENAME "${CMAKE_BINARY_DIR}/_deps/src/wham" ${CMAKE_BINARY_DIR}/_deps/wham-src) + file(COPY packaging/CMakeLists.wham DESTINATION ${CMAKE_BINARY_DIR}/_deps/wham-src/) + file(RENAME "${CMAKE_BINARY_DIR}/_deps/wham-src/CMakeLists.wham" + "${CMAKE_BINARY_DIR}/_deps/wham-src/CMakeLists.txt") + add_subdirectory("${CMAKE_BINARY_DIR}/_deps/wham-src" "${CMAKE_BINARY_DIR}/_deps/wham-build") + set(WHAM_EXE wham wham-2d) + endif() + + # build LAMMPS-GUI and LAMMPS as flatpak, if tools are installed + find_program(FLATPAK_COMMAND flatpak DOC "Path to flatpak command") + find_program(FLATPAK_BUILDER flatpak-builder DOC "Path to flatpak-builder command") + if(FLATPAK_COMMAND AND FLATPAK_BUILDER) + file(STRINGS ${LAMMPS_DIR}/src/version.h line REGEX LAMMPS_VERSION) + string(REGEX REPLACE "#define LAMMPS_VERSION \"([0-9]+) ([A-Za-z][A-Za-z][A-Za-z])[A-Za-z]* ([0-9]+)\"" + "\\1\\2\\3" LAMMPS_RELEASE "${line}") + set(FLATPAK_BUNDLE "LAMMPS-Linux-x86_64-GUI-${LAMMPS_RELEASE}.flatpak") + add_custom_target(flatpak + COMMAND ${FLATPAK_COMMAND} --user remote-add --if-not-exists flathub 
https://dl.flathub.org/repo/flathub.flatpakrepo + COMMAND ${FLATPAK_BUILDER} --force-clean --verbose --repo=${CMAKE_CURRENT_BINARY_DIR}/flatpak-repo + --install-deps-from=flathub --state-dir=${CMAKE_CURRENT_BINARY_DIR} + --user --ccache --default-branch=${LAMMPS_RELEASE} + flatpak-build ${LAMMPS_DIR}/cmake/packaging/org.lammps.lammps-gui.yml + COMMAND ${FLATPAK_COMMAND} build-bundle --runtime-repo=https://flathub.org/repo/flathub.flatpakrepo --verbose + ${CMAKE_CURRENT_BINARY_DIR}/flatpak-repo + ${FLATPAK_BUNDLE} org.lammps.lammps-gui ${LAMMPS_RELEASE} + COMMENT "Create Flatpak bundle file of LAMMPS and LAMMPS-GUI" + BYPRODUCT ${FLATPAK_BUNDLE} + WORKING_DIRECTORY ${CMAKE_BINARY_DIR} + ) + else() + add_custom_target(flatpak + COMMAND ${CMAKE_COMMAND} -E echo "The flatpak and flatpak-builder commands required to build a LAMMPS-GUI flatpak bundle were not found. Skipping.") + endif() + + if(APPLE) + file(STRINGS ${LAMMPS_DIR}/src/version.h line REGEX LAMMPS_VERSION) + string(REGEX REPLACE "#define LAMMPS_VERSION \"([0-9]+) ([A-Za-z][A-Za-z][A-Za-z])[A-Za-z]* ([0-9]+)\"" + "\\1\\2\\3" LAMMPS_RELEASE "${line}") + + # additional targets to populate the bundle tree and create the .dmg image file + set(APP_CONTENTS ${CMAKE_BINARY_DIR}/lammps-gui_build-prefix/bin/lammps-gui.app/Contents) + if(BUILD_TOOLS) + file(REMOVE_RECURSE ${CMAKE_BINARY_DIR}/lammps-gui_build-prefix/bin/lammps-gui.app) + add_custom_target(complete-bundle + ${CMAKE_COMMAND} -E make_directory ${APP_CONTENTS}/bin + COMMAND ${CMAKE_COMMAND} -E make_directory ${APP_CONTENTS}/Frameworks + COMMAND ${CMAKE_COMMAND} -E copy_if_different $ ${APP_CONTENTS}/Frameworks/ + COMMAND ${CMAKE_COMMAND} -E copy_if_different $ ${APP_CONTENTS}/bin/ + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${CMAKE_BINARY_DIR}/lmp ${APP_CONTENTS}/bin/ + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${CMAKE_BINARY_DIR}/msi2lmp ${APP_CONTENTS}/bin/ + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${CMAKE_BINARY_DIR}/binary2txt 
${APP_CONTENTS}/bin/ + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${CMAKE_BINARY_DIR}/stl_bin2txt ${APP_CONTENTS}/bin/ + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${CMAKE_BINARY_DIR}/phana ${APP_CONTENTS}/bin/ + COMMAND ${CMAKE_COMMAND} -E create_symlink ../MacOS/lammps-gui ${APP_CONTENTS}/bin/lammps-gui + COMMAND ${CMAKE_COMMAND} -E make_directory ${APP_CONTENTS}/Resources + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${LAMMPS_DIR}/cmake/packaging/README.macos ${APP_CONTENTS}/Resources/README.txt + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${LAMMPS_DIR}/cmake/packaging/lammps.icns ${APP_CONTENTS}/Resources + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${LAMMPS_DIR}/cmake/packaging/lammps-gui.icns ${APP_CONTENTS}/Resources + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${LAMMPS_DIR}/cmake/packaging/LAMMPS_DMG_Background.png ${APP_CONTENTS}/Resources + COMMAND ${CMAKE_COMMAND} -E make_directory ${APP_CONTENTS}/share/lammps + COMMAND ${CMAKE_COMMAND} -E make_directory ${APP_CONTENTS}/share/lammps/man/man1 + COMMAND ${CMAKE_COMMAND} -E copy_directory ${LAMMPS_DIR}/potentials ${APP_CONTENTS}/share/lammps/potentials + COMMAND ${CMAKE_COMMAND} -E copy_directory ${LAMMPS_DIR}/bench ${APP_CONTENTS}/share/lammps/bench + COMMAND ${CMAKE_COMMAND} -E copy_directory ${LAMMPS_DIR}/tools/msi2lmp/frc_files ${APP_CONTENTS}/share/lammps/frc_files + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${LAMMPS_DIR}/doc/lammps.1 ${APP_CONTENTS}/share/lammps/man/man1/ + COMMAND ${CMAKE_COMMAND} -E create_symlink lammps.1 ${APP_CONTENTS}/share/lammps/man/man1/lmp.1 + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${LAMMPS_DIR}/doc/msi2lmp.1 ${APP_CONTENTS}/share/lammps/man/man1 + DEPENDS lammps lmp binary2txt stl_bin2txt msi2lmp phana lammps-gui_build + COMMENT "Copying additional files into macOS app bundle tree" + WORKING_DIRECTORY ${CMAKE_BINARY_DIR} + ) + else() + message(FATAL_ERROR "Must use -D BUILD_TOOLS=yes for building app bundle") + endif() + if(BUILD_WHAM) + 
add_custom_target(copy-wham + ${CMAKE_COMMAND} -E copy_if_different ${CMAKE_BINARY_DIR}/wham ${APP_CONTENTS}/bin/ + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${CMAKE_BINARY_DIR}/wham-2d ${APP_CONTENTS}/bin/ + DEPENDS complete-bundle wham wham-2d + COMMENT "Copying WHAM executables into macOS app bundle tree" + ) + set(WHAM_TARGET copy-wham) + endif() + if(FFMPEG_EXECUTABLE) + add_custom_target(copy-ffmpeg + COMMAND ${CMAKE_COMMAND} -E copy_if_different ${FFMPEG_EXECUTABLE} ${APP_CONTENTS}/bin/ + COMMENT "Copying FFMpeg into macOS app bundle tree" + DEPENDS complete-bundle + ) + set(FFMPEG_TARGET copy-ffmpeg) + endif() + add_custom_target(dmg + COMMAND ${LAMMPS_DIR}/cmake/packaging/build_macos_dmg.sh ${LAMMPS_RELEASE} ${CMAKE_BINARY_DIR}/lammps-gui_build-prefix/bin/lammps-gui.app + DEPENDS complete-bundle ${WHAM_TARGET} ${FFMPEG_TARGET} + COMMENT "Create Drag-n-Drop installer disk image from app bundle" + BYPRODUCTS LAMMPS-macOS-multiarch-GUI-${LAMMPS_RELEASE}.dmg + WORKING_DIRECTORY ${CMAKE_BINARY_DIR} + ) + # settings for building on Windows with Visual Studio + elseif(MSVC) + file(STRINGS ${LAMMPS_DIR}/src/version.h line REGEX LAMMPS_VERSION) + string(REGEX REPLACE "#define LAMMPS_VERSION \"([0-9]+) ([A-Za-z][A-Za-z][A-Za-z])[A-Za-z]* ([0-9]+)\"" + "\\1\\2\\3" LAMMPS_RELEASE "${line}") + # install(FILES $ TYPE BIN) + if(BUILD_SHARED_LIBS) + install(FILES $ TYPE BIN) + endif() + install(FILES $ TYPE BIN) + # find path to VC++ init batch file + get_filename_component(VC_COMPILER_DIR "${CMAKE_CXX_COMPILER}" DIRECTORY) + get_filename_component(VC_BASE_DIR "${VC_COMPILER_DIR}/../../../../../.."
ABSOLUTE) + set(VC_INIT "${VC_BASE_DIR}/Auxiliary/Build/vcvarsall.bat") + get_filename_component(QT5_BIN_DIR "${Qt5Core_DIR}/../../../bin" ABSOLUTE) + get_filename_component(INSTNAME ${CMAKE_INSTALL_PREFIX} NAME) + install(CODE "execute_process(COMMAND \"${CMAKE_COMMAND}\" -D INSTNAME=${INSTNAME} -D VC_INIT=\"${VC_INIT}\" -D QT5_BIN_DIR=\"${QT5_BIN_DIR}\" -P \"${CMAKE_SOURCE_DIR}/packaging/build_windows_vs.cmake\" WORKING_DIRECTORY \"${CMAKE_INSTALL_PREFIX}/..\" COMMAND_ECHO STDOUT)") + elseif((CMAKE_SYSTEM_NAME STREQUAL "Windows") AND CMAKE_CROSSCOMPILING) + file(STRINGS ${LAMMPS_DIR}/src/version.h line REGEX LAMMPS_VERSION) + string(REGEX REPLACE "#define LAMMPS_VERSION \"([0-9]+) ([A-Za-z][A-Za-z][A-Za-z])[A-Za-z]* ([0-9]+)\"" + "\\1\\2\\3" LAMMPS_RELEASE "${line}") + if(BUILD_SHARED_LIBS) + install(FILES $ TYPE BIN) + endif() + install(FILES $ TYPE BIN) + add_custom_target(zip + COMMAND sh -vx ${LAMMPS_DIR}/cmake/packaging/build_windows_cross_zip.sh ${CMAKE_INSTALL_PREFIX} ${LAMMPS_RELEASE} + DEPENDS lmp lammps-gui_build ${WHAM_EXE} + COMMENT "Create zip file with Windows binaries" + BYPRODUCTS LAMMPS-Win10-amd64-${LAMMPS_RELEASE}.zip + WORKING_DIRECTORY ${CMAKE_BINARY_DIR}) + elseif((CMAKE_SYSTEM_NAME STREQUAL "Linux") AND NOT LAMMPS_GUI_USE_PLUGIN) + file(STRINGS ${LAMMPS_DIR}/src/version.h line REGEX LAMMPS_VERSION) + string(REGEX REPLACE "#define LAMMPS_VERSION \"([0-9]+) ([A-Za-z][A-Za-z][A-Za-z])[A-Za-z]* ([0-9]+)\"" + "\\1\\2\\3" LAMMPS_RELEASE "${line}") + set(LAMMPS_GUI_PACKAGING ${CMAKE_BINARY_DIR}/lammps-gui_build-prefix/src/lammps-gui_build/packaging/) + set(LAMMPS_GUI_RESOURCES ${CMAKE_BINARY_DIR}/lammps-gui_build-prefix/src/lammps-gui_build/resources/) + install(PROGRAMS ${CMAKE_BINARY_DIR}/lammps-gui_build-prefix/bin/lammps-gui DESTINATION ${CMAKE_INSTALL_BINDIR}) + install(FILES ${LAMMPS_GUI_PACKAGING}/lammps-gui.desktop DESTINATION ${CMAKE_INSTALL_DATADIR}/applications/) + install(FILES ${LAMMPS_GUI_PACKAGING}/lammps-gui.appdata.xml DESTINATION
${CMAKE_INSTALL_DATADIR}/appdata/) + install(FILES ${LAMMPS_GUI_PACKAGING}/lammps-input.xml DESTINATION ${CMAKE_INSTALL_DATADIR}/mime/packages/) + install(FILES ${LAMMPS_GUI_PACKAGING}/lammps-input.xml DESTINATION ${CMAKE_INSTALL_DATADIR}/mime/text/x-application-lammps.xml) + install(DIRECTORY ${LAMMPS_GUI_RESOURCES}/icons/hicolor DESTINATION ${CMAKE_INSTALL_DATADIR}/icons/) + install(CODE [[ + file(GET_RUNTIME_DEPENDENCIES + LIBRARIES $ + EXECUTABLES $ ${CMAKE_BINARY_DIR}/lammps-gui_build-prefix/bin/lammps-gui + RESOLVED_DEPENDENCIES_VAR _r_deps + UNRESOLVED_DEPENDENCIES_VAR _u_deps + ) + foreach(_file ${_r_deps}) + file(INSTALL + DESTINATION "${CMAKE_INSTALL_PREFIX}/lib" + TYPE SHARED_LIBRARY + FOLLOW_SYMLINK_CHAIN + FILES "${_file}" + ) + endforeach() + list(LENGTH _u_deps _u_length) + if("${_u_length}" GREATER 0) + message(WARNING "Unresolved dependencies detected: ${_u_deps}") + endif() ]] + ) + + add_custom_target(tgz + COMMAND ${LAMMPS_DIR}/cmake/packaging/build_linux_tgz.sh ${LAMMPS_RELEASE} + DEPENDS lmp lammps-gui_build ${WHAM_EXE} + COMMENT "Create compressed tar file of LAMMPS-GUI with dependent libraries and wrapper" + BYPRODUCTS LAMMPS-Linux-x86_64-GUI-${LAMMPS_RELEASE}.tar.gz + WORKING_DIRECTORY ${CMAKE_BINARY_DIR} + ) + endif() endif() diff --git a/tools/lammps-gui/CMakeLists.wham b/cmake/packaging/CMakeLists.wham similarity index 100% rename from tools/lammps-gui/CMakeLists.wham rename to cmake/packaging/CMakeLists.wham diff --git a/cmake/packaging/MacOSXBundleInfo.plist.in b/cmake/packaging/MacOSXBundleInfo.plist.in index bc08591e97..8c6b0108b7 100644 --- a/cmake/packaging/MacOSXBundleInfo.plist.in +++ b/cmake/packaging/MacOSXBundleInfo.plist.in @@ -2,33 +2,33 @@ - CFBundleDevelopmentRegion - en-US - CFBundleExecutable - ${MACOSX_BUNDLE_EXECUTABLE_NAME} + CFBundleDevelopmentRegion + en-US + CFBundleExecutable + ${MACOSX_BUNDLE_EXECUTABLE_NAME} CFBundleDisplayName - The LAMMPS Molecular Dynamics Software - CFBundleIconFile - lammps -
CFBundleIdentifier - org.lammps.gui - CFBundleInfoDictionaryVersion - 6.0 - CFBundleLongVersionString - ${MACOSX_BUNDLE_LONG_VERSION_STRING} - CFBundleName - LAMMPS_GUI - CFBundlePackageType - APPL - CFBundleShortVersionString - ${MACOSX_BUNDLE_SHORT_VERSION_STRING} - CFBundleSignature - ???? - CFBundleVersion - ${MACOSX_BUNDLE_BUNDLE_VERSION} - CSResourcesFileMapped - - NSHumanReadableCopyright - ${MACOSX_BUNDLE_COPYRIGHT} + The LAMMPS Molecular Dynamics Software and GUI + CFBundleIconFile + lammps-gui + CFBundleIdentifier + org.lammps.gui + CFBundleInfoDictionaryVersion + 6.0 + CFBundleLongVersionString + ${MACOSX_BUNDLE_LONG_VERSION_STRING} + CFBundleName + LAMMPS_GUI + CFBundlePackageType + APPL + CFBundleShortVersionString + ${MACOSX_BUNDLE_SHORT_VERSION_STRING} + CFBundleSignature + ???? + CFBundleVersion + ${MACOSX_BUNDLE_BUNDLE_VERSION} + CSResourcesFileMapped + + NSHumanReadableCopyright + ${MACOSX_BUNDLE_COPYRIGHT} diff --git a/cmake/packaging/build_linux_tgz.sh b/cmake/packaging/build_linux_tgz.sh index 276da019ae..97320fe51f 100755 --- a/cmake/packaging/build_linux_tgz.sh +++ b/cmake/packaging/build_linux_tgz.sh @@ -5,10 +5,11 @@ DESTDIR=${PWD}/../LAMMPS_GUI VERSION="$1" echo "Delete old files, if they exist" -rm -rf ${DESTDIR} ../LAMMPS_GUI-Linux-amd64*.tar.gz +rm -rf ${DESTDIR} LAMMPS-Linux-x86_64-GUI-*.tar.gz echo "Create staging area for deployment and populate" DESTDIR=${DESTDIR} cmake --install . 
--prefix "/" +cp lammps-gui_build-prefix/bin/lammps-gui ${DESTDIR}/bin/ echo "Remove debug info" for s in ${DESTDIR}/bin/* ${DESTDIR}/lib/liblammps* @@ -25,18 +26,18 @@ rm -f ${DESTDIR}/lib/libgcc_s* rm -f ${DESTDIR}/lib/libstdc++* # get qt dir -QTDIR=$(ldd ${DESTDIR}/bin/lammps-gui | grep libQt5Core | sed -e 's/^.*=> *//' -e 's/libQt5Core.so.*$/qt5/') +QTDIR=$(ldd ${DESTDIR}/bin/lammps-gui | grep libQt.Core | sed -e 's/^.*=> *//' -e 's/libQt\(.\)Core.so.*$/qt\1/') cat > ${DESTDIR}/bin/qt.conf < *//' -e 's/\(libQt5.*.so.*\) .*$/\1/') +QTDEPS=$(LD_LIBRARY_PATH=${DESTDIR}/lib ldd ${QTDIR}/plugins/platforms/libqxcb.so | grep -v ${DESTDIR} | grep libQt[56] | sed -e 's/^.*=> *//' -e 's/\(libQt[56].*.so.*\) .*$/\1/') for dep in ${QTDEPS} do \ cp ${dep} ${DESTDIR}/lib @@ -45,13 +46,13 @@ done echo "Add additional plugins for Qt" for dir in styles imageformats do \ - cp -r ${QTDIR}/plugins/${dir} ${DESTDIR}/qt5plugins/ + cp -r ${QTDIR}/plugins/${dir} ${DESTDIR}/qtplugins/ done # get imageplugin dependencies -for s in ${DESTDIR}/qt5plugins/imageformats/*.so +for s in ${DESTDIR}/qtplugins/imageformats/*.so do \ - QTDEPS=$(LD_LIBRARY_PATH=${DESTDIR}/lib ldd $s | grep -v ${DESTDIR} | grep -E '(libQt5|jpeg)' | sed -e 's/^.*=> *//' -e 's/\(lib.*.so.*\) .*$/\1/') + QTDEPS=$(LD_LIBRARY_PATH=${DESTDIR}/lib ldd $s | grep -v ${DESTDIR} | grep -E '(libQt.|jpeg)' | sed -e 's/^.*=> *//' -e 's/\(lib.*.so.*\) .*$/\1/') for dep in ${QTDEPS} do \ cp ${dep} ${DESTDIR}/lib @@ -72,8 +73,9 @@ do \ done pushd .. -tar -czvvf LAMMPS_GUI-Linux-amd64-${VERSION}.tar.gz LAMMPS_GUI +tar -czvvf LAMMPS-Linux-x86_64-GUI-${VERSION}.tar.gz LAMMPS_GUI popd +mv -v ../LAMMPS-Linux-x86_64-GUI-${VERSION}.tar.gz . 
echo "Cleanup dir" rm -r ${DESTDIR} diff --git a/cmake/packaging/build_macos_dmg.sh b/cmake/packaging/build_macos_dmg.sh index 6e6877d2dd..a85cc1c1bb 100755 --- a/cmake/packaging/build_macos_dmg.sh +++ b/cmake/packaging/build_macos_dmg.sh @@ -2,12 +2,15 @@ APP_NAME=lammps-gui VERSION="$1" +LAMMPS_GUI_APP="$2" +rm -rv ${APP_NAME}.app +mv -v ${LAMMPS_GUI_APP} . echo "Delete old files, if they exist" -rm -f ${APP_NAME}.dmg ${APP_NAME}-rw.dmg LAMMPS_GUI-macOS-multiarch*.dmg +rm -f ${APP_NAME}.dmg ${APP_NAME}-rw.dmg LAMMPS-macOS-multiarch-GUI-*.dmg echo "Create initial dmg file with macdeployqt" -macdeployqt lammps-gui.app -dmg +macdeployqt ${APP_NAME}.app -dmg echo "Create writable dmg file" hdiutil convert ${APP_NAME}.dmg -format UDRW -o ${APP_NAME}-rw.dmg @@ -26,10 +29,17 @@ mv ${APP_NAME}.app/Contents/Resources/LAMMPS_DMG_Background.png .background/back mv ${APP_NAME}.app LAMMPS_GUI.app cd LAMMPS_GUI.app/Contents +echo "Update rpath for LAMMPS and LAMMPS-GUI to link to liblammps.0.dylib" +LIB_DIR=/Applications/LAMMPS_GUI.app/Contents/Frameworks +LIB_NAME=liblammps.0.dylib +install_name_tool -change @rpath/${LIB_NAME} ${LIB_DIR}/${LIB_NAME} bin/lmp +install_name_tool -change @rpath/${LIB_NAME} ${LIB_DIR}/${LIB_NAME} MacOS/lammps-gui + echo "Attach icons to LAMMPS console and GUI executables" echo "read 'icns' (-16455) \"Resources/lammps.icns\";" > icon.rsrc Rez -a icon.rsrc -o bin/lmp SetFile -a C bin/lmp +echo "read 'icns' (-16455) \"Resources/lammps-gui.icns\";" > icon.rsrc Rez -a icon.rsrc -o MacOS/lammps-gui SetFile -a C MacOS/lammps-gui rm icon.rsrc @@ -97,12 +107,12 @@ sync echo "Unmount modified disk image and convert to compressed read-only image" hdiutil detach "${DEVICE}" -hdiutil convert "${APP_NAME}-rw.dmg" -format UDZO -o "LAMMPS_GUI-macOS-multiarch-${VERSION}.dmg" +hdiutil convert "${APP_NAME}-rw.dmg" -format UDZO -o "LAMMPS-macOS-multiarch-GUI-${VERSION}.dmg" echo "Attach icon to .dmg file" -echo "read 'icns' (-16455) 
\"lammps-gui.app/Contents/Resources/lammps.icns\";" > icon.rsrc -Rez -a icon.rsrc -o LAMMPS_GUI-macOS-multiarch-${VERSION}.dmg -SetFile -a C LAMMPS_GUI-macOS-multiarch-${VERSION}.dmg +echo "read 'icns' (-16455) \"${APP_NAME}.app/Contents/Resources/lammps.icns\";" > icon.rsrc +Rez -a icon.rsrc -o LAMMPS-macOS-multiarch-GUI-${VERSION}.dmg +SetFile -a C LAMMPS-macOS-multiarch-GUI-${VERSION}.dmg rm icon.rsrc echo "Delete temporary disk images" diff --git a/tools/lammps-gui/lammps-gui.desktop b/cmake/packaging/lammps-gui.desktop similarity index 100% rename from tools/lammps-gui/lammps-gui.desktop rename to cmake/packaging/lammps-gui.desktop diff --git a/cmake/packaging/lammps-gui.icns b/cmake/packaging/lammps-gui.icns new file mode 100644 index 0000000000..5d664b7995 Binary files /dev/null and b/cmake/packaging/lammps-gui.icns differ diff --git a/tools/lammps-gui/lammps-input.xml b/cmake/packaging/lammps-input.xml similarity index 100% rename from tools/lammps-gui/lammps-input.xml rename to cmake/packaging/lammps-input.xml diff --git a/cmake/packaging/linux_wrapper.sh b/cmake/packaging/linux_wrapper.sh index b777c09eb1..44c9f81427 100755 --- a/cmake/packaging/linux_wrapper.sh +++ b/cmake/packaging/linux_wrapper.sh @@ -7,6 +7,11 @@ export LC_ALL=C BASEDIR="$(dirname "$0")" EXENAME="$(basename "$0")" +# save old settings (for restoring them later) +OLDPATH="${PATH}" +OLDLDLIB="${LD_LIBRARY_PATH}" + +# prepend path to find our custom executables PATH="${BASEDIR}/bin:${PATH}" # append to LD_LIBRARY_PATH to prefer local (newer) libs @@ -15,6 +20,8 @@ LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:${BASEDIR}/lib" # set some environment variables for LAMMPS etc. 
LAMMPS_POTENTIALS="${BASEDIR}/share/lammps/potentials" MSI2LMP_LIBRARY="${BASEDIR}/share/lammps/frc_files" -export LD_LIBRARY_PATH LAMMPS_POTENTIALS MSI2LMP_LIBRARY PATH + +# export everything +export LD_LIBRARY_PATH LAMMPS_POTENTIALS MSI2LMP_LIBRARY PATH OLDPATH OLDLDLIB exec "${BASEDIR}/bin/${EXENAME}" "$@" diff --git a/tools/lammps-gui/org.lammps.lammps-gui.yml b/cmake/packaging/org.lammps.lammps-gui.yml similarity index 98% rename from tools/lammps-gui/org.lammps.lammps-gui.yml rename to cmake/packaging/org.lammps.lammps-gui.yml index a16ef5fdee..01aedc416b 100644 --- a/tools/lammps-gui/org.lammps.lammps-gui.yml +++ b/cmake/packaging/org.lammps.lammps-gui.yml @@ -1,6 +1,6 @@ id: org.lammps.lammps-gui runtime: org.kde.Platform -runtime-version: "5.15-23.08" +runtime-version: "5.15-24.08" sdk: org.kde.Sdk command: lammps-gui finish-args: @@ -12,7 +12,7 @@ finish-args: build-options: build-args: - --share=network -rename-icon: lammps +rename-icon: lammps-gui rename-desktop-file: lammps-gui.desktop rename-appdata-file: lammps-gui.appdata.xml rename-mime-file: lammps-input.xml @@ -22,6 +22,15 @@ modules: builddir: true subdir: cmake config-opts: + - -D BUILD_LAMMPS_GUI=yes + - -D BUILD_WHAM=yes + - -D BUILD_SHARED_LIBS=yes + - -D BUILD_TOOLS=yes + - -D CMAKE_BUILD_TYPE=Release + - -D CMAKE_CXX_COMPILER=g++ + - -D CMAKE_C_COMPILER=gcc + - -D CMAKE_Fortran_COMPILER=gfortran + - -D DOWNLOAD_POTENTIALS=no - -D PKG_AMOEBA=yes - -D PKG_ASPHERE=yes - -D PKG_AWPMD=yes @@ -99,15 +108,6 @@ modules: - -D PKG_UEF=yes - -D PKG_VORONOI=yes - -D PKG_YAFF=yes - - -D BUILD_LAMMPS_GUI=yes - - -D BUILD_SHARED_LIBS=yes - - -D CMAKE_CXX_COMPILER=g++ - - -D CMAKE_C_COMPILER=gcc - - -D CMAKE_Fortran_COMPILER=gfortran - - -D CMAKE_BUILD_TYPE=Release - - -D DOWNLOAD_POTENTIALS=no - - -D BUILD_TOOLS=yes - - -D BUILD_WHAM=yes sources: - type: git url: https://github.com/lammps/lammps.git diff --git a/tools/lammps-gui/update-wham-2.1.0.patch b/cmake/packaging/update-wham-2.1.0.patch similarity 
index 100% rename from tools/lammps-gui/update-wham-2.1.0.patch rename to cmake/packaging/update-wham-2.1.0.patch diff --git a/cmake/packaging/xdg-open b/cmake/packaging/xdg-open index d282bb3d11..298919a44a 100755 --- a/cmake/packaging/xdg-open +++ b/cmake/packaging/xdg-open @@ -33,6 +33,14 @@ # #--------------------------------------------- +# restore previously saved environment variables, if available +if [ -n "${OLDPATH}" ] +then + PATH="${OLDPATH}" + LD_LIBRARY_PATH="${OLDLDLIB}" + export PATH LD_LIBRARY_PATH +fi + NEW_LIBRARY_PATH="/usr/local/lib64" for s in $(echo $LD_LIBRARY_PATH | sed -e 's/:/ /g') do \ diff --git a/cmake/patches/voro-make.patch b/cmake/patches/voro-make.patch new file mode 100644 index 0000000000..f2811e3adb --- /dev/null +++ b/cmake/patches/voro-make.patch @@ -0,0 +1,60 @@ +--- Makefile.orig 2025-06-04 12:16:01.056286325 -0400 ++++ Makefile 2025-06-04 12:18:47.454879006 -0400 +@@ -11,8 +11,7 @@ + + # Build all of the executable files + all: +- $(MAKE) -C src +- $(MAKE) -C examples ++ $(MAKE) -C src depend libvoro++.a + + # Build the help files (with Doxygen) + help: +@@ -24,16 +23,12 @@ + $(MAKE) -C examples clean + + # Install the executable, man page, and shared library +-install: +- $(MAKE) -C src +- $(INSTALL) -d $(IFLAGS_EXEC) $(PREFIX)/bin ++install: all + $(INSTALL) -d $(IFLAGS_EXEC) $(PREFIX)/lib + $(INSTALL) -d $(IFLAGS_EXEC) $(PREFIX)/man + $(INSTALL) -d $(IFLAGS_EXEC) $(PREFIX)/man/man1 + $(INSTALL) -d $(IFLAGS_EXEC) $(PREFIX)/include + $(INSTALL) -d $(IFLAGS_EXEC) $(PREFIX)/include/voro++ +- $(INSTALL) $(IFLAGS_EXEC) src/voro++ $(PREFIX)/bin +- $(INSTALL) $(IFLAGS) man/voro++.1 $(PREFIX)/man/man1 + $(INSTALL) $(IFLAGS) src/libvoro++.a $(PREFIX)/lib + $(INSTALL) $(IFLAGS) src/voro++.hh $(PREFIX)/include/voro++ + $(INSTALL) $(IFLAGS) src/c_loops.hh $(PREFIX)/include/voro++ +--- src/Makefile.orig 2013-10-17 13:54:13.000000000 -0400 ++++ src/Makefile 2025-06-04 12:16:47.293104880 -0400 +@@ -10,10 +10,10 @@ + # List of the common 
source files + objs=cell.o common.o container.o unitcell.o v_compute.o c_loops.o \ + v_base.o wall.o pre_container.o container_prd.o +-src=$(patsubst %.o,%.cc,$(objs)) ++src=$(objs:.o=.cc) + + # Makefile rules +-all: libvoro++.a voro++ ++all: depend libvoro++.a voro++ + + depend: + $(CXX) -MM $(src) >Makefile.dep +@@ -22,12 +22,12 @@ + + libvoro++.a: $(objs) + rm -f libvoro++.a +- ar rs libvoro++.a $^ ++ $(AR) rs libvoro++.a $(objs) + + voro++: libvoro++.a cmd_line.cc + $(CXX) $(CFLAGS) -L. -o voro++ cmd_line.cc -lvoro++ + +-%.o: %.cc ++.cc.o: + $(CXX) $(CFLAGS) -c $< + + help: Doxyfile $(SOURCE) diff --git a/cmake/presets/all_off.cmake b/cmake/presets/all_off.cmake index f2f5782480..31674911bd 100644 --- a/cmake/presets/all_off.cmake +++ b/cmake/presets/all_off.cmake @@ -4,9 +4,8 @@ set(ALL_PACKAGES ADIOS AMOEBA + APIP ASPHERE - ATC - AWPMD BOCS BODY BPM @@ -73,7 +72,6 @@ set(ALL_PACKAGES PHONON PLUGIN PLUMED - POEMS PTM PYTHON QEQ diff --git a/cmake/presets/all_on.cmake b/cmake/presets/all_on.cmake index 8dc4632138..75a4ffde87 100644 --- a/cmake/presets/all_on.cmake +++ b/cmake/presets/all_on.cmake @@ -6,9 +6,8 @@ set(ALL_PACKAGES ADIOS AMOEBA + APIP ASPHERE - ATC - AWPMD BOCS BODY BPM @@ -75,7 +74,6 @@ set(ALL_PACKAGES PHONON PLUGIN PLUMED - POEMS PTM PYTHON QEQ diff --git a/cmake/presets/clang.cmake b/cmake/presets/clang.cmake index f55c5be44a..a5d0fc9820 100644 --- a/cmake/presets/clang.cmake +++ b/cmake/presets/clang.cmake @@ -14,14 +14,14 @@ endif() set(CMAKE_CXX_COMPILER "clang++" CACHE STRING "" FORCE) set(CMAKE_C_COMPILER "clang" CACHE STRING "" FORCE) set(CMAKE_Fortran_COMPILER ${CLANG_FORTRAN} CACHE STRING "" FORCE) -set(CMAKE_CXX_FLAGS_DEBUG "-Wall -Wextra -g" CACHE STRING "" FORCE) -set(CMAKE_CXX_FLAGS_RELWITHDEBINFO "-Wall -Wextra -g -O2 -DNDEBUG" CACHE STRING "" FORCE) +set(CMAKE_CXX_FLAGS_DEBUG "-Wall -Wextra -Wno-bitwise-instead-of-logical -g" CACHE STRING "" FORCE) +set(CMAKE_CXX_FLAGS_RELWITHDEBINFO "-Wall -Wextra -Wno-bitwise-instead-of-logical 
-g -O2 -DNDEBUG" CACHE STRING "" FORCE) set(CMAKE_CXX_FLAGS_RELEASE "-O3 -DNDEBUG" CACHE STRING "" FORCE) set(CMAKE_Fortran_FLAGS_DEBUG "-Wall -Wextra -g ${FC_STD_VERSION}" CACHE STRING "" FORCE) set(CMAKE_Fortran_FLAGS_RELWITHDEBINFO "-Wall -Wextra -g -O2 -DNDEBUG ${FC_STD_VERSION}" CACHE STRING "" FORCE) set(CMAKE_Fortran_FLAGS_RELEASE "-O3 -DNDEBUG ${FC_STD_VERSION}" CACHE STRING "" FORCE) -set(CMAKE_C_FLAGS_DEBUG "-Wall -Wextra -g" CACHE STRING "" FORCE) -set(CMAKE_C_FLAGS_RELWITHDEBINFO "-Wall -Wextra -g -O2 -DNDEBUG" CACHE STRING "" FORCE) +set(CMAKE_C_FLAGS_DEBUG "-Wall -Wextra -Wno-bitwise-instead-of-logical -g" CACHE STRING "" FORCE) +set(CMAKE_C_FLAGS_RELWITHDEBINFO "-Wall -Wextra -Wno-bitwise-instead-of-logical -g -O2 -DNDEBUG" CACHE STRING "" FORCE) set(CMAKE_C_FLAGS_RELEASE "-O3 -DNDEBUG" CACHE STRING "" FORCE) set(MPI_CXX "clang++" CACHE STRING "" FORCE) diff --git a/cmake/presets/hip_amd.cmake b/cmake/presets/hip_amd.cmake index 4b8945e0c7..2cf28c05c4 100644 --- a/cmake/presets/hip_amd.cmake +++ b/cmake/presets/hip_amd.cmake @@ -19,12 +19,19 @@ set(CMAKE_C_FLAGS_RELEASE "-O3 -DNDEBUG" CACHE STRING "" FORCE) set(MPI_CXX "hipcc" CACHE STRING "" FORCE) set(MPI_CXX_COMPILER "mpicxx" CACHE STRING "" FORCE) +set(MPI_C "hipcc" CACHE STRING "" FORCE) +set(MPI_C_COMPILER "mpicc" CACHE STRING "" FORCE) + +# change as needed. 
This is for Fedora Linux 41 and 42 +set(_libomp_root "/usr/lib/clang/18") +# we need to explicitly specify the include dir, since hipcc will +# compile each file twice and doesn't find omp.h the second time unset(HAVE_OMP_H_INCLUDE CACHE) set(OpenMP_C "hipcc" CACHE STRING "" FORCE) -set(OpenMP_C_FLAGS "-fopenmp" CACHE STRING "" FORCE) +set(OpenMP_C_FLAGS "-fopenmp=libomp -I${_libomp_root}/include" CACHE STRING "" FORCE) set(OpenMP_C_LIB_NAMES "omp" CACHE STRING "" FORCE) set(OpenMP_CXX "hipcc" CACHE STRING "" FORCE) -set(OpenMP_CXX_FLAGS "-fopenmp" CACHE STRING "" FORCE) +set(OpenMP_CXX_FLAGS "-fopenmp=libomp -I${_libomp_root}/include" CACHE STRING "" FORCE) set(OpenMP_CXX_LIB_NAMES "omp" CACHE STRING "" FORCE) set(OpenMP_omp_LIBRARY "libomp.so" CACHE PATH "" FORCE) diff --git a/cmake/presets/kokkos-cuda-nowrapper.cmake b/cmake/presets/kokkos-cuda-nowrapper.cmake new file mode 100644 index 0000000000..a2569a3d54 --- /dev/null +++ b/cmake/presets/kokkos-cuda-nowrapper.cmake @@ -0,0 +1,11 @@ +# preset that enables KOKKOS and selects CUDA compilation without nvcc_wrapper +# enabled as well. The GPU architecture *must* match your hardware (If not manually set, Kokkos will try to autodetect it). +set(PKG_KOKKOS ON CACHE BOOL "" FORCE) +set(Kokkos_ENABLE_SERIAL ON CACHE BOOL "" FORCE) +set(Kokkos_ENABLE_CUDA ON CACHE BOOL "" FORCE) + +# If KSPACE is also enabled, use CUFFT for FFTs +set(FFT_KOKKOS "CUFFT" CACHE STRING "" FORCE) + +# hide deprecation warnings temporarily for stable release +set(Kokkos_ENABLE_DEPRECATION_WARNINGS OFF CACHE BOOL "" FORCE) diff --git a/cmake/presets/kokkos-cuda.cmake b/cmake/presets/kokkos-cuda.cmake index 31942b8fae..7b2ef6bddd 100644 --- a/cmake/presets/kokkos-cuda.cmake +++ b/cmake/presets/kokkos-cuda.cmake @@ -1,10 +1,8 @@ -# preset that enables KOKKOS and selects CUDA compilation with OpenMP -# enabled as well. 
The GPU architecture *must* match your hardware +# preset that enables KOKKOS and selects CUDA compilation using the nvcc_wrapper +# enabled as well. The GPU architecture *must* match your hardware (If not manually set, Kokkos will try to autodetect it). set(PKG_KOKKOS ON CACHE BOOL "" FORCE) set(Kokkos_ENABLE_SERIAL ON CACHE BOOL "" FORCE) set(Kokkos_ENABLE_CUDA ON CACHE BOOL "" FORCE) -set(Kokkos_ARCH_PASCAL60 ON CACHE BOOL "" FORCE) -set(BUILD_OMP ON CACHE BOOL "" FORCE) get_filename_component(NVCC_WRAPPER_CMD ${CMAKE_CURRENT_SOURCE_DIR}/../lib/kokkos/bin/nvcc_wrapper ABSOLUTE) set(CMAKE_CXX_COMPILER ${NVCC_WRAPPER_CMD} CACHE FILEPATH "" FORCE) diff --git a/cmake/presets/kokkos-hip.cmake b/cmake/presets/kokkos-hip.cmake index c1968c0ffa..58b09020fb 100644 --- a/cmake/presets/kokkos-hip.cmake +++ b/cmake/presets/kokkos-hip.cmake @@ -1,8 +1,8 @@ -# preset that enables KOKKOS and selects HIP compilation with OpenMP -# enabled as well. Also sets some performance related compiler flags. +# preset that enables KOKKOS and selects HIP compilation withOUT OpenMP. +# Kokkos OpenMP is not compatible with the second pass of hipcc. 
set(PKG_KOKKOS ON CACHE BOOL "" FORCE) set(Kokkos_ENABLE_SERIAL ON CACHE BOOL "" FORCE) -set(Kokkos_ENABLE_OPENMP ON CACHE BOOL "" FORCE) +set(Kokkos_ENABLE_OPENMP OFF CACHE BOOL "" FORCE) set(Kokkos_ENABLE_CUDA OFF CACHE BOOL "" FORCE) set(Kokkos_ENABLE_HIP ON CACHE BOOL "" FORCE) set(Kokkos_ARCH_VEGA90A on CACHE BOOL "" FORCE) @@ -11,11 +11,11 @@ set(BUILD_OMP ON CACHE BOOL "" FORCE) set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -munsafe-fp-atomics" CACHE STRING "" FORCE) -# If KSPACE is also enabled, use CUFFT for FFTs +# If KSPACE is also enabled, use HIPFFT for FFTs set(FFT_KOKKOS "HIPFFT" CACHE STRING "" FORCE) # hide deprecation warnings temporarily for stable release -set(Kokkos_ENABLE_DEPRECATION_WARNINGS OFF CACHE BOOL "" FORCE) +#set(Kokkos_ENABLE_DEPRECATION_WARNINGS OFF CACHE BOOL "" FORCE) # these flags are needed to build with Cray MPICH on OLCF Crusher #-D CMAKE_CXX_FLAGS="-I/${MPICH_DIR}/include" diff --git a/cmake/presets/kokkos-openmp.cmake b/cmake/presets/kokkos-openmp.cmake index 3a7c19ff3c..f5fe4ca3cd 100644 --- a/cmake/presets/kokkos-openmp.cmake +++ b/cmake/presets/kokkos-openmp.cmake @@ -2,7 +2,6 @@ set(PKG_KOKKOS ON CACHE BOOL "" FORCE) set(Kokkos_ENABLE_SERIAL ON CACHE BOOL "" FORCE) set(Kokkos_ENABLE_OPENMP ON CACHE BOOL "" FORCE) -set(Kokkos_ENABLE_CUDA OFF CACHE BOOL "" FORCE) set(BUILD_OMP ON CACHE BOOL "" FORCE) # hide deprecation warnings temporarily for stable release diff --git a/cmake/presets/kokkos-serial.cmake b/cmake/presets/kokkos-serial.cmake index f1bda7124a..b60385f2d7 100644 --- a/cmake/presets/kokkos-serial.cmake +++ b/cmake/presets/kokkos-serial.cmake @@ -1,8 +1,6 @@ # preset that enables KOKKOS and selects serial compilation only set(PKG_KOKKOS ON CACHE BOOL "" FORCE) set(Kokkos_ENABLE_SERIAL ON CACHE BOOL "" FORCE) -set(Kokkos_ENABLE_OPENMP OFF CACHE BOOL "" FORCE) -set(Kokkos_ENABLE_CUDA OFF CACHE BOOL "" FORCE) # hide deprecation warnings temporarily for stable release set(Kokkos_ENABLE_DEPRECATION_WARNINGS OFF CACHE BOOL 
"" FORCE) diff --git a/cmake/presets/mingw-cross.cmake b/cmake/presets/mingw-cross.cmake index b5c5adb1f6..0776afee9f 100644 --- a/cmake/presets/mingw-cross.cmake +++ b/cmake/presets/mingw-cross.cmake @@ -1,8 +1,6 @@ set(WIN_PACKAGES AMOEBA ASPHERE - ATC - AWPMD BOCS BODY BPM @@ -60,7 +58,6 @@ set(WIN_PACKAGES PERI PHONON PLUGIN - POEMS PTM QEQ QTB diff --git a/cmake/presets/most.cmake b/cmake/presets/most.cmake index 05282eebdd..9354070011 100644 --- a/cmake/presets/most.cmake +++ b/cmake/presets/most.cmake @@ -55,7 +55,6 @@ set(ALL_PACKAGES PERI PHONON PLUGIN - POEMS QEQ REACTION REAXFF diff --git a/cmake/presets/nolib.cmake b/cmake/presets/nolib.cmake index 4a4a557505..d7bf422c70 100644 --- a/cmake/presets/nolib.cmake +++ b/cmake/presets/nolib.cmake @@ -3,8 +3,7 @@ set(PACKAGES_WITH_LIB ADIOS - ATC - AWPMD + APIP COMPRESS ELECTRODE GPU diff --git a/cmake/presets/windows.cmake b/cmake/presets/windows.cmake index 71241c559c..678b6dafe2 100644 --- a/cmake/presets/windows.cmake +++ b/cmake/presets/windows.cmake @@ -1,7 +1,6 @@ set(WIN_PACKAGES AMOEBA ASPHERE - AWPMD BOCS BODY BPM @@ -53,7 +52,6 @@ set(WIN_PACKAGES PERI PHONON PLUGIN - POEMS PTM QEQ QTB diff --git a/doc/.gitignore b/doc/.gitignore index 28e583fa0b..d75fff1f90 100644 --- a/doc/.gitignore +++ b/doc/.gitignore @@ -3,6 +3,8 @@ /fasthtml /epub /latex +/linkcheck +/linkchecker-out.html /mathjax /spelling /LAMMPS.epub @@ -17,6 +19,7 @@ *.el /utils/sphinx-config/_static/mathjax /utils/sphinx-config/_static/polyfill.js +/utils/sphinx-config/_themes/lammps_theme/search.html /src/pairs.rst /src/bonds.rst /src/angles.rst diff --git a/doc/Makefile b/doc/Makefile index 92132e7d8c..727edcdfa5 100644 --- a/doc/Makefile +++ b/doc/Makefile @@ -22,6 +22,7 @@ HAS_PYTHON3 = NO HAS_DOXYGEN = NO HAS_PDFLATEX = NO HAS_PANDOC = NO +WEB_SEARCH = NO ifeq ($(shell type python3 >/dev/null 2>&1; echo $$?), 0) HAS_PYTHON3 = YES @@ -51,7 +52,7 @@ SPHINXEXTRA = -j $(shell $(PYTHON) -c 'import multiprocessing;print(multiprocess # we 
only want to use explicitly listed files. DOXYFILES = $(shell sed -n -e 's/\#.*$$//' -e '/^ *INPUT \+=/,/^[A-Z_]\+ \+=/p' doxygen/Doxyfile.in | sed -e 's/@LAMMPS_SOURCE_DIR@/..\/src/g' -e 's/\\//g' -e 's/ \+/ /' -e 's/[A-Z_]\+ \+= *\(YES\|NO\|\)//') -.PHONY: help clean-all clean clean-spelling epub mobi html pdf spelling anchor_check style_check char_check role_check xmlgen fasthtml fasthtml-init +.PHONY: help clean-all clean clean-spelling epub mobi html pdf spelling anchor_check style_check char_check link_check role_check xmlgen fasthtml fasthtml-init FASTHTMLFILES = $(patsubst $(RSTDIR)/%.rst,fasthtml/%.html,$(wildcard $(RSTDIR)/*rst)) # ------------------------------------------ @@ -65,12 +66,15 @@ help: @echo " mobi convert ePUB to MOBI format manual for e-book readers (e.g. Kindle)" @echo " (requires ebook-convert tool from calibre)" @echo " fasthtml approximate HTML page creation in fasthtml dir (for development)" + @echo " upgrade upgrade sphinx, extensions, and dependencies to latest supported versions" @echo " clean remove all intermediate files" @echo " clean-all reset the entire build environment" @echo " anchor_check scan for duplicate anchor labels" @echo " style_check check for complete and consistent style lists" @echo " package_check check for complete and consistent package lists" + @echo " char_check check for non-ASCII characters" @echo " role_check check for misformatted role keywords" + @echo " link_check check external links in the manual" @echo " spelling spell-check the manual" # ------------------------------------------ @@ -78,12 +82,15 @@ help: clean-all: clean rm -rf $(BUILDDIR)/docenv $(MATHJAX) $(BUILDDIR)/LAMMPS.mobi $(BUILDDIR)/LAMMPS.epub $(BUILDDIR)/Manual.pdf -clean: clean-spelling - rm -rf $(BUILDDIR)/html $(BUILDDIR)/epub $(BUILDDIR)/latex $(BUILDDIR)/doctrees $(BUILDDIR)/doxygen/xml $(BUILDDIR)/doxygen-warn.log $(BUILDDIR)/doxygen/Doxyfile $(SPHINXCONFIG)/conf.py $(BUILDDIR)/fasthtml +clean: clean-spelling clean-linkcheck + 
rm -rf $(BUILDDIR)/html $(BUILDDIR)/epub $(BUILDDIR)/latex $(BUILDDIR)/doctrees $(BUILDDIR)/doxygen/xml $(BUILDDIR)/doxygen-warn.log $(BUILDDIR)/doxygen/Doxyfile $(SPHINXCONFIG)/conf.py $(BUILDDIR)/fasthtml $(BUILDDIR)/spelling clean-spelling: rm -rf $(BUILDDIR)/spelling +clean-linkcheck: + rm -rf $(BUILDDIR)/linkcheck $(BUILDDIR)/linkchecker-out.html + $(SPHINXCONFIG)/conf.py: $(SPHINXCONFIG)/conf.py.in sed -e 's,@DOXYGEN_XML_DIR@,$(BUILDDIR)/doxygen/xml,g' \ -e 's,@LAMMPS_SOURCE_DIR@,$(BUILDDIR)/../src,g' \ @@ -95,12 +102,17 @@ globbed-tocs: html: xmlgen globbed-tocs $(VENV) $(SPHINXCONFIG)/conf.py $(ANCHORCHECK) $(MATHJAX) @if [ "$(HAS_BASH)" == "NO" ] ; then echo "bash was not found at $(OSHELL)! Please use: $(MAKE) SHELL=/path/to/bash" 1>&2; exit 1; fi + @if [ "$(WEB_SEARCH)" == "YES" ] ; then \ + cp -v utils/sphinx-config/_themes/lammps_theme/google_search.html \ + utils/sphinx-config/_themes/lammps_theme/search.html; \ + else \ + cp -v utils/sphinx-config/_themes/lammps_theme/local_search.html \ + utils/sphinx-config/_themes/lammps_theme/search.html; \ + fi @$(MAKE) $(MFLAGS) -C graphviz all @(\ . $(VENV)/bin/activate ; env PYTHONWARNINGS= PYTHONDONTWRITEBYTECODE=1 \ sphinx-build -E $(SPHINXEXTRA) -b html -c $(SPHINXCONFIG) -d $(BUILDDIR)/doctrees $(RSTDIR) html ;\ - touch $(RSTDIR)/Fortran.rst ; env PYTHONWARNINGS= PYTHONDONTWRITEBYTECODE=1 \ - sphinx-build $(SPHINXEXTRA) -b html -c $(SPHINXCONFIG) -d $(BUILDDIR)/doctrees $(RSTDIR) html ;\ ln -sf Manual.html html/index.html;\ rm -f $(BUILDDIR)/doxygen/xml/run.stamp;\ echo "############################################" ; env PYTHONWARNINGS= PYTHONDONTWRITEBYTECODE=1 \ @@ -153,6 +165,17 @@ spelling: xmlgen globbed-tocs $(SPHINXCONFIG)/conf.py $(VENV) $(SPHINXCONFIG)/fa ) @echo "Spell check finished." +link_check: xmlgen globbed-tocs $(SPHINXCONFIG)/conf.py $(VENV) $(SPHINXCONFIG)/false_positives.txt + @if [ "$(HAS_BASH)" == "NO" ] ; then echo "bash was not found at $(OSHELL)! 
Please use: $(MAKE) SHELL=/path/to/bash" 1>&2; exit 1; fi + @(\ + . $(VENV)/bin/activate ; \ + cp $(SPHINXCONFIG)/false_positives.txt $(RSTDIR)/; env PYTHONWARNINGS= PYTHONDONTWRITEBYTECODE=1 \ + sphinx-build -b linkcheck -c $(SPHINXCONFIG) -d $(BUILDDIR)/doctrees $(RSTDIR) linkcheck ;\ + rm -f $(BUILDDIR)/doxygen/xml/run.stamp;\ + deactivate ;\ + ) + @echo "Link check finished." + epub: xmlgen globbed-tocs $(VENV) $(SPHINXCONFIG)/conf.py $(ANCHORCHECK) @if [ "$(HAS_BASH)" == "NO" ] ; then echo "bash was not found at $(OSHELL)! Please use: $(MAKE) SHELL=/path/to/bash" 1>&2; exit 1; fi @$(MAKE) $(MFLAGS) -C graphviz all @@ -162,8 +185,6 @@ epub: xmlgen globbed-tocs $(VENV) $(SPHINXCONFIG)/conf.py $(ANCHORCHECK) @(\ . $(VENV)/bin/activate ; env PYTHONWARNINGS= PYTHONDONTWRITEBYTECODE=1 \ sphinx-build -E $(SPHINXEXTRA) -b epub -c $(SPHINXCONFIG) -d $(BUILDDIR)/doctrees $(RSTDIR) epub ;\ - touch $(RSTDIR)/Fortran.rst ; env PYTHONWARNINGS= PYTHONDONTWRITEBYTECODE=1 \ - sphinx-build $(SPHINXEXTRA) -b epub -c $(SPHINXCONFIG) -d $(BUILDDIR)/doctrees $(RSTDIR) epub ;\ rm -f $(BUILDDIR)/doxygen/xml/run.stamp;\ deactivate ;\ ) @@ -183,8 +204,6 @@ pdf: xmlgen globbed-tocs $(VENV) $(SPHINXCONFIG)/conf.py $(ANCHORCHECK) @(\ . $(VENV)/bin/activate ; env PYTHONWARNINGS= PYTHONDONTWRITEBYTECODE=1 \ sphinx-build -E $(SPHINXEXTRA) -b latex -c $(SPHINXCONFIG) -d $(BUILDDIR)/doctrees $(RSTDIR) latex ;\ - touch $(RSTDIR)/Fortran.rst ; env PYTHONWARNINGS= PYTHONDONTWRITEBYTECODE=1 \ - sphinx-build $(SPHINXEXTRA) -b latex -c $(SPHINXCONFIG) -d $(BUILDDIR)/doctrees $(RSTDIR) latex ;\ rm -f $(BUILDDIR)/doxygen/xml/run.stamp;\ echo "############################################" ; env PYTHONWARNINGS= PYTHONDONTWRITEBYTECODE=1 \ rst_anchor_check src/*.rst ;\ @@ -249,11 +268,13 @@ role_check : @( env LC_ALL=C grep -n -E '^ *\.\. [a-z0-9]+:(\s+.*|)$$' \ $(RSTDIR)/*.rst ../src/*.{cpp,h} ../src/*/*.{cpp,h} && exit 1 || : ) -link_check : $(VENV) html +upgrade: $(VENV) @(\ - . 
$(VENV)/bin/activate ; env PYTHONWARNINGS= PYTHONDONTWRITEBYTECODE=1 \ - linkchecker -F html --check-extern html/Manual.html ;\ - deactivate ;\ + . $(VENV)/bin/activate; \ + pip $(PIP_OPTIONS) install --upgrade pip; \ + pip $(PIP_OPTIONS) install --upgrade wheel; \ + pip $(PIP_OPTIONS) install --upgrade -r $(BUILDDIR)/utils/requirements.txt; \ + deactivate;\ ) xmlgen : doxygen/xml/index.xml diff --git a/doc/graphviz/lammps-releases.dot b/doc/graphviz/lammps-releases.dot index f641cac029..fb11f4bd68 100644 --- a/doc/graphviz/lammps-releases.dot +++ b/doc/graphviz/lammps-releases.dot @@ -5,13 +5,13 @@ digraph releases { github -> develop [label="Merge commits"]; { rank = "same"; - work [shape="none" label="Development branches:"] + work [shape="none" label="Development branches:" fontname="bold"] develop [label="'develop' branch" height=0.75]; maintenance [label="'maintenance' branch" height=0.75]; }; { rank = "same"; - upload [shape="none" label="Release branches:"] + upload [shape="none" label="Release branches:" fontname="bold"] release [label="'release' branch" height=0.75]; stable [label="'stable' branch" height=0.75]; }; @@ -22,7 +22,7 @@ digraph releases { maintenance -> stable [label="Updates to stable release"]; { rank = "same"; - tag [shape="none" label="Applied tags:"]; + tag [shape="none" label="Applied tags:" fontname="bold"]; patchtag [shape="box" label="patch_"]; stabletag [shape="box" label="stable_"]; updatetag [shape="box" label="stable__update"]; diff --git a/doc/graphviz/pylammps-invoke-lammps.dot b/doc/graphviz/pylammps-invoke-lammps.dot deleted file mode 100644 index 0d9e65a5fe..0000000000 --- a/doc/graphviz/pylammps-invoke-lammps.dot +++ /dev/null @@ -1,30 +0,0 @@ -// PyLammps -> LAMMPS -digraph api { - rankdir="LR"; - edge [constraint=false]; - python [shape=box style=filled color="#4e9a06" fontcolor=white label="Python\nScript" height=1.5]; - subgraph cluster0 { - style=filled; - color="#e5e5e5"; - height=1.5; - rank=same; - pylammps 
[shape=box style=filled height=1 color="#729fcf" label="(I)PyLammps"]; - lammps [shape=box style=filled height=1 color="#729fcf" label="lammps\n\nptr: 0x01abcdef"]; - pylammps -> lammps [dir=both]; - label="LAMMPS Python Module"; - } - subgraph cluster1 { - style=filled; - color="#e5e5e5"; - height=1.5; - capi [shape=box style=filled height=1 color="#666666" fontcolor=white label="LAMMPS\nC Library API"]; - instance [shape=box style=filled height=1 color="#3465a4" fontcolor=white label="LAMMPS\ninstance\n\n0x01abcdef"]; - capi -> instance [dir=both constraint=true]; - label="LAMMPS Shared Library"; - } - python -> pylammps [dir=both constraint=true]; - lammps -> capi [dir=both label=ctypes constraint=true]; - - pylammps:e -> instance:ne [dir=back, style=dashed label="captured standard output"]; -} - diff --git a/doc/lammps.1 b/doc/lammps.1 index 01dba5b277..28f639362b 100644 --- a/doc/lammps.1 +++ b/doc/lammps.1 @@ -1,7 +1,7 @@ -.TH LAMMPS "1" "2 April 2025" "2025-04-02" +.TH LAMMPS "1" "10 September 2025" "2025-09-10" .SH NAME .B LAMMPS -\- Molecular Dynamics Simulator. Version 2 April 2025 +\- Molecular Dynamics Simulator. Version 10 September 2025 .SH SYNOPSIS .B lmp diff --git a/doc/src/Bibliography.rst b/doc/src/Bibliography.rst index 0df3cf2f83..d38c8d9bb8 100644 --- a/doc/src/Bibliography.rst +++ b/doc/src/Bibliography.rst @@ -55,9 +55,6 @@ Bibliography **(Andersen)** H.\ Andersen, J of Comp Phys, 52, 24-34 (1983). -**(Anderson)** - Anderson, Mukherjee, Critchley, Ziegler, and Lipton "POEMS: Parallelizable Open-source Efficient Multibody Software ", Engineering With Computers (2006). - **(Appshaw)** Appshaw, Seddon, Hanna, Soft. Matter,18, 1747(2022). @@ -380,7 +377,7 @@ Bibliography Eike and Maginn, Journal of Chemical Physics, 124, 164503 (2006). 
**(Elliott)** - Elliott, Tadmor and Bernstein, `https://openkim.org/kim-api `_ (2011) doi: `https://doi.org/10.25950/FF8F563A `_ + Elliott, Tadmor and Bernstein, `https://openkim.org/kim-api/ `_ (2011) doi: `https://doi.org/10.25950/FF8F563A `_ **(Ellis)** Ellis, Fiedler, Popoola, Modine, Stephens, Thompson, Cangi, Rajamanickam, Phys Rev B, 104, 035120, (2021) @@ -1098,9 +1095,6 @@ Bibliography **(Parrinello)** Parrinello and Rahman, J Appl Phys, 52, 7182 (1981). -**(PASS)** - PASS webpage: https://www.sdu.dk/en/DPASS - **(Paula Leite2016)** Paula Leite , Freitas, Azevedo, and de Koning, J Chem Phys, 126, 044509 (2016). @@ -1213,7 +1207,7 @@ Bibliography S. W. Rick, S. J. Stuart, B. J. Berne, J Chem Phys 101, 6141 **(Roberts)** - R. Roberts (2019) "Evenly Distributing Points in a Triangle." Extreme Learning. ``_ + R. Roberts (2019) "Evenly Distributing Points in a Triangle." Extreme Learning. ``_ **(Rohart)** Rohart and Thiaville, Physical Review B, 88(18), 184422. (2013). diff --git a/doc/src/Build.rst b/doc/src/Build.rst index ff09ee5678..c76ce2f434 100644 --- a/doc/src/Build.rst +++ b/doc/src/Build.rst @@ -14,32 +14,10 @@ As an alternative, you can download a package with pre-built executables or automated build trees, as described in the :doc:`Install ` section of the manual. -Prerequisites -------------- - -Which software you need to compile and use LAMMPS strongly depends on -which :doc:`features and settings ` and which -:doc:`optional packages ` you are trying to include. -Common to all is that you need a C++ and C compiler, where the C++ -compiler has to support at least the C++11 standard (note that some -compilers require command-line flag to activate C++11 support). 
-Furthermore, if you are building with CMake, you need at least CMake -version 3.20 and a compatible build tool (make or ninja-build); if you -are building the the legacy GNU make based build system you need GNU -make (other make variants are not going to work since the build system -uses features unique to GNU make) and a Unix-like build environment with -a Bourne shell, and shell tools like "sed", "grep", "touch", "test", -"tr", "cp", "mv", "rm", "ln", "diff" and so on. Parts of LAMMPS -interface with or use Python version 3.6 or later. - -The LAMMPS developers aim to keep LAMMPS very portable and usable - -at least in parts - on most operating systems commonly used for -running MD simulations. Please see the :doc:`section on portablility -` for more details. - .. toctree:: :maxdepth: 1 + Build_prerequisites Build_cmake Build_make Build_link diff --git a/doc/src/Build_basics.rst b/doc/src/Build_basics.rst index f9cf251688..3747b704a3 100644 --- a/doc/src/Build_basics.rst +++ b/doc/src/Build_basics.rst @@ -202,12 +202,7 @@ LAMMPS. ` you have included in the build. CMake will check if the detected or selected compiler is compatible with the C++ support requirements of LAMMPS and stop with an error, if this - is not the case. A C++11 compatible compiler is currently - required, but a transition to require C++17 is in progress and - planned to be completed in Summer 2025. Currently, setting - ``-DLAMMPS_CXX11=yes`` is required when configuring with CMake while - using a C++11 compatible compiler that does not support C++17, - otherwise setting ``-DCMAKE_CXX_STANDARD=17`` is preferred. + is not the case. A C++17 compatible compiler is required. You can tell CMake to look for a specific compiler with setting CMake variables (listed below) during configuration. For a few @@ -225,7 +220,6 @@ LAMMPS. 
-D CMAKE_Fortran_COMPILER=name # name of Fortran compiler -D CMAKE_CXX_STANDARD=17 # put compiler in C++17 mode - -D LAMMPS_CXX11=yes # enforce compilation in C++11 mode -D CMAKE_CXX_FLAGS=string # flags to use with C++ compiler -D CMAKE_C_FLAGS=string # flags to use with C compiler -D CMAKE_Fortran_FLAGS=string # flags to use with Fortran compiler @@ -310,23 +304,14 @@ LAMMPS. In file included from ../pointers.h:24:0, from ../input.h:17, from ../main.cpp:16: - ../lmptype.h:34:2: error: #error LAMMPS requires a C++11 (or later) compliant compiler. Enable C++11 compatibility or upgrade the compiler. + ../lmptype.h:34:2: error: #error LAMMPS requires a C++17 (or later) compliant compiler. Enable C++17 compatibility or upgrade the compiler. then you have either an unsupported (old) compiler or you have - to turn on C++11 mode. The latter applies to GCC 4.8.x shipped - with RHEL 7.x and CentOS 7.x or GCC 5.4.x shipped with Ubuntu16.04. - For those compilers, you need to add the ``-std=c++11`` flag. - If there is no compiler that supports this flag (or equivalent), - you would have to install a newer compiler that supports C++11; - either as a binary package or through compiling from source. - - While a C++11 compatible compiler is currently sufficient to compile - LAMMPS, a transition to require C++17 is in progress and planned to - be completed in Summer 2025. Currently, setting ``-DLAMMPS_CXX11`` - in the ``LMP_INC =`` line in the machine makefile is required when - using a C++11 compatible compiler that does not support C++17. - Otherwise, to enable C++17 support (if not enabled by default) using - a compiler flag like ``-std=c++17`` in CCFLAGS may needed. + to turn on C++17 mode. In the latter case, you need to add + the ``-std=c++17`` flag. If there is no compiler that supports + this flag (or equivalent), you would have to install a newer + compiler that supports C++17; either as a binary package or + through compiling from source.
If you build LAMMPS with any :doc:`Speed_packages` included, there may be specific compiler or linker flags that are either @@ -494,7 +479,7 @@ the debug information from the LAMMPS executable: .. _tools: Build LAMMPS tools ------------------------------- +------------------ Some tools described in :doc:`Auxiliary tools ` can be built directly using CMake or Make. @@ -527,7 +512,7 @@ using CMake or Make. .. note:: - Building the LAMMPS-GUI *requires* building LAMMPS with CMake. + Building LAMMPS-GUI *requires* building LAMMPS with CMake. ---------- diff --git a/doc/src/Build_cmake.rst b/doc/src/Build_cmake.rst index 2349eebf62..05fe976e80 100644 --- a/doc/src/Build_cmake.rst +++ b/doc/src/Build_cmake.rst @@ -217,6 +217,5 @@ Most Linux distributions offer pre-compiled cmake packages through their package management system. If you do not have CMake or a recent enough version (Note: for CentOS 7.x you need to enable the EPEL repository), you can download the latest version from `https://cmake.org/download/ -`_. Instructions on how to install it on -various platforms can be found `on this page -`_. +`_. Links to more details on CMake can +be found `on this page `_. diff --git a/doc/src/Build_development.rst b/doc/src/Build_development.rst index 5c6475c7fa..6845079f8f 100644 --- a/doc/src/Build_development.rst +++ b/doc/src/Build_development.rst @@ -28,28 +28,6 @@ variable VERBOSE set to 1: ---------- -.. _clang-tidy: - -Enable static code analysis with clang-tidy (CMake only) --------------------------------------------------------- - -The `clang-tidy tool `_ is a -static code analysis tool to diagnose (and potentially fix) typical -programming errors or coding style violations. It has a modular framework -of tests that can be adjusted to help identifying problems before they -become bugs and also assist in modernizing large code bases (like LAMMPS). -It can be enabled for all C++ code with the following CMake flag - -.. 
code-block:: bash - - -D ENABLE_CLANG_TIDY=value # value = no (default) or yes - -With this flag enabled all source files will be processed twice, first to -be compiled and then to be analyzed. Please note that the analysis can be -significantly more time-consuming than the compilation itself. - ----------- - .. _iwyu_processing: Report missing and unneeded '#include' statements (CMake only) @@ -523,7 +501,7 @@ to do this to install it via pip: .. code-block:: bash - pip install git+https://github.com/gcovr/gcovr.git + python3 -m pip install gcovr After post-processing with ``gen_coverage_html`` the results are in a folder ``coverage_html`` and can be viewed with a web browser. diff --git a/doc/src/Build_extras.rst b/doc/src/Build_extras.rst index 26cf776f4d..f8ff692b5e 100644 --- a/doc/src/Build_extras.rst +++ b/doc/src/Build_extras.rst @@ -23,10 +23,12 @@ in addition to as described on the :doc:`Build_package ` page. For a CMake build there may be additional optional or required -variables to set. For a build with make, a provided library under the -lammps/lib directory may need to be built first. Or an external -library may need to exist on your system or be downloaded and built. -You may need to tell LAMMPS where it is found on your system. +variables to set. + +.. versionchanged:: 10Sep2025 + +The traditional build system with GNU make no longer supports packages +that require extra steps in the ``lammps/lib`` directory. This is the list of packages that may require additional steps. @@ -35,8 +37,7 @@ This is the list of packages that may require additional steps. :columns: 6 * :ref:`ADIOS ` - * :ref:`ATC ` - * :ref:`AWPMD ` + * :ref:`APIP ` * :ref:`COLVARS ` * :ref:`COMPRESS ` * :ref:`ELECTRODE ` @@ -59,7 +60,6 @@ This is the list of packages that may require additional steps. 
* :ref:`OPENMP ` * :ref:`OPT ` * :ref:`PLUMED ` - * :ref:`POEMS ` * :ref:`PYTHON ` * :ref:`QMMM ` * :ref:`RHEO ` @@ -103,11 +103,10 @@ versions use an incompatible API and thus LAMMPS will fail to compile. .. tab:: Traditional make - To include support for Zstandard compression, ``-DLAMMPS_ZSTD`` - must be added to the compiler flags. If make cannot find the - libraries, you can edit the file ``lib/compress/Makefile.lammps`` - to specify the paths and library names. This must be done - **before** the package is installed. + .. versionchanged:: 10Sep2025 + + The COMPRESS package no longer supports the traditional make build. + You need to build LAMMPS with CMake. ---------- @@ -129,7 +128,7 @@ CMake build -D GPU_PREC=value # precision setting # value = double or mixed (default) or single -D GPU_ARCH=value # primary GPU hardware choice for GPU_API=cuda - # value = sm_XX (see below, default is sm_50) + # value = sm_XX (see below, default is sm_75) -D GPU_DEBUG=value # enable debug code in the GPU package library, # mostly useful for developers # value = yes or no (default) @@ -137,7 +136,7 @@ CMake build # GPU_API=HIP -D HIP_ARCH=value # primary GPU hardware choice for GPU_API=hip # value depends on selected HIP_PLATFORM - # default is 'gfx906' for HIP_PLATFORM=amd and 'sm_50' for + # default is 'gfx906' for HIP_PLATFORM=amd and 'sm_75' for # HIP_PLATFORM=nvcc -D HIP_USE_DEVICE_SORT=value # enables GPU sorting # value = yes (default) or no @@ -155,6 +154,26 @@ CMake build # no local OpenCL headers/libs needed # value = yes (default) or no +The GPU package supports 3 precision modes: single, double, and mixed, with +the latter being the default. In the double precision mode, atom positions, +forces and energies are stored, computed and accumulated in double precision. +In the mixed precision mode, forces and energies are accumulated in double precision +while atom coordinates are stored and arithmetic operations are performed +in single precision.
In the single precision mode, all are stored, executed +and accumulated in single precision. + +To specify the precision mode (output to the screen before LAMMPS runs for +verification), set ``GPU_PREC`` to one of ``single``, ``double``, or ``mixed``. + +Some accelerators or OpenCL implementations only support single precision. +This mode should be used with care and appropriate validation as the errors +can scale with system size in this implementation. This can be useful for +accelerating test runs when setting up a simulation for production runs on +another machine. In the case where only single precision is supported, either +LAMMPS must be compiled with ``-DFFT_SINGLE`` to use PPPM with GPU acceleration +or GPU acceleration should be disabled for PPPM (e.g. suffix off or ``pair/only`` +as described in the LAMMPS documentation). + ``GPU_ARCH`` settings for different GPU hardware is as follows: * ``sm_30`` for Kepler (supported since CUDA 5 and until CUDA 10.x) @@ -163,9 +182,12 @@ CMake build * ``sm_60`` or ``sm_61`` for Pascal (supported since CUDA 8) * ``sm_70`` for Volta (supported since CUDA 9) * ``sm_75`` for Turing (supported since CUDA 10) -* ``sm_80`` or sm_86 for Ampere (supported since CUDA 11, sm_86 since CUDA 11.1) +* ``sm_80`` or ``sm_86`` for Ampere (supported since CUDA 11, ``sm_86`` since CUDA 11.1) * ``sm_89`` for Lovelace (supported since CUDA 11.8) -* ``sm_90`` for Hopper (supported since CUDA 12.0) +* ``sm_90`` or ``sm_90a`` for Hopper (supported since CUDA 12.0) +* ``sm_100`` or ``sm_103`` for Blackwell B100/B200/B300 (supported since CUDA 12.8) +* ``sm_120`` for Blackwell B20x/B40 (supported since CUDA 12.8) +* ``sm_121`` for Blackwell (supported since CUDA 12.9) A more detailed list can be found, for example, at `Wikipedia's CUDA article `_ @@ -185,10 +207,19 @@ CUDA driver in use. When compiling for OpenCL, OpenCL version 1.2 or later is required and the GPU must be supported by the GPU driver and OpenCL runtime bundled with the driver. 
-When building with CMake, you **must NOT** build the GPU library in -``lib/gpu`` using the traditional build procedure. CMake will detect -files generated by that process and will terminate with an error and a -suggestion for how to remove them. +Please note that the GPU library accesses the CUDA driver library +directly, so it needs to be linked with the CUDA driver library +(``libcuda.so``) that ships with the Nvidia driver. If you are +compiling LAMMPS on the head node of a GPU cluster, this library may not +be installed, so you may need to copy it over from one of the compute +nodes (best into this directory). Recent versions of the CUDA toolkit +starting from CUDA 9 provide a dummy ``libcuda.so`` library (typically +under ``$(CUDA_HOME)/lib64/stubs``), that can be used for linking. + +To support the CUDA multi-process server (MPS) you can set the define +``-DCUDA_MPS_SUPPORT``. Please note that in this case you must **not** +use the CUDA performance primitives and thus set the variable +``CUDPP_OPT`` to empty. If you are compiling for OpenCL, the default setting is to download, build, and link with a static OpenCL ICD loader library and standard @@ -197,14 +228,29 @@ needs to be present and only OpenCL compatible drivers need to be installed to use OpenCL. If this is not desired, you can set ``USE_STATIC_OPENCL_LOADER`` to ``no``. -The GPU library has some multi-thread support using OpenMP. If LAMMPS -is built with ``-D BUILD_OMP=on`` this will also be enabled. +If ``GERYON_NUMA_FISSION`` is defined at build time (``-DGPU_DEBUG=no``), +LAMMPS will consider separate NUMA nodes on GPUs or accelerators as +separate devices. For example, a 2-socket CPU would appear as two separate +devices for OpenCL (and LAMMPS would require two MPI processes to use both +sockets with the GPU library - each with its own device ID as output by +ocl_get_devices). OpenCL version 1.2 or later is required. 
If you are compiling with HIP, note that before running CMake you will have to set appropriate environment variables. Some variables such as ``HCC_AMDGPU_TARGET`` (for ROCm <= 4.0) or ``CUDA_PATH`` are necessary for ``hipcc`` and the linker to work correctly. +When compiling for HIP ROCm, GPU sorting with ``-D +HIP_USE_DEVICE_SORT=on`` requires installing the ``hipcub`` library +(https://github.com/ROCmSoftwarePlatform/hipCUB). The HIP CUDA-backend +additionally requires cub (https://nvlabs.github.io/cub). Setting +``-DDOWNLOAD_CUB=yes`` will download and compile CUB. + +The GPU library has some multi-thread support using OpenMP. If LAMMPS +is built with ``-D BUILD_OMP=on`` this will also be enabled. + +For a debug build, set ``GPU_DEBUG`` to be ``yes``. + .. versionadded:: 3Aug2022 Using the CHIP-SPV implementation of HIP is supported. It allows one to @@ -250,80 +296,6 @@ option in preparations to run on Aurora system at Argonne. cmake -D PKG_GPU=on -D GPU_API=HIP .. make -j 4 -Traditional make -^^^^^^^^^^^^^^^^ - -Before building LAMMPS, you must build the GPU library in ``lib/gpu``\ . -You can do this manually if you prefer; follow the instructions in -``lib/gpu/README``. Note that the GPU library uses MPI calls, so you -must use the same MPI library (or the STUBS library) settings as the -main LAMMPS code. This also applies to the ``-DLAMMPS_BIGBIG`` or -``-DLAMMPS_SMALLBIG`` settings in whichever Makefile you use. - -You can also build the library in one step from the ``lammps/src`` dir, -using a command like these, which simply invokes the ``lib/gpu/Install.py`` -script with the specified args: - -.. 
code-block:: bash - - # print help message - make lib-gpu - - # build GPU library with default Makefile.linux - make lib-gpu args="-b" - - # create new Makefile.xk7.single, altered for single-precision - make lib-gpu args="-m xk7 -p single -o xk7.single" - - # build GPU library with mixed precision and P100 using other settings in Makefile.mpi - make lib-gpu args="-m mpi -a sm_60 -p mixed -b" - -Note that this procedure starts with a Makefile.machine in lib/gpu, as -specified by the ``-m`` switch. For your convenience, machine makefiles -for "mpi" and "serial" are provided, which have the same settings as -the corresponding machine makefiles in the main LAMMPS source -folder. In addition you can alter 4 important settings in the -Makefile.machine you start from via the corresponding ``-c``, ``-a``, ``-p``, ``-e`` -switches (as in the examples above), and also save a copy of the new -Makefile if desired: - -* ``CUDA_HOME`` = where NVIDIA CUDA software is installed on your system -* ``CUDA_ARCH`` = ``sm_XX``, what GPU hardware you have, same as CMake ``GPU_ARCH`` above -* ``CUDA_PRECISION`` = precision (double, mixed, single) -* ``EXTRAMAKE`` = which ``Makefile.lammps.*`` file to copy to Makefile.lammps - -The file ``Makefile.cuda`` is set up to include support for multiple -GPU architectures as supported by the CUDA toolkit in use. This is done -through using the ``--gencode`` flag, which can be used multiple times and -thus support all GPU architectures supported by your CUDA compiler. - -To enable GPU binning via CUDA performance primitives set the Makefile variable -``CUDPP_OPT = -DUSE_CUDPP -Icudpp_mini``. This should **not** be used with -most modern GPUs. - -To support the CUDA multiprocessor server you can set the define -``-DCUDA_MPS_SUPPORT``. Please note that in this case you must **not** use -the CUDA performance primitives and thus set the variable ``CUDPP_OPT`` -to empty. - -The GPU library has some multi-thread support using OpenMP. 
You need to add -the compiler flag that enables OpenMP to the ``CUDR_OPTS`` Makefile variable. - -If the library build is successful, 3 files should be created: -``lib/gpu/libgpu.a``\ , ``lib/gpu/nvc_get_devices``\ , and -``lib/gpu/Makefile.lammps``\ . The latter has settings that enable LAMMPS -to link with CUDA libraries. If the settings in ``Makefile.lammps`` for -your machine are not correct, the LAMMPS build will fail, and -``lib/gpu/Makefile.lammps`` may need to be edited. - -.. note:: - - If you re-build the GPU library in ``lib/gpu``, you should always - uninstall the GPU package in ``lammps/src``, then re-install it and - re-build LAMMPS. This is because the compilation of files in the GPU - package uses the library settings from the ``lib/gpu/Makefile.machine`` - used to build the GPU library. - ---------- .. _kim: @@ -388,57 +360,10 @@ minutes to hours) to build. Of course you only need to do that once.) .. tab:: Traditional make - You can download and build the KIM library manually if you prefer; - follow the instructions in ``lib/kim/README``. You can also do - this in one step from the lammps/src directory, using a command like - these, which simply invokes the ``lib/kim/Install.py`` script with - the specified args. - - .. code-block:: bash - - # print help message - make lib-kim - - # (re-)install KIM API lib with only example models - make lib-kim args="-b" - - # ditto plus one model - make lib-kim args="-b -a Glue_Ercolessi_Adams_Al__MO_324507536345_001" - - # install KIM API lib with all models - make lib-kim args="-b -a everything" - - # add one model or model driver - make lib-kim args="-n -a EAM_Dynamo_Ackland_W__MO_141627196590_002" - - # use an existing KIM API installation at the provided location - make lib-kim args="-p " - - # ditto but add one model or driver - make lib-kim args="-p -a EAM_Dynamo_Ackland_W__MO_141627196590_002" - - When using the ``-b`` option, the KIM library is built using its native - cmake build system. 
The ``lib/kim/Install.py`` script supports a - ``CMAKE`` environment variable if the cmake executable is named other - than ``cmake`` on your system. Additional environment variables may be - set with the ``make`` command for use by cmake. For example, to use the - ``cmake3`` executable and tell it to use the GNU version 11 compilers - called ``g++-11``, ``gcc-11`` and ``gfortran-11`` to build KIM, one - could use the following command. - - .. code-block:: bash - - # (re-)install KIM API lib using cmake3 and gnu v11 compilers - # with only example models - CMAKE=cmake3 CXX=g++-11 CC=gcc-11 FC=gfortran-11 make lib-kim args="-b" + .. versionchanged:: 10Sep2025 - Settings for debugging OpenKIM web queries discussed below need to - be applied by adding them to the ``LMP_INC`` variable through - editing the ``Makefile.machine`` you are using. For example: - - .. code-block:: make - - LMP_INC = -DLMP_NO_SSL_CHECK + The KIM package no longer supports the traditional make build. + You need to build LAMMPS with CMake. Debugging OpenKIM web queries in LAMMPS ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ @@ -521,7 +446,7 @@ to have an executable that will run on this and newer architectures. the new hardware. This is, however, only supported for GPUs of the **same** major hardware version and different minor hardware versions, e.g. 5.0 and 5.2 but not 5.2 and 6.0. LAMMPS will abort with an - error message indicating a mismatch, if that happens. + error message indicating a mismatch, if the major version differs. The settings discussed below have been tested with LAMMPS and are confirmed to work. Kokkos is an active project with ongoing improvements @@ -614,6 +539,9 @@ They must be specified in uppercase. * - ZEN4 - HOST - AMD Zen4 architecture + * - ZEN5 + - HOST + - AMD Zen5 architecture * - RISCV_SG2042 - HOST - SG2042 (RISC-V) CPUs @@ -668,6 +596,12 @@ They must be specified in uppercase.
* - HOPPER90 - GPU - NVIDIA Hopper generation CC 9.0 + * - BLACKWELL100 + - GPU + - NVIDIA Blackwell generation CC 10.0 + * - BLACKWELL120 + - GPU + - NVIDIA Blackwell generation CC 12.0 * - AMD_GFX906 - GPU - AMD GPU MI50/60 @@ -716,8 +650,11 @@ They must be specified in uppercase. * - INTEL_PVC - GPU - Intel GPU Ponte Vecchio + * - INTEL_DG2 + - GPU + - Intel GPU DG2 -This list was last updated for version 4.6.0 of the Kokkos library. +This list was last updated for version 4.6.2 of the Kokkos library. .. tabs:: @@ -781,24 +718,30 @@ This list was last updated for version 4.6.0 of the Kokkos library. This will enable FFTs on the GPU using the oneMKL library. - To simplify compilation, six preset files are included in the + To simplify compilation, seven preset files are included in the ``cmake/presets`` folder, ``kokkos-serial.cmake``, ``kokkos-openmp.cmake``, ``kokkos-cuda.cmake``, - ``kokkos-hip.cmake``, ``kokkos-sycl-nvidia.cmake``, and - ``kokkos-sycl-intel.cmake``. They will enable the KOKKOS - package and enable some hardware choices. For GPU support those - preset files must be customized to match the hardware used. So - to compile with CUDA device parallelization with some common - packages enabled, you can do the following: + ``kokkos-cuda-nowrapper.cmake``, ``kokkos-hip.cmake``, + ``kokkos-sycl-nvidia.cmake``, and ``kokkos-sycl-intel.cmake``. + They will enable the KOKKOS package and enable some hardware + choices. For GPU support those preset files may need to be + customized to match the hardware used. For some platforms, + e.g. CUDA, the Kokkos library will try to auto-detect a suitable + configuration. So to compile with CUDA device parallelization + with some common packages enabled, you can do the following: .. code-block:: bash mkdir build-kokkos-cuda cd build-kokkos-cuda cmake -C ../cmake/presets/basic.cmake \ - -C ../cmake/presets/kokkos-cuda.cmake ../cmake + -C ../cmake/presets/kokkos-cuda-nowrapper.cmake ../cmake cmake --build . 
+ The ``kokkos-openmp.cmake`` preset can be combined with any of the + others, but it is not possible to combine multiple GPU + acceleration settings (CUDA, HIP, SYCL) into a single executable. + .. tab:: Basic traditional make settings: Choose which hardware to support in ``Makefile.machine`` via @@ -911,9 +854,27 @@ transparently use RAM on the host to supplement the memory used on the GPU (with some performance penalty) and thus enables running larger problems that would otherwise not fit into the RAM on the GPU. -Please note, that the LAMMPS KOKKOS package must **always** be compiled -with the *enable_lambda* option when using GPUs. The CMake configuration -will thus always enable it. +The CMake option ``-D KOKKOS_PREC=value`` sets the floating point +precision of the calculations, where ``value`` can be one of: +``double`` (FP64, default) or ``mixed`` (FP64 for accumulation of +forces, energy, and virial, FP32 otherwise) or ``single`` (FP32). +Similarly the makefile settings ``-DLMP_KOKKOS_DOUBLE_DOUBLE`` +(default), ``-DLMP_KOKKOS_SINGLE_DOUBLE``, and +``-DLMP_KOKKOS_SINGLE_SINGLE`` set double, mixed, single precision +respectively. When using reduced precision (single or mixed), the +simulation should be carefully checked to ensure it is stable and that +energy is acceptably conserved. + +The CMake option ``-D KOKKOS_LAYOUT=value`` sets the array layout of +Kokkos views (e.g. forces, velocities, etc.) on GPUs, where ``value`` +can be one of: ``legacy`` (mostly LayoutRight, default) or ``default`` +(mostly LayoutLeft). Similarly the makefile settings +``-DLMP_KOKKOS_LAYOUT_LEGACY`` (default) and +``-DLMP_KOKKOS_LAYOUT_DEFAULT`` set legacy or default layouts +respectively. Using the default layout (LayoutLeft) can give speedup +on GPUs for some models, but a slowdown for others. LayoutRight is +always used for positions on GPUs since it has been found to be +faster, and when compiling exclusively for CPUs. 
---------- @@ -940,34 +901,10 @@ included in the LAMMPS source distribution in the ``lib/lepton`` folder. .. tab:: Traditional make - Before building LAMMPS, one must build the Lepton library in lib/lepton. - - This can be done manually in the same folder by using or adapting - one of the provided Makefiles: for example, ``Makefile.serial`` for - the GNU C++ compiler, or ``Makefile.mpi`` for the MPI compiler wrapper. - The Lepton library is written in C++-11 and thus the C++ compiler - may need to be instructed to enable support for that. - - In general, it is safer to use build setting consistent with the - rest of LAMMPS. This is best carried out from the LAMMPS src - directory using a command like these, which simply invokes the - ``lib/lepton/Install.py`` script with the specified args: - - .. code-block:: bash + .. versionchanged:: 10Sep2025 - # print help message - make lib-lepton - - # build with GNU g++ compiler (settings as with "make serial") - make lib-lepton args="-m serial" - - # build with default MPI compiler (settings as with "make mpi") - make lib-lepton args="-m mpi" - - The "machine" argument of the ``-m`` flag is used to find a - Makefile.machine to use as build recipe. - - The build should produce a ``build`` folder and the library ``lib/lepton/liblmplepton.a`` + The LEPTON package no longer supports the traditional make build. + You need to build LAMMPS with CMake. ---------- @@ -997,27 +934,10 @@ Eigen3 is a template library, so you do not need to build it. .. tab:: Traditional make - You can download the Eigen3 library manually if you prefer; follow - the instructions in ``lib/machdyn/README``. You can also do it in one - step from the ``lammps/src`` dir, using a command like these, - which simply invokes the ``lib/machdyn/Install.py`` script with the - specified args: - - .. code-block:: bash - - # print help message - make lib-machdyn + ..
versionchanged:: 10Sep2025 - # download to lib/machdyn/eigen3 - make lib-machdyn args="-b" - - # use existing Eigen installation in /usr/include/eigen3 - make lib-machdyn args="-p /usr/include/eigen3" - - Note that a symbolic (soft) link named ``includelink`` is created - in ``lib/machdyn`` to point to the Eigen dir. When LAMMPS builds it - will use this link. You should not need to edit the - ``lib/machdyn/Makefile.lammps`` file. + The MACHDYN package no longer supports the traditional make build. + You need to build LAMMPS with CMake. ---------- @@ -1091,51 +1011,6 @@ OPT package ---------- -.. _poems: -POEMS package ------------------------- -.. tabs:: - - .. tab:: CMake build - - No additional settings are needed besides ``-D PKG_OPT=yes`` - - .. tab:: Traditional make - - Before building LAMMPS, you must build the POEMS library in - ``lib/poems``\ . You can do this manually if you prefer; follow - the instructions in ``lib/poems/README``\ . You can also do it in - one step from the ``lammps/src`` dir, using a command like these, - which simply invokes the ``lib/poems/Install.py`` script with the - specified args: - - .. code-block:: bash - - # print help message - make lib-poems - - # build with GNU g++ compiler (settings as with "make serial") - make lib-poems args="-m serial" - - # build with default MPI C++ compiler (settings as with "make mpi") - make lib-poems args="-m mpi" - - # build with Intel Classic compiler - make lib-poems args="-m icc" - - The build should produce two files: ``lib/poems/libpoems.a`` and - ``lib/poems/Makefile.lammps``. The latter is copied from an - existing ``Makefile.lammps.*`` and has settings needed to build - LAMMPS with the POEMS library (though typically the settings are - just blank). If necessary, you can edit/create a new - ``lib/poems/Makefile.machine`` file for your system, which should - define an ``EXTRAMAKE`` variable to specify a corresponding - ``Makefile.lammps.machine`` file. - ----------- - ..
_python: PYTHON package @@ -1164,10 +1039,10 @@ for additional details. .. tab:: Traditional make - The build uses the ``lib/python/Makefile.lammps`` file in the - compile/link process to find Python. You should only need to - create a new ``Makefile.lammps.*`` file (and copy it to - ``Makefile.lammps``) if the LAMMPS build fails. + .. versionchanged:: 10Sep2025 + + The PYTHON package no longer supports the the traditional make build. + You need to build LAMMPS with CMake. ---------- @@ -1203,31 +1078,10 @@ binary package provided by your operating system. .. tab:: Traditional make - You can download and build the Voro++ library manually if you - prefer; follow the instructions in ``lib/voronoi/README``. You - can also do it in one step from the ``lammps/src`` dir, using a - command like these, which simply invokes the - ``lib/voronoi/Install.py`` script with the specified args: - - .. code-block:: bash - - # print help message - make lib-voronoi - - # download and build the default version in lib/voronoi/voro++- - make lib-voronoi args="-b" - - # use existing Voro++ installation in $HOME/voro++ - make lib-voronoi args="-p $HOME/voro++" + .. versionchanged:: 10Sep2025 - # download and build the 0.4.6 version in lib/voronoi/voro++-0.4.6 - make lib-voronoi args="-b -v voro++0.4.6" - - Note that two symbolic (soft) links, ``includelink`` and - ``liblink``, are created in lib/voronoi to point to the Voro++ - source dir. When LAMMPS builds in ``src`` it will use these - links. You should not need to edit the - ``lib/voronoi/Makefile.lammps`` file. + The VORONOI package no longer supports the the traditional make build. + You need to build LAMMPS with CMake. ---------- @@ -1272,136 +1126,32 @@ systems. ---------- -.. _atc: +.. _apip: -ATC package -------------------------------- +APIP package +----------------------------- -The ATC package requires the MANYBODY package also be installed. +The APIP package depends on the library of the +:ref:`ML-PACE ` package. 
+The code for the library can be found +at: `https://github.com/ICAMS/lammps-user-pace/ <https://github.com/ICAMS/lammps-user-pace/>`_ .. tabs:: .. tab:: CMake build - No additional settings are needed besides ``-D PKG_ATC=yes`` - and ``-D PKG_MANYBODY=yes``. - - .. tab:: Traditional make - - Before building LAMMPS, you must build the ATC library in - ``lib/atc``. You can do this manually if you prefer; follow the - instructions in ``lib/atc/README``. You can also do it in one - step from the ``lammps/src`` dir, using a command like these, - which simply invokes the ``lib/atc/Install.py`` script with the - specified args: - - .. code-block:: bash - - # print help message - make lib-atc - - # build with GNU g++ compiler and MPI STUBS (settings as with "make serial") - make lib-atc args="-m serial" - - # build with default MPI compiler (settings as with "make mpi") - make lib-atc args="-m mpi" - - # build with Intel Classic compiler - make lib-atc args="-m icc" - - The build should produce two files: ``lib/atc/libatc.a`` and - ``lib/atc/Makefile.lammps``. The latter is copied from an - existing ``Makefile.lammps.*`` and has settings needed to build - LAMMPS with the ATC library. If necessary, you can edit/create a - new ``lib/atc/Makefile.machine`` file for your system, which - should define an ``EXTRAMAKE`` variable to specify a corresponding - ``Makefile.lammps.`` file. - - Note that the Makefile.lammps file has settings for the BLAS and - LAPACK linear algebra libraries. As explained in - ``lib/atc/README`` these can either exist on your system, or you - can use the files provided in ``lib/linalg``. In the latter case - you also need to build the library in ``lib/linalg`` with a - command like these: - - ..
code-block:: bash - - # print help message - make lib-linalg - - # build with GNU C++ compiler (settings as with "make serial") - make lib-linalg args="-m serial" - - # build with default MPI C++ compiler (settings as with "make mpi") - make lib-linalg args="-m mpi" + No additional settings are needed besides ``-D PKG_APIP=yes`` + and ``-D PKG_ML-PACE=yes``. + One can use a local version of the ML-PACE library instead of + automatically downloading the library as described :ref:`here `. - # build with GNU Fortran compiler - make lib-linalg args="-m g++" - ----------- - -.. _awpmd: - -AWPMD package -------------- - -.. tabs:: - - .. tab:: CMake build - - No additional settings are needed besides ``-D PKG_AQPMD=yes``. .. tab:: Traditional make - Before building LAMMPS, you must build the AWPMD library in - ``lib/awpmd``. You can do this manually if you prefer; follow the - instructions in ``lib/awpmd/README``. You can also do it in one - step from the ``lammps/src`` dir, using a command like these, - which simply invokes the ``lib/awpmd/Install.py`` script with the - specified args: - - .. code-block:: bash - - # print help message - make lib-awpmd - - # build with GNU g++ compiler and MPI STUBS (settings as with "make serial") - make lib-awpmd args="-m serial" - - # build with default MPI compiler (settings as with "make mpi") - make lib-awpmd args="-m mpi" + .. versionchanged:: 10Sep2025 - # build with Intel Classic compiler - make lib-awpmd args="-m icc" - - The build should produce two files: ``lib/awpmd/libawpmd.a`` and - ``lib/awpmd/Makefile.lammps``. The latter is copied from an - existing ``Makefile.lammps.*`` and has settings needed to build - LAMMPS with the AWPMD library. If necessary, you can edit/create - a new ``lib/awpmd/Makefile.machine`` file for your system, which - should define an ``EXTRAMAKE`` variable to specify a corresponding - ``Makefile.lammps.`` file. 
- - Note that the ``Makefile.lammps`` file has settings for the BLAS - and LAPACK linear algebra libraries. As explained in - ``lib/awpmd/README`` these can either exist on your system, or you - can use the files provided in ``lib/linalg``. In the latter case - you also need to build the library in ``lib/linalg`` with a - command like these: - - .. code-block:: bash - - # print help message - make lib-linalg - - # build with GNU C++ compiler (settings as with "make serial") - make lib-linalg args="-m serial" - - # build with default MPI C++ compiler (settings as with "make mpi") - make lib-linalg args="-m mpi" - - # build with GNU C++ compiler - make lib-linalg args="-m g++" + The APIP package no longer supports the traditional make + build. You need to build LAMMPS with CMake. ---------- @@ -1419,60 +1169,21 @@ module included in the LAMMPS source distribution. .. tab:: CMake build This is the recommended build procedure for using Colvars in - LAMMPS. No additional settings are normally needed besides - ``-D PKG_COLVARS=yes``. - - .. tab:: Traditional make - - As with other libraries distributed with LAMMPS, the Colvars library - needs to be built before building the LAMMPS program with the COLVARS - package enabled. - - From the LAMMPS ``src`` directory, this is most easily and safely done - via one of the following commands, which implicitly rely on the - ``lib/colvars/Install.py`` script with optional arguments: + LAMMPS. No additional settings are normally needed besides ``-D + PKG_COLVARS=yes``. The following CMake variables are available. ..
code-block:: bash - # print help message - make lib-colvars - - # build with GNU g++ compiler (settings as with "make serial") - make lib-colvars args="-m serial" + -D PKG_COLVARS=yes # enable the package itself + -D COLVARS_LEPTON=yes # use the Lepton library for custom expressions (on by default) + -D COLVARS_DEBUG=no # enable debugging messages (verbose, off by default) - # build with default MPI compiler (settings as with "make mpi") - make lib-colvars args="-m mpi" - - # build with GNU g++ compiler and colvars debugging enabled - make lib-colvars args="-m g++-debug" - - The "machine" argument of the "-m" flag is used to find a - ``Makefile.machine`` file to use as build recipe. If such recipe does - not already exist in ``lib/colvars``, suitable settings will be - auto-generated consistent with those used in the core LAMMPS makefiles. - - - .. versionchanged:: 8Feb2023 - - Please note that Colvars uses the Lepton library, which is now - included with the LEPTON package; if you use anything other than - the ``make lib-colvars`` command, please make sure to :ref:`build - Lepton beforehand `. - - Optional flags may be specified as environment variables: - - .. code-block:: bash - - # Build with debug code (much slower) - COLVARS_DEBUG=yes make lib-colvars args="-m machine" + .. tab:: Traditional make - # Build without Lepton (included otherwise) - COLVARS_LEPTON=no make lib-colvars args="-m machine" + .. versionchanged:: 10Sep2025 - The build should produce two files: the library - ``lib/colvars/libcolvars.a`` and the specification file - ``lib/colvars/Makefile.lammps``. The latter is auto-generated, - and normally does not need to be edited. + The COLVARS package no longer supports the traditional make build. + You need to build LAMMPS with CMake. ---------- @@ -1505,47 +1216,8 @@ This package depends on the KSPACE package. ..
tab:: Traditional make - Before building LAMMPS, you must configure the ELECTRODE support - libraries and settings in ``lib/electrode``. You can do this - manually, if you prefer, or do it in one step from the - ``lammps/src`` dir, using a command like these, which simply - invokes the ``lib/electrode/Install.py`` script with the specified - args: - - .. code-block:: bash - - # print help message - make lib-electrode - - # build with GNU g++ compiler and MPI STUBS (settings as with "make serial") - make lib-electrode args="-m serial" - - # build with default MPI compiler (settings as with "make mpi") - make lib-electrode args="-m mpi" - - Note that the ``Makefile.lammps`` file has settings for the BLAS - and LAPACK linear algebra libraries. These can either exist on - your system, or you can use the files provided in ``lib/linalg``. - In the latter case you also need to build the library in - ``lib/linalg`` with a command like these: - - .. code-block:: bash - - # print help message - make lib-linalg - - # build with GNU C++ compiler (settings as with "make serial") - make lib-linalg args="-m serial" - - # build with default MPI C++ compiler (settings as with "make mpi") - make lib-linalg args="-m mpi" - - # build with GNU C++ compiler - make lib-linalg args="-m g++" - - The package itself is activated with ``make yes-KSPACE`` and - ``make yes-ELECTRODE`` + The ELECTRODE package no longer supports the traditional make + build. You need to build LAMMPS with CMake. ---------- @@ -1581,19 +1253,10 @@ folder and then load this plugin at runtime with the :doc:`plugin command `_. The PLUMED package has been tested to work with Plumed versions -2.4.x, 2.5.x, and 2.6.x and will error out, when trying to run calculations +2.4.x to 2.9.x and will error out when trying to run calculations with a different version of the Plumed kernel.
PLUMED can be linked into MD codes in three different modes: static, @@ -1794,61 +1417,10 @@ folder and then load this plugin at runtime with the :doc:`plugin command `` file. + The H5MD package no longer supports the traditional make + build. You need to build LAMMPS with CMake. ---------- @@ -1934,29 +1491,10 @@ details please see ``lib/hdnnp/README`` and the `n2p2 build documentation .. tab:: Traditional make - You can download and build the *n2p2* library manually if you prefer; - follow the instructions in ``lib/hdnnp/README``\ . You can also do it in - one step from the ``lammps/src`` dir, using a command like these, which - simply invokes the ``lib/hdnnp/Install.py`` script with the specified args: - - .. code-block:: bash - - # print help message - make lib-hdnnp - - # download and build in lib/hdnnp/n2p2-... - make lib-hdnnp args="-b" - - # download and build specific version - make lib-hdnnp args="-b -v 2.1.4" + .. versionchanged:: 10Sep2025 - # use the existing n2p2 installation in /usr/local/n2p2 - make lib-hdnnp args="-p /usr/local/n2p2" - - Note that three symbolic (soft) links, ``includelink``, ``liblink`` and - ``Makefile.lammps``, will be created in ``lib/hdnnp`` to point to - ``n2p2/include``, ``n2p2/lib`` and ``n2p2/lib/Makefile.lammps-extra``, - respectively. When LAMMPS is built in ``src`` it will use these links. + The ML-HDNNP package no longer supports the traditional make + build. You need to build LAMMPS with CMake. ---------- @@ -1987,7 +1525,7 @@ code when using features from the INTEL package. .. code-block:: bash -D INTEL_ARCH=value # value = cpu (default) or knl - -D INTEL_LRT_MODE=value # value = threads, none, or c++11 + -D INTEL_LRT_MODE=value # value = threads, none, or c++17 .. tab:: Traditional make @@ -2018,8 +1556,8 @@ In Long-range thread mode (LRT) a modified verlet style is used, that operates the Kspace calculation in a separate thread concurrently to other calculations.
This has to be enabled in the :doc:`package intel ` command at runtime. With the setting "threads" it used the -pthreads library, while "c++11" will use the built-in thread support -of C++11 compilers. The option "none" skips compilation of this +pthreads library, while "c++17" will use the built-in thread support +of C++17 compilers. The option "none" skips compilation of this feature. The default is to use "threads" if pthreads is available and otherwise "none". @@ -2046,17 +1584,10 @@ MDI package .. tab:: Traditional make - Before building LAMMPS, you must build the MDI Library in - ``lib/mdi``\ . You can do this by executing a command like one - of the following from the ``lib/mdi`` directory: - - .. code-block:: bash - - python Install.py -m gcc # build using gcc compiler - python Install.py -m icc # build using icc compiler + .. versionchanged:: 10Sep2025 - The build should produce two files: ``lib/mdi/includelink/mdi.h`` - and ``lib/mdi/liblink/libmdi.so``\ . + The MDI package no longer supports the traditional make build. + You need to build LAMMPS with CMake. ---------- @@ -2116,17 +1647,10 @@ MOLFILE package .. tab:: Traditional make - The ``lib/molfile/Makefile.lammps`` file has a setting for a - dynamic loading library libdl.a that is typically present on all - systems. It is required for LAMMPS to link with this package. If - the setting is not valid for your system, you will need to edit - the Makefile.lammps file. See ``lib/molfile/README`` and - ``lib/molfile/Makefile.lammps`` for details. It is also possible - to configure a different folder with the VMD molfile plugin header - files. LAMMPS ships with a couple of default headers, but these - are not compatible with all VMD versions, so it is often best to - change this setting to the location of the same include files of - the local VMD installation in use. + .. versionchanged:: 10Sep2025 + + The MOLFILE package no longer supports the traditional make + build.
You need to build LAMMPS with CMake. ---------- @@ -2153,11 +1677,10 @@ on your system. .. tab:: Traditional make - The ``lib/netcdf/Makefile.lammps`` file has settings for NetCDF - include and library files which LAMMPS needs to build with this - package. If the settings are not valid for your system, you will - need to edit the ``Makefile.lammps`` file. See - ``lib/netcdf/README`` for details. + .. versionchanged:: 10Sep2025 + + The NETCDF package no longer supports the traditional make build. + You need to build LAMMPS with CMake. ---------- @@ -2235,7 +1758,7 @@ verified to work in February 2020 with Quantum Espresso versions 6.3 to When using CMake, building a LAMMPS library is required and it is recommended to build a shared library, since any libraries built from the sources in the *lib* folder (including the essential - libqmmm.a) are not included in the static LAMMPS library and + libqmmm.a) are not included in the static LAMMPS library and are (currently) not installed, while their code is included in the shared LAMMPS library. Thus a typical command to configure building LAMMPS for QMMM would be: @@ -2253,42 +1776,10 @@ verified to work in February 2020 with Quantum Espresso versions 6.3 to .. tab:: Traditional make - Before building LAMMPS, you must build the QMMM library in - ``lib/qmmm``. You can do this manually if you prefer; follow the - first two steps explained in ``lib/qmmm/README``. You can also do - it in one step from the ``lammps/src`` dir, using a command like - these, which simply invokes the ``lib/qmmm/Install.py`` script with - the specified args: + .. versionchanged:: 10Sep2025 - ..
code-block:: bash - - # print help message - make lib-qmmm - - # build with GNU Fortran compiler (settings as in "make serial") - make lib-qmmm args="-m serial" - - # build with default MPI compiler (settings as in "make mpi") - make lib-qmmm args="-m mpi" - - # build with GNU Fortran compiler - make lib-qmmm args="-m gfortran" - - The build should produce two files: ``lib/qmmm/libqmmm.a`` and - ``lib/qmmm/Makefile.lammps``. The latter is copied from an - existing ``Makefile.lammps.*`` and has settings needed to build - LAMMPS with the QMMM library (though typically the settings are - just blank). If necessary, you can edit/create a new - ``lib/qmmm/Makefile.`` file for your system, which should - define an ``EXTRAMAKE`` variable to specify a corresponding - ``Makefile.lammps.`` file. - - You can then install QMMM package and build LAMMPS in the usual - manner. After completing the LAMMPS build and compiling Quantum - ESPRESSO with external library support (via ``make couple``), go - back to the ``lib/qmmm`` folder and follow the instructions in the - README file to build the combined LAMMPS/QE QM/MM executable - (``pwqmmm.x``) in the ``lib/qmmm`` folder. + The QMMM package no longer supports the traditional make build. + You need to build LAMMPS with CMake. ---------- @@ -2322,13 +1813,10 @@ This package depends on the BPM package. .. tab:: Traditional make - The RHEO package requires LAPACK (and BLAS) which can be either - a system provided library or the bundled "linalg" library. This - is a subset of LAPACK translated to C++. For that, one of the - provided ``Makefile.lammps.`` files needs to be copied - to ``Makefile.lammps`` and edited as needed. The default file - uses the bundled "linalg" library, which can be built by - ``make lib-linalg args='-m serial'`` in the ``src`` folder. + .. versionchanged:: 10Sep2025 + + The RHEO package no longer supports the traditional make build. + You need to build LAMMPS with CMake.
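Since the RHEO package depends on the BPM package as noted above, a minimal CMake configuration enabling both could look like this (a sketch; build directory name is arbitrary):

```bash
mkdir build-rheo
cd build-rheo
cmake -D PKG_RHEO=yes -D PKG_BPM=yes ../cmake
cmake --build .
```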
---------- @@ -2338,7 +1826,7 @@ SCAFACOS package ----------------------------------------- To build with this package, you must download and build the -`ScaFaCoS Coulomb solver library `_ +`ScaFaCoS Coulomb solver library `_ .. tabs:: @@ -2360,27 +1848,10 @@ To build with this package, you must download and build the .. tab:: Traditional make - You can download and build the ScaFaCoS library manually if you - prefer; follow the instructions in ``lib/scafacos/README``. You - can also do it in one step from the ``lammps/src`` dir, using a - command like these, which simply invokes the - ``lib/scafacos/Install.py`` script with the specified args: + .. versionchanged:: 10Sep2025 - .. code-block:: bash - - # print help message - make lib-scafacos - - # download and build in lib/scafacos/scafacos- - make lib-scafacos args="-b" - - # use existing ScaFaCoS installation in $HOME/scafacos - make lib-scafacos args="-p $HOME/scafacos - - Note that two symbolic (soft) links, ``includelink`` and ``liblink``, are - created in ``lib/scafacos`` to point to the ScaFaCoS src dir. When LAMMPS - builds in src it will use these links. You should not need to edit - the ``lib/scafacos/Makefile.lammps`` file. + The SCAFACOS package no longer supports the traditional make build. + You need to build LAMMPS with CMake. ---------- @@ -2406,10 +1877,7 @@ your system. .. tab:: Traditional make - The ``lib/vtk/Makefile.lammps`` file has settings for accessing - VTK files and its library, which LAMMPS needs to build with this - package. If the settings are not valid for your system, check if - one of the other ``lib/vtk/Makefile.lammps.*`` files is compatible - and copy it to Makefile.lammps. If none of the provided files - work, you will need to edit the ``Makefile.lammps`` file. See - ``lib/vtk/README`` for details. + .. versionchanged:: 10Sep2025 + + The VTK package no longer supports the traditional make build. + You need to build LAMMPS with CMake.
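For packages that now require CMake, several can be enabled in a single configure step; for example (an illustrative selection, assuming the required NetCDF and VTK development files are installed on the system):

```bash
mkdir build
cd build
cmake -D PKG_VORONOI=yes -D PKG_NETCDF=yes -D PKG_VTK=yes ../cmake
cmake --build .
```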
diff --git a/doc/src/Build_link.rst b/doc/src/Build_link.rst index 255a451b29..fd42aa92f4 100644 --- a/doc/src/Build_link.rst +++ b/doc/src/Build_link.rst @@ -93,47 +93,10 @@ executable, that are also required to link the LAMMPS executable. .. tab:: Traditional make - After you have compiled a static LAMMPS library using the - conventional build system for example with "make mode=static - serial". And you also have installed the ``POEMS`` package after - building its bundled library in ``lib/poems``. Then the commands - to build and link the coupled executable change to: + .. versionchanged:: 10Sep2025 - .. code-block:: bash - - gcc -c -O -I${HOME}/lammps/src -caller.c - g++ -o caller caller.o -L${HOME}/lammps/lib/poems \ - -L${HOME}/lammps/src/STUBS -L${HOME}/lammps/src \ - -llammps_serial -lpoems -lmpi_stubs - - Note, that you need to link with ``g++`` instead of ``gcc`` even - if you have written your code in C, since LAMMPS itself is C++ - code. You can display the currently applied settings for building - LAMMPS for the "serial" machine target by using the command: - - .. code-block:: bash - - make mode=print serial - - Which should output something like: - - .. code-block:: bash - - # Compiler: - CXX=g++ - # Linker: - LD=g++ - # Compilation: - CXXFLAGS=-g -O3 -DLAMMPS_GZIP -DLAMMPS_MEMALIGN=64 -I${HOME}/compile/lammps/lib/poems -I${HOME}/compile/lammps/src/STUBS - # Linking: - LDFLAGS=-g -O - # Libraries: - LDLIBS=-L${HOME}/compile/lammps/src -llammps_serial -L${HOME}/compile/lammps/lib/poems -L${HOME}/compile/lammps/src/STUBS -lpoems -lmpi_stubs - - From this you can gather the necessary paths and flags. With - makefiles for other *machine* configurations you need to do the - equivalent and replace "serial" with the corresponding "machine" - name of the makefile. + The traditional make build process no longer supports building + packages that require extra build steps in the ``lib`` folder. 
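With a CMake build, compiling and linking a C caller against the LAMMPS library can be sketched as follows; the paths and the library name are assumptions that depend on your configuration, and you still need to link with ``g++`` since LAMMPS itself is C++ code:

```bash
gcc -c -O -I${HOME}/lammps/src caller.c
g++ -o caller caller.o -L${HOME}/lammps/build -llammps
```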
Link with LAMMPS as a shared library ------------------------------------ diff --git a/doc/src/Build_make.rst b/doc/src/Build_make.rst index 477c1c6e34..34d696cafb 100644 --- a/doc/src/Build_make.rst +++ b/doc/src/Build_make.rst @@ -24,9 +24,9 @@ with :doc:`CMake `. The makefiles of the traditional make based build process and the scripts they are calling expect a few additional tools to be available and functioning. - * A working C/C++ compiler toolchain supporting the C++11 standard; on + * A working C/C++ compiler toolchain supporting the C++17 standard; on Linux, these are often the GNU compilers. Some older compiler versions - require adding flags like ``-std=c++11`` to enable the C++11 mode. + require adding flags like ``-std=c++17`` to enable C++17 mode. * A Bourne shell compatible "Unix" shell program (frequently this is ``bash``) * A few shell utilities: ``ls``, ``mv``, ``ln``, ``rm``, ``grep``, ``sed``, ``tr``, ``cat``, ``touch``, ``diff``, ``dirname`` * Python (optional, required for ``make lib-`` in the ``src`` diff --git a/doc/src/Build_manual.rst b/doc/src/Build_manual.rst index 2fc29f584b..fac2c19949 100644 --- a/doc/src/Build_manual.rst +++ b/doc/src/Build_manual.rst @@ -57,6 +57,8 @@ Python interpreter version 3.8 or later, the ``doxygen`` tools and internet access to download additional files and tools are required. This download is usually only required once or after the documentation folder is returned to a pristine state with ``make clean-all``. +You can also upgrade those packages to their latest available versions +with ``make upgrade``. For the documentation build a python virtual environment is set up in the folder ``doc/docenv`` and various python packages are installed into @@ -80,13 +82,18 @@ folder. 
The following ``make`` commands are available: make fasthtml # generate approximate HTML in fasthtml dir using pandoc + make upgrade # upgrade sphinx, extensions, and dependencies to latest supported versions make clean # remove intermediate RST files created by HTML build make clean-all # remove entire build folder and any cached data make anchor_check # check for duplicate anchor labels make style_check # check for complete and consistent style lists make package_check # check for complete and consistent package lists - make link_check # check for broken or outdated URLs + make char_check # check for non-ASCII characters + make role_check # check for misformatted role keywords + + make link_check # check for broken external URLs make spelling # spell-check the manual ---------- @@ -300,7 +307,7 @@ be multiple tests run automatically: In addition, there is the option to run a spellcheck on the entire manual with ``make spelling``. This requires `a library called enchant -`_. To avoid printing out *false +`_. To avoid printing out *false positives* (e.g. keywords, names, abbreviations) those can be added to the file ``lammps/doc/utils/sphinx-config/false_positives.txt``. diff --git a/doc/src/Build_package.rst index c4c4889806..3b707c6725 100644 --- a/doc/src/Build_package.rst +++ b/doc/src/Build_package.rst @@ -36,8 +36,7 @@ packages: :columns: 6 * :ref:`ADIOS ` - * :ref:`ATC ` - * :ref:`AWPMD ` + * :ref:`APIP ` * :ref:`COLVARS ` * :ref:`COMPRESS ` * :ref:`ELECTRODE ` @@ -60,7 +59,6 @@ packages: * :ref:`OPENMP ` * :ref:`OPT ` * :ref:`PLUMED ` - * :ref:`POEMS ` * :ref:`PYTHON ` * :ref:`QMMM ` * :ref:`RHEO ` @@ -68,8 +66,8 @@ packages: * :ref:`VORONOI ` * :ref:`VTK ` -The mechanism for including packages is simple but different for CMake -versus make.
+The mechanism for including packages is simple but different for the CMake +build system in comparison to the traditional make build. .. tabs:: @@ -146,11 +144,14 @@ other files dependent on that package are also excluded. .. note:: - By default no packages are installed. Prior to August 2018, however, - if you downloaded a tarball, 3 packages (KSPACE, MANYBODY, MOLECULE) - were pre-installed via the traditional make procedure in the ``src`` - directory. That is no longer the case, so that CMake will build - as-is without needing to uninstall those packages. + By default **no** packages are installed. Prior to August 2018, + however, if you downloaded a tarball, 3 packages (KSPACE, MANYBODY, + MOLECULE) were pre-installed via the traditional make procedure in + the ``src`` directory. That is no longer the case, so that CMake + will build as-is without needing to first uninstall those + packages. You can quickly include those packages (plus RIGID) by + using the "basic" preset with CMake or ``make yes-basic`` with + traditional make as discussed below. ---------- @@ -263,10 +264,6 @@ These commands install/uninstall sets of packages: make no-basic # remove a few commonly used packages make yes-most # install most packages w/o libs make no-most # remove most packages w/o libs - make yes-lib # install packages that require extra libraries - make no-lib # uninstall packages that require extra libraries - make yes-ext # install packages that require external libraries - make no-ext # uninstall packages that require external libraries which install/uninstall various sets of packages. Typing ``make package`` will list all these commands. @@ -276,7 +273,7 @@ package`` will list all these commands. Installing or uninstalling a package for the make based build process works by simply copying files back and forth between the main source directory src and the subdirectories with the package name (e.g.
- src/KSPACE, src/ATC), so that the files are included or excluded + src/KSPACE, src/MANYBODY), so that the files are included or excluded when LAMMPS is built. Only source files in the src folder will be compiled. diff --git a/doc/src/Build_prerequisites.rst new file mode 100644 index 0000000000..874daa82a3 --- /dev/null +++ b/doc/src/Build_prerequisites.rst @@ -0,0 +1,22 @@ +Prerequisites +------------- + +Which software you need to compile and use LAMMPS strongly depends on +which :doc:`features and settings ` and which +:doc:`optional packages ` you are trying to include. +Common to all is that you need a C++ and C compiler, where the C++ +compiler has to support at least the C++17 standard (note that some +compilers require a command-line flag to activate C++17 support). +Furthermore, if you are building with CMake, you need at least CMake +version 3.20 and a compatible build tool (make or ninja-build); if you +are building with the legacy GNU make based build system you need GNU +make (other make variants are not going to work since the build system +uses features unique to GNU make) and a Unix-like build environment with +a Bourne shell, and shell tools like "sed", "grep", "touch", "test", +"tr", "cp", "mv", "rm", "ln", "diff" and so on. Parts of LAMMPS +interface with or use Python version 3.6 or later. + +The LAMMPS developers aim to keep LAMMPS very portable and usable - +at least in parts - on most operating systems commonly used for +running MD simulations. Please see the :doc:`section on portability +` for more details. diff --git a/doc/src/Build_settings.rst index 7c16409995..d9bd2cbc95 100644 --- a/doc/src/Build_settings.rst +++ b/doc/src/Build_settings.rst @@ -8,7 +8,7 @@ Optional build settings LAMMPS can be built with several optional settings. Each subsection explains how to do this for building both with CMake and make.
-* `C++11 and C++17 standard compliance`_ when building all of LAMMPS +* `C++17 standard compliance`_ when building all of LAMMPS * `FFT library`_ for use with the :doc:`kspace_style pppm ` command * `Size of LAMMPS integer types and size limits`_ * `Read or write compressed files`_ @@ -21,37 +21,28 @@ explains how to do this for building both with CMake and make. ---------- -.. _cxx11: +.. _cxx17: -C++11 and C++17 standard compliance ------------------------------------ +C++17 standard compliance +------------------------- -A C++11 standard compatible compiler is currently the minimum -requirement for compiling LAMMPS. LAMMPS version 3 March 2020 is the -last version compatible with the previous C++98 standard for the core -code and most packages. Most currently used C++ compilers are compatible -with C++11, but some older ones may need extra flags to enable C++11 -compliance. Example for GNU c++ 4.8.x: +.. versionchanged:: 10Sep2025 + +A C++17 standard compatible compiler is currently the minimum +requirement for compiling LAMMPS. LAMMPS version 22 July 2025 is the +last version compatible with the C++11 standard for the core code and +most packages. Most currently used C++ compilers are compatible with +C++17, but some older ones may need extra flags to enable C++17 +compliance. .. code-block:: make - CCFLAGS = -g -O3 -std=c++11 + CCFLAGS = -g -O3 -std=c++17 Individual packages may require compliance with a later C++ standard -like C++14 or C++17. These requirements will be documented with the +like C++20. These requirements will be documented with the :doc:`individual packages `. -.. versionchanged:: 4Feb2025 - -Starting with LAMMPS version 4 February 2025 we are starting a -transition to require the C++17 standard. Most current compilers are -compatible and if the C++17 standard is available by default, LAMMPS -will enable C++17 and will compile normally. 
If the chosen compiler is -not compatible with C++17, but only supports C++11, then the define --DLAMMPS_CXX11 is required to fall back to compiling with a C++11 -compiler. After the next stable release of LAMMPS in summer 2025, the -LAMMPS development branch and future releases will require C++17. - ---------- .. _fft: @@ -286,7 +277,7 @@ find a heFFTe installation with the correct back end (e.g., FFTW or MKL), it will attempt to download and build the library automatically. In this case, LAMMPS CMake will also accept all heFFTe specific variables listed in the `heFFTe documentation -`_ +`_ and those variables will be passed into the heFFTe build. ---------- @@ -358,12 +349,10 @@ The "bigbig" setting increases the size of image flags and atom IDs over the default "smallbig" setting. These are limits for the core of the LAMMPS code, specific features or -some styles may impose additional limits. The :ref:`ATC -` package cannot be compiled with the "bigbig" setting. -Also, there are limitations when using the library interface where some -functions with known issues have been replaced by dummy calls printing a -corresponding error message rather than crashing randomly or corrupting -data. +some styles may impose additional limits. Also, there are limitations +when using the library interface where some functions with known issues +have been replaced by dummy calls printing a corresponding error message +rather than crashing randomly or corrupting data. Atom IDs are not required for atomic systems which do not store bond topology information, though IDs are enabled by default. The @@ -503,7 +492,7 @@ during a run. library and lead to simulations using compressed output or input to hang or crash. For selected operations, compressed file I/O is also available using a compression library instead, which is what the - :ref:`COMPRESS package ` enables. + :ref:`COMPRESS package ` provides. 
-------------------------------------------------- @@ -565,7 +554,7 @@ folder as examples of how those kinds of potential files look like and for use with the provided input examples in the ``examples`` tree. To keep the size of the distributed LAMMPS source package small, very large potential files (> 5 MBytes) are not bundled, but only downloaded on -demand when the :doc:`corresponding package ` is +demand when the :doc:`corresponding package ` is installed. This automatic download can be prevented when :doc:`building LAMMPS with CMake ` by adding the setting `-D DOWNLOAD_POTENTIALS=off` when configuring. diff --git a/doc/src/Commands_compute.rst b/doc/src/Commands_compute.rst index b53d9d6820..c61501e693 100644 --- a/doc/src/Commands_compute.rst +++ b/doc/src/Commands_compute.rst @@ -30,6 +30,7 @@ KOKKOS, o = OPENMP, t = OPT. * :doc:`cnp/atom ` * :doc:`com ` * :doc:`com/chunk ` + * :doc:`composition/atom (k) ` * :doc:`contact/atom ` * :doc:`coord/atom (k) ` * :doc:`count/type ` @@ -78,7 +79,6 @@ KOKKOS, o = OPENMP, t = OPT. * :doc:`ke/atom/eff ` * :doc:`ke/eff ` * :doc:`ke/rigid ` - * :doc:`composition/atom (k) ` * :doc:`mliap ` * :doc:`momentum ` * :doc:`msd ` diff --git a/doc/src/Commands_fix.rst b/doc/src/Commands_fix.rst index 35c3804969..628fde8549 100644 --- a/doc/src/Commands_fix.rst +++ b/doc/src/Commands_fix.rst @@ -20,8 +20,8 @@ OPT. * :doc:`amoeba/bitorsion ` * :doc:`amoeba/pitorsion ` * :doc:`append/atoms ` - * :doc:`atc ` * :doc:`atom/swap ` + * :doc:`atom_weight/apip ` * :doc:`ave/atom ` * :doc:`ave/chunk ` * :doc:`ave/correlate ` @@ -29,6 +29,7 @@ OPT. * :doc:`ave/grid ` * :doc:`ave/histo ` * :doc:`ave/histo/weight ` + * :doc:`ave/moments ` * :doc:`ave/time ` * :doc:`aveforce ` * :doc:`balance ` @@ -64,7 +65,7 @@ OPT. 
* :doc:`electrode/conp (i) ` * :doc:`electrode/conq (i) ` * :doc:`electrode/thermo (i) ` - * :doc:`electron/stopping ` + * :doc:`electron/stopping (k) ` * :doc:`electron/stopping/fit ` * :doc:`enforce2d (k) ` * :doc:`eos/cv ` @@ -77,6 +78,7 @@ OPT. * :doc:`flow/gauss ` * :doc:`freeze (k) ` * :doc:`gcmc ` + * :doc:`gjf ` * :doc:`gld ` * :doc:`gle ` * :doc:`gravity (ko) ` @@ -84,11 +86,14 @@ OPT. * :doc:`halt ` * :doc:`heat ` * :doc:`heat/flow ` + * :doc:`hmc ` * :doc:`hyper/global ` * :doc:`hyper/local ` * :doc:`imd ` * :doc:`indent ` * :doc:`ipi ` + * :doc:`lambda/apip ` + * :doc:`lambda_thermostat/apip ` * :doc:`langevin (k) ` * :doc:`langevin/drude ` * :doc:`langevin/eff ` @@ -111,6 +116,7 @@ OPT. * :doc:`mvv/tdpd ` * :doc:`neb ` * :doc:`neb/spin ` + * :doc:`neighbor/swap ` * :doc:`nonaffine/displacement ` * :doc:`nph (ko) ` * :doc:`nph/asphere (o) ` @@ -130,7 +136,6 @@ OPT. * :doc:`nve (giko) ` * :doc:`nve/asphere (gi) ` * :doc:`nve/asphere/noforce ` - * :doc:`nve/awpmd ` * :doc:`nve/body ` * :doc:`nve/dot ` * :doc:`nve/dotc/langevin ` @@ -166,7 +171,6 @@ OPT. * :doc:`pimd/nvt/bosonic ` * :doc:`planeforce ` * :doc:`plumed ` - * :doc:`poems ` * :doc:`polarize/bem/gmres ` * :doc:`polarize/bem/icc ` * :doc:`polarize/functional ` @@ -216,6 +220,7 @@ OPT. * :doc:`rigid/small (o) ` * :doc:`rx (k) ` * :doc:`saed/vtk ` + * :doc:`set ` * :doc:`setforce (k) ` * :doc:`setforce/spin ` * :doc:`sgcmc ` diff --git a/doc/src/Commands_kspace.rst b/doc/src/Commands_kspace.rst index 0d9b34a2cc..c37d9eee48 100644 --- a/doc/src/Commands_kspace.rst +++ b/doc/src/Commands_kspace.rst @@ -31,3 +31,5 @@ OPT. * :doc:`pppm/dielectric ` * :doc:`pppm/electrode (i) ` * :doc:`scafacos ` + * :doc:`zero ` + diff --git a/doc/src/Commands_pair.rst b/doc/src/Commands_pair.rst index 362bccb9e4..0e2eb49003 100644 --- a/doc/src/Commands_pair.rst +++ b/doc/src/Commands_pair.rst @@ -28,7 +28,6 @@ OPT. 
* :doc:`airebo/morse (io) ` * :doc:`amoeba (g) ` * :doc:`atm ` - * :doc:`awpmd/cut ` * :doc:`beck (go) ` * :doc:`body/nparticle ` * :doc:`body/rounded/polygon ` @@ -96,7 +95,9 @@ OPT. * :doc:`eam/cd ` * :doc:`eam/cd/old ` * :doc:`eam/fs (gikot) ` + * :doc:`eam/fs/apip ` * :doc:`eam/he ` + * :doc:`eam/apip ` * :doc:`edip (o) ` * :doc:`edip/multi ` * :doc:`edpd (g) ` @@ -124,6 +125,9 @@ OPT. * :doc:`ilp/tmd (t) ` * :doc:`kolmogorov/crespi/full ` * :doc:`kolmogorov/crespi/z ` + * :doc:`lambda/input/apip ` + * :doc:`lambda/input/csp/apip ` + * :doc:`lambda/zone/apip ` * :doc:`lcbop ` * :doc:`lebedeva/z ` * :doc:`lennard/mdf ` @@ -237,6 +241,9 @@ OPT. * :doc:`oxrna2/coaxstk ` * :doc:`pace (k) ` * :doc:`pace/extrapolation (k) ` + * :doc:`pace/apip ` + * :doc:`pace/fast/apip ` + * :doc:`pace/precise/apip ` * :doc:`pedone (o) ` * :doc:`pod (k) ` * :doc:`peri/eps ` diff --git a/doc/src/Commands_removed.rst b/doc/src/Commands_removed.rst index cea964fe79..e48078ce9b 100644 --- a/doc/src/Commands_removed.rst +++ b/doc/src/Commands_removed.rst @@ -1,7 +1,7 @@ Removed commands and packages ============================= -.. contents:: \ +.. contents:: ------ @@ -12,10 +12,51 @@ stop LAMMPS and print a suitable error message in most cases, when a style/command is used that has been removed or will replace the command with the direct alternative (if available) and print a warning. +ATC, AWPMD, and POEMS packages +------------------------------ + +.. deprecated:: 10Sep2025 + +The ATC, AWPMD, and POEMS packages were removed because they were +unmaintained for a long time and their legacy C++ programming style +started to create problems with modern C++ compilers. LAMMPS version +22 July 2025 is the last version that contains them. You have to +download and compile that version if you want to use any of these +packages. + +Neighbor style and comm mode multi/old +-------------------------------------- + +.. 
deprecated:: 10Sep2025 + +The original implementation of neighbor style multi and comm mode multi, +most recently available under "multi/old", has been removed. The new +implementation should be used instead. + +LAMMPS-GUI source code +---------------------- + +.. deprecated:: 10Sep2025 + +The LAMMPS-GUI sources used to be included in LAMMPS but they are now +hosted in their own git repository at +https://github.com/akohlmey/lammps-gui/ and the corresponding online +documentation is at https://lammps-gui.lammps.org/. + +GJF formulation in fix langevin +------------------------------- + +.. deprecated:: 22Jul2025 + +The *gjf* keyword in fix langevin has been removed. The GJF +functionality has been moved to its own fix style :doc:`fix gjf +`. + + LAMMPS shell ------------ -.. versionchanged:: 29Aug2024 +.. deprecated:: 29Aug2024 The LAMMPS shell has been removed from the LAMMPS distribution. Users are encouraged to use the :ref:`LAMMPS-GUI ` tool instead. @@ -23,7 +64,7 @@ are encouraged to use the :ref:`LAMMPS-GUI ` tool instead. i-PI tool --------- -.. versionchanged:: 27Jun2024 +.. deprecated:: 27Jun2024 The i-PI tool has been removed from the LAMMPS distribution. Instead, instructions to install i-PI from PyPI via pip are provided. diff --git a/doc/src/Developer_code_design.rst b/doc/src/Developer_code_design.rst index 9213efa18f..e38f843cdc 100644 --- a/doc/src/Developer_code_design.rst +++ b/doc/src/Developer_code_design.rst @@ -27,11 +27,13 @@ then, we have begun to replace C-style constructs with equivalent C++ functionality. This was taken either from the C++ standard library or implemented as custom classes or functions. The goal is to improve readability of the code and to increase code reuse through abstraction -of commonly used functionality. +of commonly used functionality. In summer 2025, after the 22 July 2025 +stable release, the minimum required C++ language standard was raised to +C++17. .. 
note:: - Please note that as of spring 2023 there is still a sizable chunk of + Please note that as of summer 2025 there is still a sizable chunk of legacy code in LAMMPS that has not yet been refactored to reflect these style conventions in full. LAMMPS has a large code base and many contributors. There is also a hierarchy of precedence in which @@ -276,10 +278,12 @@ I/O and output formatting C-style stdio versus C++ style iostreams ======================================== -LAMMPS uses the "stdio" library of the standard C library for reading -from and writing to files and console instead of C++ "iostreams". -This is mainly motivated by better performance, better control over -formatting, and less effort to achieve specific formatting. +LAMMPS uses the `stdio ` +library of the standard C library for reading from and writing to files +and console instead of C++ `iostreams +`_. This is mainly motivated +by better performance, better control over formatting, and less effort +to achieve specific formatting. Since mixing "stdio" and "iostreams" can lead to unexpected behavior, use of the latter is strongly discouraged. Output to the screen should @@ -290,11 +294,12 @@ Furthermore, output should generally only be done by MPI rank 0 ``logfile`` should use the :cpp:func:`utils::logmesg() convenience function `. -We discourage the use of stringstreams because the bundled {fmt} library -and the customized tokenizer classes provide the same functionality in a -cleaner way with better performance. This also helps maintain a -consistent programming syntax with code from many different -contributors. +We discourage the use of `stringstreams +`_ because +the bundled {fmt} library and the customized tokenizer classes provide +the same functionality in a cleaner way with better performance. This +also helps maintain a consistent programming syntax with code from many +different contributors. 
Formatting with the {fmt} library =================================== @@ -327,11 +332,13 @@ Formatted strings are frequently created by calling the In the simplest case, no additional characters are needed, as {fmt} will choose the default format based on the data type of the argument. Otherwise, the :cpp:func:`utils::print() ` -function may be used instead of ``printf()`` or ``fprintf()``. In -addition, several LAMMPS output functions, that originally accepted a -single string as argument have been overloaded to accept a format string -with optional arguments as well (e.g., ``Error::all()``, -``Error::one()``, :cpp:func:`utils::logmesg() +function may be used instead of ``printf()`` or ``fprintf()``. The +equivalent `std::print() function +`_ will become +available in C++23. In addition, several LAMMPS output functions that +originally accepted a single string as their argument have been overloaded to +accept a format string with optional arguments as well (e.g., +``Error::all()``, ``Error::one()``, :cpp:func:`utils::logmesg() +`). Summary of the {fmt} format syntax @@ -397,8 +404,24 @@ value, for example "{:{}d}" will consume two integer arguments, the first will be the value shown and the second the minimum width. For more details and examples, please consult the `{fmt} syntax -documentation `_ website. - +documentation `_ website. Since we +plan to eventually transition from {fmt} to using ``std::format()`` +of the C++ standard library, it is advisable to avoid using any +extensions beyond what the `C++20 standard offers +`_. + +JSON format input and output +^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Since LAMMPS version 12 June 2025, the LAMMPS source code includes a +copy of the header-only JSON C++ library from https://json.nlohmann.me/. +As with the {fmt} library described above, some modifications to the +namespace have been made to avoid collisions with other uses of the same +library, which may use a different, incompatible version. 
To have a +uniform interface with other parts of LAMMPS, you should be using +``#include "json.h"`` or ``#include "json_fwd.h"`` (in header files). +See the implementation of the :doc:`molecule command ` for an +example of using this library. Memory management ^^^^^^^^^^^^^^^^^ diff --git a/doc/src/Developer_notes.rst b/doc/src/Developer_notes.rst index 4c789abb3a..9a1b3899f6 100644 --- a/doc/src/Developer_notes.rst +++ b/doc/src/Developer_notes.rst @@ -248,7 +248,7 @@ caught by the LAMMPS ``main()`` program and then handled accordingly. The reason for this approach is to support applications, especially graphical applications like :ref:`LAMMPS-GUI `, that are linked to the LAMMPS library and have a mechanism to avoid that an error -in LAMMPS terminates the application. By catching the exceptions, the +in LAMMPS terminates the application. By catching the exceptions, the application can delete the failing LAMMPS class instance and create a new one to try again. In a similar fashion, the :doc:`LAMMPS Python module ` checks for this and then re-throws corresponding @@ -292,16 +292,21 @@ processing similar to the "format()" functionality in Python. .. note:: - For commands like :doc:`fix ave/time ` that accept - wildcard arguments, the :cpp:func:`utils::expand_args` function - may be passed as an optional argument where the function will provide - a map to the original arguments from the expanded argument indices. + Commands that accept wildcard arguments, for example + :doc:`fix ave/time `, use + :cpp:func:`utils::expand_args() ` + to convert the wildcards into a list of explicit arguments. + This function accepts a pointer address as an optional argument, + which will be set to a map to the original arguments from the + expanded argument indices. Please see the corresponding source + code for details on how to apply this map in error messages. 
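The wildcard expansion described in the note can be pictured with a simplified, self-contained sketch (a hypothetical helper, not the actual `utils::expand_args()` signature, which also handles index ranges and builds the argument-index map):

```cpp
#include <string>
#include <vector>

// Expand a "[*]" wildcard into explicit per-index arguments, e.g.
// "c_1[*]" with nmax = 3 becomes c_1[1], c_1[2], c_1[3].
std::vector<std::string> expand_wildcard(const std::string &arg, int nmax) {
  std::vector<std::string> expanded;
  const auto pos = arg.find("[*]");
  if (pos == std::string::npos) {  // no wildcard: keep the argument as-is
    expanded.push_back(arg);
    return expanded;
  }
  for (int i = 1; i <= nmax; ++i)  // replace "*" with each index in turn
    expanded.push_back(arg.substr(0, pos) + '[' + std::to_string(i) + ']');
  return expanded;
}
```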
For complex errors, that can have multiple causes and which cannot be explained in a single line, you can append to the error message, the -string created by :cpp:func:`utils::errorurl`, which then provides a -URL pointing to a paragraph of the :doc:`Errors_details` that -corresponds to the number provided. Example: +string created by :cpp:func:`utils::errorurl() +`, which then provides a URL pointing to a +paragraph of the :doc:`Errors_details` that corresponds to the number +provided. Example: .. code-block:: c++ diff --git a/doc/src/Developer_org.rst b/doc/src/Developer_org.rst index 93632ae03f..5ad8f94993 100644 --- a/doc/src/Developer_org.rst +++ b/doc/src/Developer_org.rst @@ -28,7 +28,7 @@ The ``lib`` directory contains the source code for several supporting libraries or files with configuration settings to use globally installed libraries, that are required by some optional packages. They may include python scripts that can transparently download additional source -code on request. Each subdirectory, like ``lib/poems`` or ``lib/gpu``, +code on request. Each subdirectory, like ``lib/colvars`` or ``lib/gpu``, contains the source files, some of which are in different languages such as Fortran or CUDA. These libraries included in the LAMMPS build, if the corresponding package is installed. diff --git a/doc/src/Developer_platform.rst b/doc/src/Developer_platform.rst index 9b05299146..ad8899478a 100644 --- a/doc/src/Developer_platform.rst +++ b/doc/src/Developer_platform.rst @@ -49,6 +49,16 @@ Platform information functions File and path functions and global constants ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +Since we are requiring C++17 to compile LAMMPS, you can also make use of +the functionality of the `C++ filesystem library +`_. The following +functions are in part convenience functions or emulate the behavior of +similar Python functions or Unix shell commands. 
Please note that +you need to use the ``string()`` member function of the +`std::filesystem::path class +`_ to get access +to the path as a C++ string class instance. + .. doxygenvariable:: filepathsep :project: progguide @@ -61,6 +71,9 @@ File and path functions and global constants .. doxygenfunction:: path_basename :project: progguide +.. doxygenfunction:: path_dirname + :project: progguide + .. doxygenfunction:: path_join :project: progguide @@ -73,12 +86,6 @@ File and path functions and global constants .. doxygenfunction:: disk_free :project: progguide -.. doxygenfunction:: path_is_directory - :project: progguide - -.. doxygenfunction:: current_directory - :project: progguide - .. doxygenfunction:: list_directory :project: progguide diff --git a/doc/src/Developer_plugins.rst b/doc/src/Developer_plugins.rst index 354350dde7..3ef757abf0 100644 --- a/doc/src/Developer_plugins.rst +++ b/doc/src/Developer_plugins.rst @@ -68,24 +68,25 @@ Members of ``lammpsplugin_t`` * - author - String with the name and email of the author * - creator.v1 - - Pointer to factory function for pair, bond, angle, dihedral, improper, kspace, or command styles + - Pointer to factory function for pair, bond, angle, dihedral, improper, kspace, command, or minimize styles * - creator.v2 - - Pointer to factory function for compute, fix, or region styles + - Pointer to factory function for compute, fix, region, or run styles * - handle - Pointer to the open DSO file handle Only one of the two alternate creator entries can be used at a time and which of those is determined by the style of plugin. The "creator.v1" element is for factory functions of supported styles computing forces -(i.e. pair, bond, angle, dihedral, or improper styles) or command styles -and the function takes as single argument the pointer to the LAMMPS -instance. The factory function is cast to the ``lammpsplugin_factory1`` -type before assignment. 
The "creator.v2" element is for factory -functions creating an instance of a fix, compute, or region style and -takes three arguments: a pointer to the LAMMPS instance, an integer with -the length of the argument list and a ``char **`` pointer to the list of -arguments. The factory function pointer needs to be cast to the -``lammpsplugin_factory2`` type before assignment. +(i.e. pair, bond, angle, dihedral, or improper styles), command styles, +or minimize styles; the function takes the pointer to the LAMMPS +instance as its single argument. The factory function is cast to the +``lammpsplugin_factory1`` type before assignment. The "creator.v2" +element is for factory functions creating an instance of a fix, compute, +region, or run style and takes three arguments: a pointer to the LAMMPS +instance, an integer with the length of the argument list and a ``char +**`` pointer to the list of arguments. The factory function pointer +needs to be cast to the ``lammpsplugin_factory2`` type before +assignment. Pair style example ^^^^^^^^^^^^^^^^^^ @@ -247,8 +248,8 @@ DSO handle. The registration function is called with a pointer to the address of this struct and the pointer of the LAMMPS class. The registration function will then add the factory function of the plugin style to the respective style map under the provided name. It will also make a copy of the struct -in a list of all loaded plugins and update the reference counter for loaded -plugins from this specific DSO file. +in a global list of all loaded plugins and update the reference counter for +loaded plugins from this specific DSO file. The pair style itself (i.e. the PairMorse2 class in this example) can be written just like any other pair style that is included in LAMMPS. For @@ -263,6 +264,21 @@ the plugin will override the existing code. This can be used to modify the behavior of existing styles or to debug new versions of them without having to re-compile or re-install all of LAMMPS. +.. 
versionchanged:: 12Jun2025 + +When using the :doc:`clear ` command, plugins are not unloaded +but restored to their respective style maps. This also applies when +multiple LAMMPS instances are created and deleted through the library +interface. The :doc:`plugin load ` command may be issued +again, but already loaded plugins will be skipped. To replace +plugins, they must be explicitly unloaded with :doc:`plugin unload +`. When multiple LAMMPS instances are created concurrently, any +loaded plugins will be added to the global list of plugins, but are not +immediately available to any LAMMPS instance that was created before +loading the plugin. To "import" such plugins, the :doc:`plugin restore +` command may be used. Plugins are only removed when they are explicitly +unloaded or the LAMMPS interface is "finalized". + Compiling plugins ^^^^^^^^^^^^^^^^^ @@ -300,3 +316,8 @@ license). This will automatically set the required environment variable and launching a (compatible) LAMMPS binary will load and register the plugin and the ML-PACE package can then be used as it was linked into LAMMPS. + +--------- + +You can find additional LAMMPS plugins in the `LAMMPS plugins source +code repository on GitHub `_. diff --git a/doc/src/Developer_updating.rst b/doc/src/Developer_updating.rst index 21980be3d8..364c0e6ea0 100644 --- a/doc/src/Developer_updating.rst +++ b/doc/src/Developer_updating.rst @@ -29,6 +29,8 @@ Available topics in mostly chronological order are: - `Rename of fix STORE/PERATOM to fix STORE/ATOM and change of arguments`_ - `Use Output::get_dump_by_id() instead of Output::find_dump()`_ - `Refactored grid communication using Grid3d/Grid2d classes instead of GridComm`_ +- `FLERR as first argument to minimum image functions in Domain class`_ +- `Use utils::logmesg() instead of error->warning()`_ ---- @@ -162,7 +164,7 @@ New: .. 
seealso:: :cpp:func:`utils::count_words() `, - :cpp:func:`utils::trim_comments() ` + :cpp:func:`utils::trim_comment() ` Use utils::numeric() functions instead of force->numeric() @@ -333,7 +335,7 @@ Use of "override" instead of "virtual" .. versionchanged:: 17Feb2022 -Since LAMMPS requires C++11, we switched to use the "override" keyword +Since LAMMPS requires C++17, we switched to use the "override" keyword instead of "virtual" to indicate polymorphism in derived classes. This allows the C++ compiler to better detect inconsistencies when an override is intended or not. Please note that "override" has to be @@ -610,3 +612,72 @@ KSpace solvers which use distributed FFT grids: - ``src/KSPACE/pppm.cpp`` This change is **required** or else the code will not compile. + +FLERR as first argument to minimum image functions in Domain class +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +.. versionchanged:: 12Jun2025 + +The ``Domain::minimum_image()`` and ``Domain::minimum_image_big()`` +functions were changed to take the ``FLERR`` macro as the first argument. +This way the error message indicates *where* the function was called +instead of pointing to the implementation of the function. Example: + +Old: + +.. code-block:: c++ + + double delx1 = x[i1][0] - x[i2][0]; + double dely1 = x[i1][1] - x[i2][1]; + double delz1 = x[i1][2] - x[i2][2]; + domain->minimum_image(delx1, dely1, delz1); + double r1 = sqrt(delx1 * delx1 + dely1 * dely1 + delz1 * delz1); + + double delx2 = x[i3][0] - x[i2][0]; + double dely2 = x[i3][1] - x[i2][1]; + double delz2 = x[i3][2] - x[i2][2]; + domain->minimum_image_big(delx2, dely2, delz2); + double r2 = sqrt(delx2 * delx2 + dely2 * dely2 + delz2 * delz2); + +New: + +.. 
code-block:: c++ + + double delx1 = x[i1][0] - x[i2][0]; + double dely1 = x[i1][1] - x[i2][1]; + double delz1 = x[i1][2] - x[i2][2]; + domain->minimum_image(FLERR, delx1, dely1, delz1); + double r1 = sqrt(delx1 * delx1 + dely1 * dely1 + delz1 * delz1); + + double delx2 = x[i3][0] - x[i2][0]; + double dely2 = x[i3][1] - x[i2][1]; + double delz2 = x[i3][2] - x[i2][2]; + domain->minimum_image_big(FLERR, delx2, dely2, delz2); + double r2 = sqrt(delx2 * delx2 + dely2 * dely2 + delz2 * delz2); + +This change is **required** or else the code will not compile. + +Use utils::logmesg() instead of error->warning() +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +.. versionchanged:: 22Jul2025 + +The ``Error::message()`` method has been removed since its functionality +has been superseded by the :cpp:func:`utils::logmesg() +` function. + +Old: + +.. code-block:: c++ + + if (comm->me == 0) { + error->message(FLERR, "INFO: About to read data file: {}", filename); + } + +New: + +.. code-block:: c++ + + if (comm->me == 0) utils::logmesg(lmp, "INFO: About to read data file: {}\n", filename); + +This change is **required** or else the code will not compile. diff --git a/doc/src/Developer_write_command.rst b/doc/src/Developer_write_command.rst index 16ac2092f6..05d1b2cf96 100644 --- a/doc/src/Developer_write_command.rst +++ b/doc/src/Developer_write_command.rst @@ -38,7 +38,7 @@ Interfacing the *libcurl* library """"""""""""""""""""""""""""""""" Rather than implementing the various protocols for downloading files, we -rely on an external library: `libcurl library `_. +rely on an external library: `libcurl library `_. This requires that the library and its headers are installed. For the traditional GNU make build system, this simply requires edits to the machine makefile to add compilation flags like for other libraries. 
For the CMake diff --git a/doc/src/Errors_common.rst b/doc/src/Errors_common.rst index 3229181d00..23c450c4fc 100644 --- a/doc/src/Errors_common.rst +++ b/doc/src/Errors_common.rst @@ -1,124 +1,149 @@ -Common problems -=============== - -If two LAMMPS runs do not produce the exact same answer on different -machines or different numbers of processors, this is typically not a -bug. In theory you should get identical answers on any number of -processors and on any machine. In practice, numerical round-off can -cause slight differences and eventual divergence of molecular dynamics -phase space trajectories within a few 100s or few 1000s of timesteps. -However, the statistical properties of the two runs (e.g. average -energy or temperature) should still be the same. - -If the :doc:`velocity ` command is used to set initial atom -velocities, a particular atom can be assigned a different velocity -when the problem is run on a different number of processors or on -different machines. If this happens, the phase space trajectories of -the two simulations will rapidly diverge. See the discussion of the -*loop* option in the :doc:`velocity ` command for details and -options that avoid this issue. - -Similarly, the :doc:`create_atoms ` command generates a -lattice of atoms. For the same physical system, the ordering and -numbering of atoms by atom ID may be different depending on the number -of processors. - -Some commands use random number generators which may be setup to -produce different random number streams on each processor and hence -will produce different effects when run on different numbers of -processors. A commonly-used example is the :doc:`fix langevin ` command for thermostatting. - -A LAMMPS simulation typically has two stages, setup and run. Most -LAMMPS errors are detected at setup time; others like a bond -stretching too far may not occur until the middle of a run. 
- -LAMMPS tries to flag errors and print informative error messages so -you can fix the problem. For most errors it will also print the last -input script command that it was processing. Of course, LAMMPS cannot -figure out your physics or numerical mistakes, like choosing too big a -timestep, specifying erroneous force field coefficients, or putting 2 -atoms on top of each other! If you run into errors that LAMMPS -does not catch that you think it should flag, please send an email to -the `developers `_ or create an new -topic on the dedicated `MatSci forum section `_. - -If you get an error message about an invalid command in your input -script, you can determine what command is causing the problem by -looking in the log.lammps file or using the :doc:`echo command ` -to see it on the screen. If you get an error like "Invalid ... -style", with ... being fix, compute, pair, etc, it means that you -mistyped the style name or that the command is part of an optional -package which was not compiled into your executable. The list of -available styles in your executable can be listed by using -:doc:`the -h command-line switch `. The installation and -compilation of optional packages is explained on the -:doc:`Build packages ` doc page. - -For a given command, LAMMPS expects certain arguments in a specified -order. If you mess this up, LAMMPS will often flag the error, but it -may also simply read a bogus argument and assign a value that is -valid, but not what you wanted. E.g. trying to read the string "abc" -as an integer value of 0. Careful reading of the associated doc page -for the command should allow you to fix these problems. 
In most cases, -where LAMMPS expects to read a number, either integer or floating point, -it performs a stringent test on whether the provided input actually -is an integer or floating-point number, respectively, and reject the -input with an error message (for instance, when an integer is required, -but a floating-point number 1.0 is provided): - -.. parsed-literal:: - - ERROR: Expected integer parameter instead of '1.0' in input script or data file - -Some commands allow for using variable references in place of numeric -constants so that the value can be evaluated and may change over the -course of a run. This is typically done with the syntax *v_name* for a -parameter, where name is the name of the variable. On the other hand, -immediate variable expansion with the syntax ${name} is performed while -reading the input and before parsing commands, - -.. note:: - - Using a variable reference (i.e. *v_name*) is only allowed if - the documentation of the corresponding command explicitly says it is. - Otherwise, you will receive an error message of this kind: - -.. parsed-literal:: - - ERROR: Expected floating point parameter instead of 'v_name' in input script or data file - -Generally, LAMMPS will print a message to the screen and logfile and -exit gracefully when it encounters a fatal error. Sometimes it will -print a WARNING to the screen and logfile and continue on; you can -decide if the WARNING is important or not. A WARNING message that is -generated in the middle of a run is only printed to the screen, not to -the logfile, to avoid cluttering up thermodynamic output. If LAMMPS -crashes or hangs without spitting out an error message first then it -could be a bug (see :doc:`this section `) or one of the following -cases: - -LAMMPS runs in the available memory a processor allows to be -allocated. Most reasonable MD runs are compute limited, not memory -limited, so this should not be a bottleneck on most platforms. 
Almost -all large memory allocations in the code are done via C-style malloc's -which will generate an error message if you run out of memory. -Smaller chunks of memory are allocated via C++ "new" statements. If -you are unlucky you could run out of memory just when one of these -small requests is made, in which case the code will crash or hang (in -parallel), since LAMMPS does not trap on those errors. - -Illegal arithmetic can cause LAMMPS to run slow or crash. This is -typically due to invalid physics and numerics that your simulation is -computing. If you see wild thermodynamic values or NaN values in your -LAMMPS output, something is wrong with your simulation. If you -suspect this is happening, it is a good idea to print out -thermodynamic info frequently (e.g. every timestep) via the -:doc:`thermo ` so you can monitor what is happening. -Visualizing the atom movement is also a good idea to ensure your model -is behaving as you expect. - -In parallel, one way LAMMPS can hang is due to how different MPI -implementations handle buffering of messages. If the code hangs -without an error message, it may be that you need to specify an MPI -setting or two (usually via an environment variable) to enable -buffering or boost the sizes of messages that can be buffered. +Common issues that are often regarded as bugs +============================================= + +The list below contains some notes on LAMMPS behavior that is +sometimes unexpected or even considered a bug. Most of the time, these +are just issues of understanding how LAMMPS is implemented and +parallelized. Please also have a look at the :doc:`Error details +discussions page ` that contains recommendations for +tracking down issues and explanations for error messages that may +sometimes be confusing or need additional explanation. + +- A LAMMPS simulation typically has two stages: 1) issuing commands + and 2) running or minimizing. 
Most LAMMPS errors are detected in stage 1), + others at the beginning of stage 2), and finally others, like a bond + stretching too far or atoms or bonds being lost, may not occur until the + middle of a run. + +- If two LAMMPS runs do not produce the exact same answer on different + machines or different numbers of processors, this is typically not a + bug. In theory you should get identical answers on any number of + processors and on any machine. In practice, numerical round-off can + cause slight differences and eventual divergence of molecular dynamics + phase space trajectories within a few hundred or a few thousand timesteps. + This can be triggered by different ordering of atoms due to different + domain decompositions, but also through different CPU architectures, + different operating systems, different compilers or compiler versions, + different compiler optimization levels, or different FFT libraries. + However, the statistical properties of the two runs (e.g. average + energy or temperature) should still be the same. + +- If the :doc:`velocity ` command is used to set initial atom + velocities, a particular atom can be assigned a different velocity + when the problem is run on a different number of processors or on + different machines. If this happens, the phase space trajectories of + the two simulations will rapidly diverge. See the discussion of the + *loop* option in the :doc:`velocity ` command for details + and options that avoid this issue. + +- Similarly, the :doc:`create_atoms ` command generates a + lattice of atoms. For the same physical system, the ordering and + numbering of atoms by atom ID may be different depending on the number + of processors. + +- Some commands use random number generators which may be set up to + produce different random number streams on each processor and hence + will produce different effects when run on different numbers of + processors. A commonly used example is the :doc:`fix langevin + ` command for thermostatting. 
+ +- LAMMPS tries to flag errors and print informative error messages so + you can fix the problem. For most errors it will also print the last + input script command that it was processing or even point to the + keyword that is causing trouble. Of course, LAMMPS cannot figure out + your physics or numerical mistakes, like choosing too big a timestep, + specifying erroneous force field coefficients, or putting 2 atoms on + top of each other! Also, LAMMPS does not know what you *intend* to + do, but very strictly applies the syntax as described in the + documentation. If you run into errors that LAMMPS does not catch that + you think it should flag, please send an email to the `developers + `_ or create a new topic on the + dedicated `MatSci forum section `_. + +- If you get an error message about an invalid command in your input + script, you can determine what command is causing the problem by + looking in the log.lammps file or using the :doc:`echo command ` + to see it on the screen. If you get an error like "Invalid ... + style", with ... being fix, compute, pair, etc., it means that you + mistyped the style name or that the command is part of an optional + package which was not compiled into your executable. The styles + available in your executable can be listed by using + :doc:`the -h command-line switch `. The installation and + compilation of optional packages is explained on the :doc:`Build + packages ` doc page. + +- For a given command, LAMMPS expects certain arguments in a specified + order. If you mess this up, LAMMPS will often flag the error, but it + may also simply read a bogus argument and assign a value that is + valid, but not what you wanted, e.g. trying to read the string "abc" + as an integer value of 0. Careful reading of the associated doc page + for the command should allow you to fix these problems. 
In most cases, + where LAMMPS expects to read a number, either integer or floating + point, it performs a stringent test on whether the provided input + actually is an integer or floating-point number, respectively, and + rejects the input with an error message (for instance, when an integer + is required, but a floating-point number 1.0 is provided): + + .. parsed-literal:: + + ERROR: Expected integer parameter instead of '1.0' in input script or data file + +- Some commands allow using variable references in place of numeric + constants so that the value can be evaluated and may change over the + course of a run. This is typically done with the syntax *v_name* for + a parameter, where name is the name of the variable. On the other + hand, immediate variable expansion with the syntax ${name} is + performed while reading the input and before parsing commands. + + .. note:: + + Using a variable reference (i.e. *v_name*) is only allowed if + the documentation of the corresponding command explicitly says it is. + Otherwise, you will receive an error message of this kind: + + .. parsed-literal:: + + ERROR: Expected floating point parameter instead of 'v_name' in input script or data file + +- Generally, LAMMPS will print a message to the screen and logfile and + exit gracefully when it encounters a fatal error. When running in + parallel, this message may be stuck in an I/O buffer and LAMMPS will be + terminated before that buffer is printed. In that case you can try + adding the ``-nonblock`` or ``-nb`` command-line flag to turn off that + buffering. Please note that this should not be used for production + runs, since turning off buffering usually has a significant negative + impact on performance (even worse than :doc:`thermo_modify flush yes + `). Sometimes LAMMPS will print a WARNING to the + screen and logfile and continue on; you can decide if the WARNING is + important or not, but as a general rule do not ignore warnings that + you do not understand. 
A WARNING message that is generated in the middle + of a run is only printed to the screen, not to the logfile, to avoid + cluttering up thermodynamic output. If LAMMPS crashes or hangs + without generating an error message first, then it could be a bug + (see :doc:`this section `). + +- LAMMPS runs within the memory a processor allows to be + allocated. Most reasonable MD runs are compute limited, not memory + limited, so this should not be a bottleneck on most platforms. Almost + all large memory allocations in the code are done via C-style malloc calls, + which will generate an error message if you run out of memory. + Smaller chunks of memory are allocated via C++ "new" statements. If + you are unlucky you could run out of memory just when one of these + small requests is made, in which case the code will crash or hang (in + parallel). + +- Illegal arithmetic can cause LAMMPS to run slowly or crash. This is + typically due to invalid physics and numerics that your simulation is + computing. If you see wild thermodynamic values or NaN values in your + LAMMPS output, something is wrong with your simulation. If you + suspect this is happening, it is a good idea to print out + thermodynamic info frequently (e.g. every timestep) via the + :doc:`thermo ` command so you can monitor what is happening. + Visualizing the atom movement is also a good idea to ensure your model + is behaving as you expect. + +- When running in parallel with MPI, one way LAMMPS can hang is because + LAMMPS has come across an error condition, but only on one or a few + MPI processes and not all of them. LAMMPS has two different "stop + with an error message" functions and the correct one has to be called + or else it will hang. 
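The round-off sensitivity discussed in the list above can be demonstrated outside of LAMMPS. The following is a minimal Python sketch (not LAMMPS code; the helper name and data are invented for illustration) showing that summing identical per-atom contributions grouped as if by different domain decompositions changes the order of floating-point additions, which is why trajectories on different processor counts eventually diverge:

```python
# Sketch (not LAMMPS code): grouping the same per-atom contributions
# differently -- as different domain decompositions do -- changes the
# order of floating-point additions and thus the rounded result.
import random

random.seed(12345)
forces = [random.uniform(-1.0, 1.0) for _ in range(100000)]

def decomposed_sum(values, nprocs):
    """Partial sums per 'processor', followed by a final reduction."""
    chunk = len(values) // nprocs
    partials = [sum(values[i * chunk:(i + 1) * chunk]) for i in range(nprocs)]
    return sum(partials)

s2 = decomposed_sum(forces, 2)   # as if run on 2 processors
s4 = decomposed_sum(forces, 4)   # as if run on 4 processors

# The two totals agree to within a few machine epsilons, but may not be
# bit-for-bit identical -- enough to make MD trajectories diverge.
print(s2, s4, abs(s2 - s4))
```

This is the same effect as an MPI reduction over differently sized sub-domains: the statistical result is unchanged, but the last bits of the floating-point values are not.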
diff --git a/doc/src/Errors_details.rst index eba240aef7..c399cd1878 100644 --- a/doc/src/Errors_details.rst +++ b/doc/src/Errors_details.rst @@ -51,8 +51,11 @@ Parallel versus serial ^^^^^^^^^^^^^^^^^^^^^^ Issues where something is "lost" or "missing" often exhibit that issue -only when running in parallel. That doesn't mean there is no problem, -only the symptoms are not triggering an error quickly. Correspondingly, +*only* when running in parallel. That doesn't mean there is no problem +when running in serial, only that the symptoms do not trigger an error. +This may be because there is no domain decomposition with just one +processor and thus all atoms are accessible, or it may be because the +problem will manifest faster with smaller sub-domains. Correspondingly, errors may be triggered faster with more processors and thus smaller sub-domains. @@ -142,7 +145,7 @@ propagate. So-called :doc:`"soft-core" potentials ` or the :doc:`"soft" repulsive-only pair style ` are less prone for this behavior (depending on the settings in use) and can be used at the beginning of a simulation. Also, single precision numbers can -overflow much faster, so for the GPU or INTEL package it may be +overflow much faster, so for the GPU, KOKKOS, or INTEL package it may be beneficial to run with double precision initially before switching to mixed or single precision for faster execution when the system has relaxed. @@ -159,13 +162,17 @@ angle, dihedral, or improper with just one atom in the actual sub-domain. Typically, this cutoff is set to the largest cutoff from the :doc:`pair style(s) ` plus the :doc:`neighbor list skin distance ` and will typically be sufficient for all bonded -interactions. But if the pair style cutoff is small, this may not be -enough. 
LAMMPS will print a warning in this case using some heuristic -based on the equilibrium bond length, but that still may not be -sufficient for cases where the force constants are small and thus bonds -may be stretched very far. The communication cutoff can be adjusted -with :doc:`comm_modify cutoff \ `, but setting this -too large will waste CPU time and memory. +interactions. But if the pair style cutoff is small (e.g. with a +repulsive-only Lennard-Jones potential) this may not be enough. It is +even worse if there is no pair style defined (or the pair style is set +to "none"), since then there will be no ghost atoms created at all. + +The communication cutoff can be set or adjusted with :doc:`comm_modify +cutoff \ `, but setting this too large will waste +CPU time and memory. LAMMPS will print warnings in these cases. For +bonds it uses some heuristic based on the equilibrium bond length, but +that still may not be sufficient for cases where the force constants are +small and thus bonds may be stretched very far. + +.. _hint09: @@ -240,6 +247,25 @@ equal style (or similar) variables can only be expanded before the box is defined if they do not reference anything that cannot be defined before the box (e.g. a compute or fix reference or a thermo keyword). +.. _hint13: + +Illegal ... command +^^^^^^^^^^^^^^^^^^^ + +These are catchall error messages that used to be used a lot in LAMMPS +(also, programmers are sometimes lazy). They usually include the name of +the source file and the line where the error happened. This can be used +to track down what caused the error (most often some form of syntax error) +by looking at the source code. However, this has two disadvantages: one +has to check the source file from the exact same LAMMPS version, or else +the line number will be different; or the code may have been rewritten and +that specific error does not exist anymore. 
+ +The LAMMPS developers are committed to replacing these overly generic error +messages with more descriptive errors, e.g. listing *which* keyword was +causing the error, so that it will be much simpler to look up the +correct syntax in the manual (and without referring to the source code). + ------ .. _err0001: @@ -295,9 +321,10 @@ completely different format. Illegal variable command: expected X arguments but found Y ---------------------------------------------------------- -This error indicates that a variable command has the wrong number of -arguments. A common reason for this is that the variable expression has -whitespace, but is not enclosed in single or double quotes. +This error indicates that a variable command has either incorrectly +formatted arguments or the wrong number of arguments. A common reason +for this is that a variable expression contains whitespace, but is not +enclosed in single or double quotes. To explain, the LAMMPS input parser reads and processes lines. The resulting line is broken down into "words". Those are usually @@ -305,11 +332,12 @@ individual commands, labels, names, and values separated by whitespace (a space or tab character). For "words" that may contain whitespace, they have to be enclosed in single (') or double (") quotes. The parser will then remove the outermost pair of quotes and pass that string as -"word" to the variable command. +a single argument to the variable command. Thus missing quotes or accidental extra whitespace will trigger this error because the unquoted whitespace will result in the text being -broken into more "words", i.e. the variable expression being split. +broken into more "words" than expected, i.e. the variable expression +being split. .. _err0004: @@ -976,7 +1004,7 @@ There are multiple ways to get into contact and report your issue. 
In order of preference there are: - Submit a bug report `issue in the LAMMPS GitHub - ` repository + `_ repository - Post a message in the "LAMMPS Development" forum in the `MatSci Community Discourse `_ - Send an email to ``developers@lammps.org`` @@ -1025,13 +1053,15 @@ Even though the LAMMPS error message recommends to increase the "one" parameter, this may not always be the correct solution. The neighbor list overflow can also be a symptom for some other error that cannot be easily detected. For example, a frequent reason for an (unexpected) -high density are incorrect box boundaries (since LAMMPS wraps atoms back +high density are incorrect box dimensions (since LAMMPS wraps atoms back into the principal box with periodic boundaries) or coordinates provided -as fractional coordinates. In both cases, LAMMPS cannot easily know -whether the input geometry has such a high density (and thus requiring -more neighbor list storage per atom) by intention. Rather than blindly -increasing the "one" parameter, it is thus worth checking if this is -justified by the combination of density and cutoff. +as fractional coordinates (LAMMPS does not support this for data files). +In both cases, LAMMPS cannot easily know whether the input geometry has +such a high density (and thus requiring more neighbor list storage per +atom) on purpose or by accident. Rather than blindly increasing the +"one" parameter, it is thus worth checking if this is justified by the +combination of density and cutoff. This is particularly recommended +when using some tool(s) to convert input or data files. When boosting (= increasing) the "one" parameter, it is recommended to also increase the value for the "page" parameter to maintain the ratio diff --git a/doc/src/Errors_messages.rst b/doc/src/Errors_messages.rst index d1318ceffe..008ee26ae0 100644 --- a/doc/src/Errors_messages.rst +++ b/doc/src/Errors_messages.rst @@ -140,11 +140,6 @@ Please also see the page with :doc:`Warning messages `. 
Sum of atoms across processors does not equal initial total count. This is probably because you have lost some atoms. -*Atom in too many rigid bodies - boost MAXBODY* - Fix poems has a parameter MAXBODY (in fix_poems.cpp) which determines - the maximum number of rigid bodies a single atom can belong to (i.e. a - multibody joint). The bodies you have defined exceed this limit. - *Atom sort did not operate correctly* This is an internal LAMMPS error. Please report it to the developers. @@ -635,10 +630,6 @@ Please also see the page with :doc:`Warning messages `. The specified file cannot be opened. Check that the path and name are correct. -*Cannot open fix poems file %s* - The specified file cannot be opened. Check that the path and name are - correct. - *Cannot open fix print file %s* The output file generated by the fix print command cannot be opened @@ -1435,9 +1426,6 @@ Please also see the page with :doc:`Warning messages `. *Could not find fix group ID* A group ID used in the fix command does not exist. -*Could not find fix poems group ID* - A group ID used in the fix poems command does not exist. - *Could not find fix recenter group ID* A group ID used in the fix recenter command does not exist. @@ -1570,10 +1558,6 @@ Please also see the page with :doc:`Warning messages `. The command is accessing a vector added by the fix property/atom command, that does not exist. -*Cyclic loop in joint connections* - Fix poems cannot (yet) work with coupled bodies whose joints connect - the bodies in a ring (or cycle). - *Degenerate lattice primitive vectors* Invalid set of 3 lattice vectors for lattice command. @@ -2623,9 +2607,6 @@ Please also see the page with :doc:`Warning messages `. *Input line quote not followed by white-space* An end quote must be followed by white-space. -*Insufficient Jacobi rotations for POEMS body* - Eigensolve for rigid body was not sufficiently accurate. 
- *Insufficient Jacobi rotations for body nparticle* Eigensolve for rigid body was not sufficiently accurate. @@ -3641,10 +3622,6 @@ Please also see the page with :doc:`Warning messages `. *Overlapping small/large in pair colloid* This potential is infinite when there is an overlap. -*POEMS fix must come before NPT/NPH fix* - NPT/NPH fix must be defined in input script after all poems fixes, - else the fix contribution to the pressure virial is incorrect. - *PPPM can only currently be used with comm_style brick* This is a current restriction in LAMMPS. @@ -4382,11 +4359,6 @@ Please also see the page with :doc:`Warning messages `. to encompass the max distance printed when the fix rigid/small command was invoked. -*Rigid body has degenerate moment of inertia* - Fix poems will only work with bodies (collections of atoms) that have - non-zero principal moments of inertia. This means they must be 3 or - more non-collinear atoms, even with joint atoms removed. - *Rigid fix must come before NPT/NPH fix* NPT/NPH fix must be defined in input script after all rigid fixes, else the rigid fix contribution to the pressure virial is @@ -4712,9 +4684,6 @@ Please also see the page with :doc:`Warning messages `. The fix shake command cannot list more masses than there are atom types. -*Too many molecules for fix poems* - The limit is 2\^31 = ~2 billion molecules. - *Too many molecules for fix rigid* The limit is 2\^31 = ~2 billion molecules. @@ -4744,10 +4713,6 @@ Please also see the page with :doc:`Warning messages `. The number of bond, angle, etc types exceeds the system setting. See the create_box or read_data command for how to specify these values. -*Tree structure in joint connections* - Fix poems cannot (yet) work with coupled bodies whose joints connect - the bodies in a tree structure. - *Trying to build an occasional neighbor list before initialization completed* This is not allowed. Source code caller needs to be modified. 
diff --git a/doc/src/Errors_warnings.rst b/doc/src/Errors_warnings.rst index 3f18ddd2ca..0ef312a166 100644 --- a/doc/src/Errors_warnings.rst +++ b/doc/src/Errors_warnings.rst @@ -375,9 +375,6 @@ Please also see the page with :doc:`Error messages ` *More than one compute orientorder/atom* It is not efficient to use compute orientorder/atom more than once. -*More than one fix poems* - It is not efficient to use fix poems more than once. - *More than one fix rigid* It is not efficient to use fix rigid more than once. @@ -408,11 +405,6 @@ Please also see the page with :doc:`Error messages ` If you are not using a fix like nve, nvt, npt then atom velocities and coordinates will not be updated during timestepping. -*No joints between rigid bodies, use fix rigid instead* - The bodies defined by fix poems are not connected by joints. POEMS - will integrate the body motion, but it would be more efficient to use - fix rigid. - *Not using real units with pair reaxff* This is most likely an error, unless you have created your own ReaxFF parameter file in a different set of units. diff --git a/doc/src/Examples.rst b/doc/src/Examples.rst index d80fc8aa4c..322b11f1ab 100644 --- a/doc/src/Examples.rst +++ b/doc/src/Examples.rst @@ -43,11 +43,11 @@ Lists of both kinds of directories are given below. 
Lowercase directories --------------------- -+-------------+------------------------------------------------------------------+ -| accelerate | run with various acceleration options (OpenMP, GPU, Phi) | +-------------+------------------------------------------------------------------+ | airebo | polyethylene with AIREBO potential | +-------------+------------------------------------------------------------------+ +| amoeba | small water and bio models with AMOEBA and HIPPO potentials | ++-------------+------------------------------------------------------------------+ | atm | Axilrod-Teller-Muto potential example | +-------------+------------------------------------------------------------------+ | balance | dynamic load balancing, 2d system | @@ -78,14 +78,18 @@ Lowercase directories +-------------+------------------------------------------------------------------+ | ellipse | ellipsoidal particles in spherical solvent, 2d system | +-------------+------------------------------------------------------------------+ +| fire | examples for using minimization styles fire and quickmin | ++-------------+------------------------------------------------------------------+ | flow | Couette and Poiseuille flow in a 2d channel | +-------------+------------------------------------------------------------------+ | friction | frictional contact of spherical asperities between 2d surfaces | +-------------+------------------------------------------------------------------+ -| mc | Monte Carlo features via fix gcmc, widom and other commands | +| gjf | examples for Gronbech-Jensen thermostats with large time step | +-------------+------------------------------------------------------------------+ | granregion | use of fix wall/region/gran as boundary on granular particles | +-------------+------------------------------------------------------------------+ +| grid | use of commands which overlay grids on the simulation domain | 
++-------------+------------------------------------------------------------------+ | hugoniostat | Hugoniostat shock dynamics | +-------------+------------------------------------------------------------------+ | hyper | global and local hyperdynamics of diffusion on Pt surface | @@ -94,16 +98,22 @@ Lowercase directories +-------------+------------------------------------------------------------------+ | kim | use of potentials from the `OpenKIM Repository `_ | +-------------+------------------------------------------------------------------+ +| mc | Monte Carlo features via fix gcmc, widom and other commands | ++-------------+------------------------------------------------------------------+ | mdi | use of the MDI package and MolSSI MDI code coupling library | +-------------+------------------------------------------------------------------+ | meam | MEAM test for SiC and shear (same as shear examples) | +-------------+------------------------------------------------------------------+ | melt | rapid melt of 3d LJ system | +-------------+------------------------------------------------------------------+ +| mesh | create_atoms mesh command examples | ++-------------+------------------------------------------------------------------+ | micelle | self-assembly of small lipid-like molecules into 2d bilayers | +-------------+------------------------------------------------------------------+ | min | energy minimization of 2d LJ melt | +-------------+------------------------------------------------------------------+ +| mliap | examples for using several bundled ML-IAP potentials | ++-------------+------------------------------------------------------------------+ | msst | MSST shock dynamics | +-------------+------------------------------------------------------------------+ | multi | multi neighboring for systems with large interaction disparities | @@ -114,6 +124,8 @@ Lowercase directories 
+-------------+------------------------------------------------------------------+ | nemd | non-equilibrium MD of 2d sheared system | +-------------+------------------------------------------------------------------+ +| numdiff | get forces, virial, and Born matrix from numerical differences | ++-------------+------------------------------------------------------------------+ | obstacle | flow around two voids in a 2d channel | +-------------+------------------------------------------------------------------+ | peptide | dynamics of a small solvated peptide chain (5-mer) | @@ -130,7 +142,9 @@ Lowercase directories +-------------+------------------------------------------------------------------+ | rdf-adf | computing radial and angle distribution functions for water | +-------------+------------------------------------------------------------------+ -| reax | RDX and TATB models using the ReaxFF | +| reaxff | RDX and TATB models and more using ReaxFF | ++-------------+------------------------------------------------------------------+ +| replicate | use of replicate command | +-------------+------------------------------------------------------------------+ | rerun | use of rerun and read_dump commands | +-------------+------------------------------------------------------------------+ @@ -144,20 +158,32 @@ Lowercase directories +-------------+------------------------------------------------------------------+ | srd | stochastic rotation dynamics (SRD) particles as solvent | +-------------+------------------------------------------------------------------+ +| steinhardt | Steinhardt-Nelson Q_l and W_l parameters using orientorder/atom | ++-------------+------------------------------------------------------------------+ | streitz | use of Streitz/Mintmire potential with charge equilibration | +-------------+------------------------------------------------------------------+ | stress_vcm | removing binned rigid body motion from binned stress profile | 
+-------------+------------------------------------------------------------------+ | tad | temperature-accelerated dynamics of vacancy diffusion in bulk Si | +-------------+------------------------------------------------------------------+ +| tersoff | regression test input for Tersoff potential variants | ++-------------+------------------------------------------------------------------+ | threebody | regression test input for a variety of manybody potentials | +-------------+------------------------------------------------------------------+ | tracker | track interactions in LJ melt | +-------------+------------------------------------------------------------------+ +| triclinic | general triclinic simulation boxes versus orthogonal boxes | ++-------------+------------------------------------------------------------------+ +| ttm | two-temperature model examples | ++-------------+------------------------------------------------------------------+ | vashishta | use of the Vashishta potential | +-------------+------------------------------------------------------------------+ | voronoi | Voronoi tesselation via compute voronoi/atom command | +-------------+------------------------------------------------------------------+ +| wall | use of reflective walls with different stochastic models | ++-------------+------------------------------------------------------------------+ +| yaml | demonstrates use of yaml thermo and dump styles | ++-------------+------------------------------------------------------------------+ Here is how you can run and visualize one of the sample problems: @@ -207,10 +233,14 @@ Uppercase directories +------------+--------------------------------------------------------------------------------------------------+ | KAPPA | compute thermal conductivity via several methods | +------------+--------------------------------------------------------------------------------------------------+ +| LEPTON | use of fix efield/lepton | 
++------------+--------------------------------------------------------------------------------------------------+ | MC-LOOP | using LAMMPS in a Monte Carlo mode to relax the energy of a system in a input script loop | +------------+--------------------------------------------------------------------------------------------------+ | PACKAGES | examples for specific packages and contributed commands | +------------+--------------------------------------------------------------------------------------------------+ +| QUANTUM | how to use LAMMPS in tandem with several quantum codes via the MDI code coupling library | ++------------+--------------------------------------------------------------------------------------------------+ | SPIN | examples for features of the SPIN package | +------------+--------------------------------------------------------------------------------------------------+ | UNITS | examples that run the same simulation in lj, real, metal units | diff --git a/doc/src/Fortran.rst b/doc/src/Fortran.rst index 0a8434f63d..34bb5b9fed 100644 --- a/doc/src/Fortran.rst +++ b/doc/src/Fortran.rst @@ -69,10 +69,11 @@ statement. Internally, it will call either :cpp:func:`lammps_open_fortran` or :cpp:func:`lammps_open_no_mpi` from the C library API to create the class instance. All arguments are optional and :cpp:func:`lammps_mpi_init` will be called automatically -if it is needed. Similarly, a possible call to -:cpp:func:`lammps_mpi_finalize` is integrated into the :f:func:`close` -function and triggered with the optional logical argument set to -``.TRUE.``. Here is a simple example: +if it is needed. Similarly, optional calls to +:cpp:func:`lammps_mpi_finalize`, :cpp:func:`lammps_kokkos_finalize`, +:cpp:func:`lammps_python_finalize`, and :cpp:func:`lammps_plugin_finalize` +are integrated into the :f:func:`close` function and triggered with the +optional logical argument set to ``.TRUE.``. Here is a simple example: .. 
code-block:: fortran @@ -375,6 +376,8 @@ of the contents of the :f:mod:`LIBLAMMPS` Fortran interface to LAMMPS. :ftype get_os_info: subroutine :f config_has_mpi_support: :f:func:`config_has_mpi_support` :ftype config_has_mpi_support: function + :f config_has_omp_support: :f:func:`config_has_omp_support` + :ftype config_has_omp_support: function :f config_has_gzip_support: :f:func:`config_has_gzip_support` :ftype config_has_gzip_support: function :f config_has_png_support: :f:func:`config_has_png_support` @@ -521,8 +524,8 @@ Procedures Bound to the :f:type:`lammps` Derived Type This method will close down the LAMMPS instance through calling :cpp:func:`lammps_close`. If the *finalize* argument is present and has a value of ``.TRUE.``, then this subroutine also calls - :cpp:func:`lammps_kokkos_finalize` and - :cpp:func:`lammps_mpi_finalize`. + :cpp:func:`lammps_kokkos_finalize`, :cpp:func:`lammps_mpi_finalize`, + :cpp:func:`lammps_python_finalize`, and :cpp:func:`lammps_plugin_finalize`. :o finalize: shut down the MPI environment of the LAMMPS library if ``.TRUE.``. @@ -530,6 +533,8 @@ Procedures Bound to the :f:type:`lammps` Derived Type :to: :cpp:func:`lammps_close` :to: :cpp:func:`lammps_mpi_finalize` :to: :cpp:func:`lammps_kokkos_finalize` + :to: :cpp:func:`lammps_python_finalize` + :to: :cpp:func:`lammps_plugin_finalize` -------- @@ -2096,7 +2101,7 @@ Procedures Bound to the :f:type:`lammps` Derived Type -------- -.. f:subroutine:: create_atoms([id,] type, x, [v,] [image,] [bexpand]) +.. f:function:: create_atoms([id,] type, x, [v,] [image,] [bexpand]) This method calls :cpp:func:`lammps_create_atoms` to create additional atoms from a given list of coordinates and a list of atom types. Additionally, @@ -2125,6 +2130,8 @@ Procedures Bound to the :f:type:`lammps` Derived Type will be created, not dropped, and the box dimensions will be extended. 
Default is ``.FALSE.`` :otype bexpand: logical,optional + :r atoms: number of created atoms + :rtype atoms: integer(c_int) :to: :cpp:func:`lammps_create_atoms` .. note:: @@ -2149,6 +2156,18 @@ Procedures Bound to the :f:type:`lammps` Derived Type -------- +.. f:subroutine:: create_molecule(id, jsonstr) + + Add molecule template from string with JSON data + + .. versionadded:: 22Jul2025 + + :p character(len=\*) id: desired molecule-ID + :p character(len=\*) jsonstr: string with JSON data defining the molecule template + :to: :cpp:func:`lammps_create_molecule` + +-------- + .. f:function:: find_pair_neighlist(style[, exact][, nsub][, reqid]) Find index of a neighbor list requested by a pair style. @@ -2300,6 +2319,18 @@ Procedures Bound to the :f:type:`lammps` Derived Type -------- +.. f:function:: config_has_omp_support() + + This function is used to query whether LAMMPS was compiled with OpenMP enabled. + + .. versionadded:: 10Sep2025 + + :to: :cpp:func:`lammps_config_has_omp_support` + :r has_omp: ``.TRUE.`` when compiled with OpenMP enabled, ``.FALSE.`` if not. + :rtype has_omp: logical + +-------- + .. f:function:: config_has_gzip_support() Check if the LAMMPS library supports reading or writing compressed @@ -2389,7 +2420,7 @@ Procedures Bound to the :f:type:`lammps` Derived Type retrieved via :f:func:`get_last_error_message`. This allows to restart a calculation or delete and recreate the LAMMPS instance when a C++ exception occurs. One application of using exceptions this way - is the :ref:`lammps_gui`. 
+ is `LAMMPS-GUI `_. :to: :cpp:func:`lammps_config_has_exceptions` :r has_exceptions: diff --git a/doc/src/Howto.rst b/doc/src/Howto.rst index ec90472c27..71b17b333e 100644 --- a/doc/src/Howto.rst +++ b/doc/src/Howto.rst @@ -66,6 +66,7 @@ Force fields howto :name: force_howto :maxdepth: 1 + Howto_FFgeneral Howto_bioFF Howto_amoeba Howto_tip3p @@ -92,6 +93,7 @@ Packages howto Howto_manifold Howto_rheo Howto_spins + Howto_apip Tutorials howto =============== @@ -105,6 +107,5 @@ Tutorials howto Howto_lammps_gui Howto_moltemplate Howto_python - Howto_pylammps Howto_wsl diff --git a/doc/src/Howto_FFgeneral.rst b/doc/src/Howto_FFgeneral.rst new file mode 100644 index 0000000000..1b96ae1119 --- /dev/null +++ b/doc/src/Howto_FFgeneral.rst @@ -0,0 +1,55 @@ +Some general force field considerations +======================================= + +A compact summary of the concepts, definitions, and properties of force +fields with explicit bonded interactions (like the ones discussed in +this HowTo) is given in :ref:`(Gissinger) `. + +A force field has two parts: the formulas that define its potential +functions and the coefficients used for a particular system. To assign +parameters, one must first assign atom types. Those are based not +only on the elements, but also on the chemical environment created by +the atoms bound to them. This often follows the chemical concept of +*functional groups*. For example, a carbon atom bound with a single bond to +a single OH-group (alcohol) would be a different atom type than a carbon +atom bound to a methyl CH3 group (aliphatic carbon). The atom types +usually then determine the non-bonded Lennard-Jones parameters and the +parameters for bonds, angles, dihedrals, and impropers. On top of that, +partial charges have to be applied.
Those are usually independent of +the atom types and are determined either for groups of atoms called +residues with some fitting procedure based on quantum mechanical +calculations, or based on some increment system that adds or subtracts +increments from the partial charge of an atom based on the types of +the neighboring atoms. + +Force fields differ in the strategies they employ to determine the +parameters and charge distribution, and in how generic or specific they are, +which in turn has an impact on their accuracy (compare for example +CGenFF to CHARMM and GAFF to Amber). Because of the different +strategies, it is not a good idea to use a mix of parameters from +different force field *families* (like CHARMM, Amber, or GROMOS), +and that extends to the parameters for the solvent, especially +water. The publication describing the parameterization of a force +field will describe which water model to use. Changing the water +model usually leads to overall worse results (even if it may improve +on the water itself). + +In addition, one has to consider that *families* of force fields like +CHARMM, Amber, OPLS, or GROMOS have evolved over time and thus provide +different *revisions* of the force field parameters. These often +correspond to changes in the functional form or the parameterization +strategies. This may also result in changes to required simulation +settings like the preferred cutoff or how Coulomb interactions are +computed (cutoff, smoothed/shifted cutoff, or long-range with Ewald +summation or equivalent). Unless explicitly stated in the publication +describing the force field, the Coulomb interaction cannot be chosen at +will but must match the revision of the force field. That said, +liberties may be taken during the initial equilibration of a system to +speed up the process, but not for production simulations. + +---------- + +.. _Typelabel2: + +**(Gissinger)** J. R. Gissinger, I. Nikiforov, Y. Afshar, B. Waters, M. Choi, D. S. Karls, A. Stukowski, W.
Im, H. Heinz, A. Kohlmeyer, and E. B. Tadmor, J Phys Chem B, 128, 3282-3297 (2024). + diff --git a/doc/src/Howto_amoeba.rst b/doc/src/Howto_amoeba.rst index c927c28a1f..53254e4b9b 100644 --- a/doc/src/Howto_amoeba.rst +++ b/doc/src/Howto_amoeba.rst @@ -281,7 +281,8 @@ Here is more information about the extended XYZ format defined and used by Tinker, and links to programs that convert standard PDB files to the extended XYZ format: -* `https://openbabel.org/docs/current/FileFormats/Tinker_XYZ_format.html `_ +* `https://dasher.wustl.edu/tinker/distribution/doc/sphinx/tinker/_build/html/text/file-types.html `_ +* `https://openbabel.org/docs/FileFormats/Tinker_XYZ_format.html `_ * `https://github.com/emleddin/pdbxyz-xyzpdb `_ * `https://github.com/TinkerTools/tinker/blob/release/source/pdbxyz.f `_ diff --git a/doc/src/Howto_apip.rst b/doc/src/Howto_apip.rst new file mode 100644 index 0000000000..7f47c7cf25 --- /dev/null +++ b/doc/src/Howto_apip.rst @@ -0,0 +1,225 @@ +Adaptive-precision interatomic potentials (APIP) +================================================ + +The :ref:`PKG-APIP ` enables the use of adaptive-precision potentials +as described in :ref:`(Immel) `. +In the context of this package, precision refers to the accuracy of an interatomic +potential. + +Modern machine-learning (ML) potentials translate the accuracy of DFT +simulations into MD simulations, i.e., ML potentials are more accurate +than traditional empirical potentials. +However, this accuracy comes at a cost: there is a considerable performance +gap between the evaluation of classical and ML potentials, e.g., the force +calculation of a classical EAM potential is 100-1000 times faster than +that of the ML-based ACE method. +The evaluation time difference results in a conflict between large time and +length scales on the one hand and accuracy on the other. +This conflict is resolved by an APIP model in simulations where the highest precision +is required only locally but not globally.
+ +An APIP model uses a precise but +expensive ML potential only for a subset of atoms, while a fast +potential is used for the remaining atoms. +Whether the precise or the fast potential is used is determined +by a continuous switching parameter :math:`\lambda_i` that can be defined for each +atom :math:`i`. +The switching parameter can be adjusted dynamically during a simulation or +kept constant as explained below. + +The potential energy :math:`E_i` of an atom :math:`i` described by an +adaptive-precision +interatomic potential is given by :ref:`(Immel) ` + +.. math:: + + E_i = \lambda_i E_i^\text{(fast)} + (1-\lambda_i) E_i^\text{(precise)}, + +where :math:`E_i^\text{(fast)}` is the potential energy of atom :math:`i` +according to a fast interatomic potential, +:math:`E_i^\text{(precise)}` is the potential energy according to a precise +interatomic potential, and :math:`\lambda_i\in[0,1]` is the +switching parameter that decides how the potential energies are weighted. + +Adaptive precision saves computation time when the computation of the +precise potential is not required for many atoms, i.e., when +:math:`\lambda_i=1` applies for many atoms. + +The currently implemented potentials are: + +.. list-table:: + :header-rows: 1 + + * - Fast potential + - Precise potential + * - :doc:`ACE ` + - :doc:`ACE ` + * - :doc:`EAM ` + - + +In theory, any short-range potential can be used for an adaptive-precision +interatomic potential. How to implement a new (fast or precise) +adaptive-precision +potential is explained :ref:`here `. + +The switching parameter :math:`\lambda_i` that combines the two potentials +can be dynamically calculated during a +simulation. +Alternatively, one can set a constant switching parameter before the start +of a simulation. +To run a simulation with an adaptive-precision potential, one needs the +following components: + +.. tabs:: + + .. tab:: dynamic switching parameter + + #. 
:doc:`atom_style apip ` so that the switching parameter :math:`\lambda_i` can be stored. + #. A fast potential: :doc:`eam/apip ` or :doc:`pace/fast/apip `. + #. A precise potential: :doc:`pace/precise/apip `. + #. :doc:`pair_style lambda/input/apip ` to calculate :math:`\lambda_i^\text{input}`, from which :math:`\lambda_i` is calculated. + #. :doc:`fix lambda/apip ` to calculate the switching parameter :math:`\lambda_i`. + #. :doc:`pair_style lambda/zone/apip ` to calculate the spatial transition zone of the switching parameter. + #. :doc:`pair_style hybrid/overlay ` to combine the previously mentioned pair_styles. + #. :doc:`fix lambda_thermostat/apip ` to conserve the energy when switching parameters change. + #. :doc:`fix atom_weight/apip ` to approximate the load caused by every atom, as the computations of the pair_styles are only required for a subset of atoms. + #. :doc:`fix balance ` to perform dynamic load balancing with the calculated load. + + .. tab:: constant switching parameter + + #. :doc:`atom_style apip ` so that the switching parameter :math:`\lambda_i` can be stored. + #. A fast potential: :doc:`eam/apip ` or :doc:`pace/fast/apip `. + #. A precise potential: :doc:`pace/precise/apip `. + #. :doc:`set ` command to set the switching parameter :math:`\lambda_i`. + #. :doc:`pair_style hybrid/overlay ` to combine the previously mentioned pair_styles. + #. :doc:`fix atom_weight/apip ` to approximate the load caused by every atom, as the computations of the pair_styles are only required for a subset of atoms. + #. :doc:`fix balance ` to perform dynamic load balancing with the calculated load. + +---------- + +Example +""""""" +.. note:: + + How to select the values of the parameters of an adaptive-precision + interatomic potential is discussed in detail in :ref:`(Immel) `. + + +.. tabs:: + + .. tab:: dynamic switching parameter + + Lines like these would appear in the input script: + + + .. 
code-block:: LAMMPS + + atom_style apip + comm_style tiled + + pair_style hybrid/overlay eam/fs/apip pace/precise/apip lambda/input/csp/apip fcc cutoff 5.0 lambda/zone/apip 12.0 + pair_coeff * * eam/fs/apip Cu.eam.fs Cu + pair_coeff * * pace/precise/apip Cu.yace Cu + pair_coeff * * lambda/input/csp/apip + pair_coeff * * lambda/zone/apip + + fix 2 all lambda/apip 2.5 3.0 time_averaged_zone 4.0 12.0 110 110 min_delta_lambda 0.01 + fix 3 all lambda_thermostat/apip N_rescaling 200 + fix 4 all atom_weight/apip 100 eam ace lambda/input lambda/zone all + + variable myweight atom f_4 + + fix 5 all balance 100 1.1 rcb weight var myweight + + First, the :doc:`atom_style apip ` and the communication style are set. + + .. note:: + Note that :doc:`comm_style ` *tiled* is required for the style *rcb* of + :doc:`fix balance `, but not for APIP. + However, the flexibility offered by the balancing style *rcb*, compared to the + balancing style *shift*, is advantageous for APIP. + + An adaptive-precision EAM-ACE potential, for which the switching parameter + :math:`\lambda` is calculated from the CSP, is defined via + :doc:`pair_style hybrid/overlay `. + The fixes ensure that the switching parameter is calculated, the energy is conserved, + the weight for the load balancing is computed, and the load balancing itself is performed. + + .. tab:: constant switching parameter + + Lines like these would appear in the input script: + + .. code-block:: LAMMPS + + atom_style apip + comm_style tiled + + pair_style hybrid/overlay eam/fs/apip pace/precise/apip + pair_coeff * * eam/fs/apip Cu.eam.fs Cu + pair_coeff * * pace/precise/apip Cu.yace Cu + + # calculate lambda somehow + variable lambda atom ... + set group all apip/lambda v_lambda + + fix 4 all atom_weight/apip 100 eam ace lambda/input lambda/zone all + + variable myweight atom f_4 + + fix 5 all balance 100 1.1 rcb weight var myweight + + First, the :doc:`atom_style apip ` and the communication style are set. + + .. 
note:: + Note, that :doc:`comm_style ` *tiled* is required for the style *rcb* of + :doc:`fix balance `, but not for APIP. + However, the flexibility offered by the balancing style *rcb*, compared to the + balancing style *shift*, is advantageous for APIP. + + An adaptive-precision EAM-ACE potential is defined via + :doc:`pair_style hybrid/overlay `. + The switching parameter :math:`\lambda_i` of the adaptive-precision + EAM-ACE potential is set via the :doc:`set command `. + The parameter is not updated during the simulation. + Therefore, the potential is conservative. + The fixes ensure that the weight for the load balancing is calculated + and the load-balancing itself is done. + +---------- + +.. _implementing_new_apip_styles: + +Implementing new APIP pair styles +""""""""""""""""""""""""""""""""" + +One can introduce adaptive-precision to an existing pair style by modifying +the original pair style. +One should calculate the force +:math:`F_i = - \nabla_i \sum_j E_j^\text{original}` for a fast potential or +:math:`F_i = - (1-\nabla_i) \sum_j E_j^\text{original}` for a precise +potential from the original potential +energy :math:`E_j^\text{original}` to see where the switching parameter +:math:`\lambda_i` needs to be introduced in the force calculation. +The switching parameter :math:`\lambda_i` is known for all atoms :math:`i` +in force calculation routine. +One needs to introduce an abortion criterion based on :math:`\lambda_i` to +ensure that all not required calculations are skipped and compute time can +be saved. +Furthermore, one needs to provide the number of calculations and measure the +computation time. +Communication within the force calculation needs to be prevented to allow +effective load-balancing. +With communication, the load balancer cannot balance few calculations of the +precise potential on one processor with many computations of the fast +potential on another processor. 
+ +All changes in the pair_style pace/apip compared to the pair_style pace +are annotated and commented. +Thus, the pair_style pace/apip can serve as an example for the implementation +of new adaptive-precision potentials. + +---------- + +.. _Immel2025_1: + +**(Immel)** Immel, Drautz and Sutmann, J Chem Phys, 162, 114119 (2025) diff --git a/doc/src/Howto_bioFF.rst b/doc/src/Howto_bioFF.rst index 92dd45b9b6..cf8e4ab13e 100644 --- a/doc/src/Howto_bioFF.rst +++ b/doc/src/Howto_bioFF.rst @@ -1,22 +1,16 @@ CHARMM, AMBER, COMPASS, DREIDING, and OPLS force fields ======================================================= -A compact summary of the concepts, definitions, and properties of -force fields with explicit bonded interactions (like the ones discussed -in this HowTo) is given in :ref:`(Gissinger) `. - -A force field has 2 parts: the formulas that define it and the -coefficients used for a particular system. Here we only discuss -formulas implemented in LAMMPS that correspond to formulas commonly used -in the CHARMM, AMBER, COMPASS, and DREIDING force fields. Setting -coefficients is done either from special sections in an input data file -via the :doc:`read_data ` command or in the input script with -commands like :doc:`pair_coeff ` or :doc:`bond_coeff -` and so on. See the :doc:`Tools ` doc page for -additional tools that can use CHARMM, AMBER, or Materials Studio -generated files to assign force field coefficients and convert their -output into LAMMPS input. LAMMPS input scripts can also be generated by -`charmm-gui.org `_. +Here we only discuss formulas implemented in LAMMPS that correspond to +formulas commonly used in the CHARMM, AMBER, COMPASS, and DREIDING force +fields. Setting coefficients is done either from special sections in an +input data file via the :doc:`read_data ` command or in the +input script with commands like :doc:`pair_coeff ` or +:doc:`bond_coeff ` and so on. 
See the :doc:`Tools ` +doc page for additional tools that can use CHARMM, AMBER, or Materials +Studio generated files to assign force field coefficients and convert +their output into LAMMPS input. LAMMPS input scripts can also be +generated by `charmm-gui.org `_. CHARMM and AMBER ---------------- @@ -203,9 +197,11 @@ rather than individual force constants and geometric parameters that depend on the particular combinations of atoms involved in the bond, angle, or torsion terms. DREIDING has an :doc:`explicit hydrogen bond term ` to describe interactions involving a -hydrogen atom on very electronegative atoms (N, O, F). Unlike CHARMM -or AMBER, the DREIDING force field has not been parameterized for -considering solvents (like water). +hydrogen atom on very electronegative atoms (N, O, F). Unlike CHARMM or +AMBER, the DREIDING force field has not been parameterized for +considering solvents (like water) and has no rules for assigning +(partial) charges. That will seriously limit its accuracy when used for +simulating systems where those matter. See :ref:`(Mayo) ` for a description of the DREIDING force field @@ -272,10 +268,6 @@ compatible with a subset of OPLS interactions. ---------- -.. _Typelabel2: - -**(Gissinger)** J. R. Gissinger, I. Nikiforov, Y. Afshar, B. Waters, M. Choi, D. S. Karls, A. Stukowski, W. Im, H. Heinz, A. Kohlmeyer, and E. B. Tadmor, J Phys Chem B, 128, 3282-3297 (2024). - .. _howto-MacKerell: **(MacKerell)** MacKerell, Bashford, Bellott, Dunbrack, Evanseck, Field, Fischer, Gao, Guo, Ha, et al (1998). J Phys Chem, 102, 3586 . https://doi.org/10.1021/jp973084f diff --git a/doc/src/Howto_bpm.rst b/doc/src/Howto_bpm.rst index f632ee6172..b99c6c4b1e 100644 --- a/doc/src/Howto_bpm.rst +++ b/doc/src/Howto_bpm.rst @@ -85,26 +85,42 @@ files to render bond data. ---------- -As bonds can be broken between neighbor list builds, the -:doc:`special_bonds ` command works differently for BPM -bond styles. 
There are two possible settings which determine how pair -interactions work between bonded particles. First, one can overlay -pair forces with bond forces such that all bonded particles also -feel pair interactions. This can be accomplished by setting the *overlay/pair* -keyword present in all bpm bond styles to *yes* and requires using the -following special bond settings +As bonds can potentially be broken between neighbor list builds, BPM bond +styles may place restrictions on the :doc:`special_bonds ` command. There are three possible scenarios that determine how pair +interactions between bonded particles and special bond weights work. + +The first scenario is the simplest. If bonds cannot break, then one can use any +special bond settings to control pair forces. This is accomplished by +setting the *break* keyword to *no*. Note that a zero coul weight for 1-2 bonds +can be used to exclude bonded atoms from the neighbor list builds + + .. code-block:: LAMMPS + + special_bonds lj 0 1 1 coul 0 1 1 + +This can be useful for post-processing, or to determine pair interaction +properties between distinct bonded particles. + +In the second scenario, bonds can break and pair forces are overlaid +on top of bond forces such that atoms can simultaneously exchange both types +of forces. This is accomplished by setting the *overlay/pair* keyword present +in all bpm bond styles to *yes*. This case requires the following special +bond settings .. code-block:: LAMMPS special_bonds lj/coul 1 1 1 -Alternatively, one can turn off all pair interactions between bonded -particles. Unlike :doc:`bond quartic `, this is not done -by subtracting pair forces during the bond computation, but rather by -dynamically updating the special bond list. This is the default behavior -of BPM bond styles and is done by updating the 1-2 special bond list as -bonds break. To do this, LAMMPS requires :doc:`newton ` bond off -such that all processors containing an atom know when a bond breaks. 
+Note that this scenario does not update special bond lists when bonds break, +which is why fractional weights are not allowed. Whether or not two particles +are bonded has no bearing on pair forces. + +In the third scenario, bonds can break but pair forces are disabled between +bonded particles. This is the default behavior of BPM bond styles. Unlike +:doc:`bond quartic `, pair forces are not removed by subtracting +them during the bond computation, but rather by dynamically updating the +1-2 special bond list. To do this, LAMMPS requires :doc:`newton ` bond +off such that all processors containing an atom know when a bond breaks. Additionally, one must use the following special bond settings @@ -112,23 +128,13 @@ Additionally, one must use the following special bond settings special_bonds lj 0 1 1 coul 1 1 1 These settings accomplish two goals. First, they turn off 1-3 and 1-4 -special bond lists, which are not currently supported for BPMs. As -BPMs often have dense bond networks, generating 1-3 and 1-4 special -bond lists is expensive. By setting the lj weight for 1-2 bonds to -zero, this turns off pairwise interactions. Even though there are no -charges in BPM models, setting a nonzero coul weight for 1-2 bonds +special bond lists, which are not currently supported for breakable BPMs. +As BPMs often have dense bond networks, generating/updating 1-3 and 1-4 +special bond lists can be expensive. Setting the lj weight for 1-2 +bonds to zero turns off pairwise interactions. Even though there +are no charges in BPM models, setting a nonzero coul weight for 1-2 bonds ensures all bonded neighbors are still included in the neighbor list -in case bonds break between neighbor list builds. If bond breakage is -disabled during a simulation run by setting the *break* keyword to *no*, -a zero coul weight for 1-2 bonds can be used to exclude bonded atoms -from the neighbor list builds - - .. 
code-block:: LAMMPS - - special_bonds lj 0 1 1 coul 0 1 1 - -This can be useful for post-processing, or to determine pair interaction -properties between distinct bonded particles. +in case bonds break between neighbor list builds. To monitor the fracture of bonds in the system, all BPM bond styles have the ability to record instances of bond breakage to output using diff --git a/doc/src/Howto_cmake.rst b/doc/src/Howto_cmake.rst index 64acee47dd..55f3c9f4ad 100644 --- a/doc/src/Howto_cmake.rst +++ b/doc/src/Howto_cmake.rst @@ -27,13 +27,15 @@ selected examples. Please see the chapter about :doc:`building LAMMPS ` for descriptions of specific flags and options for LAMMPS in general and for specific packages. +.. versionchanged:: 10Sep2025 + CMake can be used through either the command-line interface (CLI) program ``cmake`` (or ``cmake3``), a text mode interactive user interface (TUI) program ``ccmake`` (or ``ccmake3``), or a graphical user interface (GUI) program ``cmake-gui``. All of them are portable software available on all supported platforms and can be used -interchangeably. As of LAMMPS version 2 August 2023, the minimum -required CMake version is 3.16. +interchangeably. Since LAMMPS version 10Sep2025, the minimum +required CMake version is 3.20. All details about features and settings for CMake are in the `CMake online documentation `_. We focus diff --git a/doc/src/Howto_couple.rst b/doc/src/Howto_couple.rst index 7e91cd59c2..ecde863dd1 100644 --- a/doc/src/Howto_couple.rst +++ b/doc/src/Howto_couple.rst @@ -18,7 +18,7 @@ the context of your application. make library calls to the other code, which has been linked to LAMMPS as a library. This is the way the :ref:`VORONOI ` package, which computes Voronoi tesselations using the `Voro++ - library `_, is interfaced to LAMMPS. See + library `_, is interfaced to LAMMPS. See the :doc:`compute voronoi ` command for more details. 
Also see the :doc:`Modify ` pages for information on how to add a new fix or compute to LAMMPS. diff --git a/doc/src/Howto_lammps_gui.rst b/doc/src/Howto_lammps_gui.rst index 592e67abc5..40aec529ce 100644 --- a/doc/src/Howto_lammps_gui.rst +++ b/doc/src/Howto_lammps_gui.rst @@ -1,1067 +1,12 @@ Using LAMMPS-GUI ================ -LAMMPS-GUI is a graphical text editor programmed using the `Qt Framework -`_ and customized for editing LAMMPS input files. It -is linked to the :ref:`LAMMPS library ` and thus can run -LAMMPS directly using the contents of the editor's text buffer as input. - -It *differs* from other known interfaces to LAMMPS in that it can -retrieve and display information from LAMMPS *while it is running*, -display visualizations created with the :doc:`dump image command -`, can launch the online LAMMPS documentation for known -LAMMPS commands and styles, and directly integrates with a collection -of LAMMPS tutorials (:ref:`Gravelle1 `). - -This document describes **LAMMPS-GUI version 1.6**. - ------ - -.. contents:: - ----- - -LAMMPS-GUI tries to provide an experience similar to what people -traditionally would have running LAMMPS using a command-line window and -the console LAMMPS executable but just rolled into a single executable: - -- writing & editing LAMMPS input files with a text editor -- run LAMMPS on those input file with selected command-line flags -- extract data from the created files and visualize it with and - external software - -That procedure is quite effective for people proficient in using the -command-line, as that allows them to use tools for the individual steps -that they are most comfortable with. In fact, it is often *required* to -adopt this workflow when running LAMMPS simulations on high-performance -computing facilities. 
- -The main benefit of using LAMMPS-GUI is that many basic tasks can be -done directly from the GUI **without** switching to a text console -window or using external programs, let alone writing scripts to extract -data from the generated output. It also integrates well with graphical -desktop environments where the `.lmp` filename extension can be -registered with LAMMPS-GUI as the executable to launch when double -clicking on such files. Also, LAMMPS-GUI has support for drag-n-drop, -i.e. an input file can be selected and then moved and dropped on the -LAMMPS-GUI executable, and LAMMPS-GUI will launch and read the file into -its buffer. In many cases LAMMPS-GUI will be integrated into the -graphical desktop environment and can be launched like other -applications. - -LAMMPS-GUI thus makes it easier for beginners to get started running -simple LAMMPS simulations. It is very suitable for tutorials on LAMMPS -since you only need to learn how to use a single program for most tasks -and thus time can be saved and people can focus on learning LAMMPS. -The tutorials at https://lammpstutorials.github.io/ are specifically -updated for use with LAMMPS-GUI and their tutorial materials can -be downloaded and edited directly from the GUI. - -Another design goal is to keep the barrier low when replacing part of -the functionality of LAMMPS-GUI with external tools. That said, LAMMPS-GUI -has some unique functionality that is not found elsewhere: - -- auto-adapting to features available in the integrated LAMMPS library -- auto-completion for LAMMPS commands and options -- context-sensitive online help -- start and stop of simulations via mouse or keyboard -- monitoring of simulation progress -- interactive visualization using the :doc:`dump image ` - command with the option to copy-paste the resulting settings -- automatic slide show generation from dump image output at runtime -- automatic plotting of thermodynamic data at runtime -- inspection of binary restart files - -.. 
admonition:: Download LAMMPS-GUI for your platform - :class: Hint - - Pre-compiled, ready-to-use LAMMPS-GUI executables for Linux x86\_64 - (Ubuntu 20.04LTS or later and compatible), macOS (version 11 aka Big - Sur or later), and Windows (version 10 or later) :ref:`are available - ` for download. Non-MPI LAMMPS executables (as - ``lmp``) for running LAMMPS from the command-line and :doc:`some - LAMMPS tools ` compiled executables are also included. Also, - the pre-compiled LAMMPS-GUI packages include the WHAM executables - from http://membrane.urmc.rochester.edu/content/wham/ for use with - LAMMPS tutorials documented in this paper (:ref:`Gravelle1 - `). - - The source code for LAMMPS-GUI is included in the LAMMPS source code - distribution and can be found in the ``tools/lammps-gui`` folder. It - can be compiled alongside LAMMPS when :doc:`compiling with CMake - `. - ------ - -The following text provides a detailed tour of the features and -functionality of LAMMPS-GUI. Suggestions for new features and -reports of bugs are always welcome. You can use the :doc:`the same -channels as for LAMMPS itself ` for that purpose. - ------ - -Installing Pre-compiled LAMMPS-GUI Packages -------------------------------------------- - -LAMMPS-GUI is available for download as pre-compiled binary packages for -Linux x86\_64 (Ubuntu 20.04LTS or later and compatible), macOS (version -11 aka Big Sur or later), and Windows (version 10 or later) from the -`LAMMPS release pages on GitHub `_. -A backup download location is at https://download.lammps.org/static/ -Alternately, LAMMPS-GUI can be compiled from source when building LAMMPS. - -Windows 10 and later -^^^^^^^^^^^^^^^^^^^^ - -After downloading the ``LAMMPS-Win10-64bit-GUI-.exe`` installer -package, you need to execute it, and start the installation process. -Since those packages are currently unsigned, you have to enable "Developer Mode" -in the Windows System Settings to run the installer. 
- -MacOS 11 and later -^^^^^^^^^^^^^^^^^^ - -After downloading the ``LAMMPS-macOS-multiarch-GUI-.dmg`` -application bundle disk image, you need to double-click it and then, in -the window that opens, drag the app bundle as indicated into the -"Applications" folder. Afterwards, the disk image can be unmounted. -Then follow the instructions in the "README.txt" file to get access to -the other included command-line executables. - -Linux on x86\_64 -^^^^^^^^^^^^^^^^ - -For Linux with x86\_64 CPU there are currently two variants. The first -is compiled on Ubuntu 20.04LTS, is using some wrapper scripts, and -should be compatible with more recent Linux distributions. After -downloading and unpacking the -``LAMMPS-Linux-x86_64-GUI-.tar.gz`` package. You can switch -into the "LAMMPS_GUI" folder and execute "./lammps-gui" directly. - -The second variant uses `flatpak `_ and -requires the flatpak management and runtime software to be installed. -After downloading the ``LAMMPS-GUI-Linux-x86_64-GUI-.flatpak`` -flatpak bundle, you can install it with ``flatpak install --user -LAMMPS-GUI-Linux-x86_64-GUI-.flatpak``. After installation, -LAMMPS-GUI should be integrated into your desktop environment under -"Applications > Science" but also can be launched from the console with -``flatpak run org.lammps.lammps-gui``. The flatpak bundle also includes -the console LAMMPS executable ``lmp`` which can be launched to run -simulations with, for example with: - -.. code-block:: sh - - flatpak run --command=lmp org.lammps.lammps-gui -in in.melt - -Other bundled command-line executables are run the same way and can be -listed with: - -.. code-block:: sh - - ls $(flatpak info --show-location org.lammps.lammps-gui )/files/bin - - -Compiling from Source -^^^^^^^^^^^^^^^^^^^^^ - -There also are instructions for :ref:`compiling LAMMPS-GUI from source -code ` available elsewhere in the manual. -Compilation from source *requires* using CMake. 

-----

Starting LAMMPS-GUI
-------------------

When LAMMPS-GUI starts, it shows the main window, labeled *Editor*, with
either an empty buffer or the contents of the file passed as an
argument.  In the latter case it may look like the following:

.. |gui-main1| image:: JPG/lammps-gui-main.png
   :width: 48%

.. |gui-main2| image:: JPG/lammps-gui-dark.png
   :width: 48%

|gui-main1| |gui-main2|

There is the typical menu bar at the top, then the main editor buffer,
and a status bar at the bottom.  The input file contents are shown with
line numbers on the left, and the input is colored according to the
LAMMPS input file syntax.  The status bar shows the status of the LAMMPS
execution on the left (e.g. "Ready." when idle) and the current working
directory on the right.  The name of the current file in the buffer is
shown in the window title; the word *modified* is added if the buffer
edits have not yet been saved to a file.  The geometry of the main
window is stored when exiting and restored when starting again.

Opening Files
^^^^^^^^^^^^^

The LAMMPS-GUI application can be launched without command-line
arguments and then starts with an empty buffer in the *Editor* window.
If arguments are given, LAMMPS-GUI uses the first command-line argument
as the file name for the *Editor* buffer and reads its contents into the
buffer, if the file exists.  All further arguments are ignored.  Files
can also be opened via the *File* menu, the `Ctrl-O` (`Command-O` on
macOS) keyboard shortcut, or by drag-and-drop of a file from a graphical
file manager into the editor window.  If a file extension
(e.g. ``.lmp``) has been registered with the graphical environment to
launch LAMMPS-GUI, an existing input file can be opened in LAMMPS-GUI by
double-clicking it.

Only one file can be edited at a time, so opening a new file with a
filled buffer closes that buffer.
If the buffer has unsaved
modifications, you are asked to either cancel the operation, discard the
changes, or save them.  A buffer with modifications can be saved at any
time from the "File" menu, by the keyboard shortcut `Ctrl-S`
(`Command-S` on macOS), or by clicking on the "Save" button at the very
left of the status bar.

Running LAMMPS
^^^^^^^^^^^^^^

From within the LAMMPS-GUI main window, LAMMPS can be started either
from the *Run* menu using the *Run LAMMPS from Editor Buffer* entry, by
the keyboard shortcut `Ctrl-Enter` (`Command-Enter` on macOS), or by
clicking on the green "Run" button in the status bar.  All of these
operations cause LAMMPS to process the entire input script in the editor
buffer, which may contain multiple :doc:`run ` or :doc:`minimize `
commands.

LAMMPS runs in a separate thread, so the GUI stays responsive and is
able to interact with the running calculation and access the data it
produces.  It is important to note that running LAMMPS this way uses the
contents of the input buffer for the run (via the
:cpp:func:`lammps_commands_string()` function of the LAMMPS C-library
interface), and **not** the original file it was read from.  Thus, if
there are unsaved changes in the buffer, they *will* be used.  As an
alternative, it is also possible to run LAMMPS by reading the contents
of a file via the *Run LAMMPS from File* menu entry or with
`Ctrl-Shift-Enter`.  This option may be required in some rare cases
where the input uses functionality that is not compatible with running
LAMMPS from a string buffer.  For consistency, any unsaved changes in
the buffer must be either saved to the file or undone before LAMMPS can
be run from a file.

.. image:: JPG/lammps-gui-running.png
   :align: center
   :scale: 75%

While LAMMPS is running, the contents of the status bar change.
On
the left side there is text indicating that LAMMPS is running, which
also shows the number of active threads when thread-parallel
acceleration was selected in the *Preferences* dialog.  On the right
side, a progress bar is shown that displays the estimated progress for
the current :doc:`run ` or :doc:`minimize ` command.

Also, the line number of the currently executed command is highlighted
in green.

If an error occurs (in the example below the command :doc:`label