2 changes: 1 addition & 1 deletion docs/cheaha/slurm/gpu.md
@@ -138,7 +138,7 @@ As of 2025-02-25, we offer cuDNN modules for CUDA up to `12.3.0`. If you need a

### CUDA Compute Capability and Known Issues

-GPU-based software requires a compatible [CUDA Compute Capability](../slurm/gpu.md#available-devices) to function correctly. Each GPU card has a fixed CUDA Compute Capability version. For the software to run as expected, this version must be at least as large as the minimum CUDA Compute Capability required by the software; otherwise, the software will fail to run as expected, often resulting in runtime errors. Some of the known issues are reported in the [FAQ Entry](#frequently-asked-questions-faq-about-a100-gpus). For more information on CUDA Compute Capability please see the [official documentation](https://developer.nvidia.com/cuda-gpus).
+GPU-based software requires a compatible [CUDA Compute Capability](../slurm/gpu.md#available-devices) to function correctly. Each GPU card has a fixed CUDA Compute Capability version. For the software to run as expected, this version must be at least as large as the minimum CUDA Compute Capability required by the software; otherwise, the software will fail to run as expected, often resulting in runtime errors. Some of the known issues are reported in the [FAQ Entry](#frequently-asked-questions-faq-about-a100-gpus). For more information on CUDA Compute Capability please see the [official documentation](https://developer.nvidia.com/cuda/gpus).
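The "at least as large" rule in the paragraph above is a simple ordered comparison of major/minor versions. A minimal sketch (the A100's compute capability of 8.0 is real; the required versions are made-up examples):

```python
def meets_minimum(device_cc: tuple, required_cc: tuple) -> bool:
    """True if the card's compute capability satisfies the software's minimum."""
    # Tuple comparison checks the major version first, then the minor version.
    return device_cc >= required_cc

# An A100 has compute capability 8.0. Software requiring at least 7.0 runs;
# software requiring at least 9.0 fails, typically with runtime errors.
print(meets_minimum((8, 0), (7, 0)))  # True
print(meets_minimum((8, 0), (9, 0)))  # False
```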

<!-- markdownlint-disable MD046 -->
!!! note
4 changes: 2 additions & 2 deletions docs/cheaha/slurm/slurm_tutorial.md
@@ -948,9 +948,9 @@ Program PWSCF v.6.3MaX starts on 8Mar2024 at 13:18:37
for quantum simulation of materials; please cite
"P. Giannozzi et al., J. Phys.:Condens. Matter 21 395502 (2009);
"P. Giannozzi et al., J. Phys.:Condens. Matter 29 465901 (2017);
-URL http://www.quantum-espresso.org",
+URL https://www.quantum-espresso.org",
in publications or presentations arising from this work. More details at
-http://www.quantum-espresso.org/quote
+https://www.quantum-espresso.org/quote

Parallel version (MPI & OpenMP), running on 4 processor cores
Number of MPI processes: 4
10 changes: 8 additions & 2 deletions docs/contributing/contributor_guide.md
@@ -132,7 +132,7 @@ Style is not automated at this time as the cost is greater than the benefit. Ent

Style is not automated at this time as the cost is greater than the benefit. Entires in the following keys should be sorted alphabetically.

-### News Blog Posts
+#### News Blog Posts

News updates should be added as separate Markdown files in the `/docs/news/` directory. Except for the considerations below, all blog posts should be written using the ordinary article style for this guide.

@@ -153,6 +153,12 @@ categories:
- The `categories` field is a sequence of category labels. These must be selected from the `plugins: blog: categories_allowed:` field of `mkdocs.yml`.
- If you need to add a new category, be sure to give it appropriate title case, or there will be an error during build.

+#### Conventions
+
+##### External URLs
+
+- Use only `https://`, never `http://`, unless absolutely required (this is rare).

### Development

The workflow below assumes you are using VSCode and all of the prerequisites listed above. Some familiarity with git and GitHub are assumed.
@@ -235,7 +241,7 @@ You'll need to add, remove or otherwise modify files as appropriate to implement

![!example mkdocs serve usage](images/contrib-workflow-mkdocs-serve.png)

-1. If a new browser tab does not open automatically, use your browser to navigate to `http://localhost:8000`.
+1. If a new browser tab does not open automatically, use your browser to navigate to `https://localhost:8000`.
1. Ensure your changes look and function as expected.

![!browser with changes made](images/contrib-workflow-verify-changes-in-browser.png)
@@ -18,7 +18,7 @@ The following software are known to use `/tmp/` by default, and can be worked ar

- [Java](https://docs.oracle.com/cd/E63231_01/doc/BIAIN/GUID-94C6B992-1488-4FC7-85EC-91E410D6E7D1.htm#BIAIN-GUID-94C6B992-1488-4FC7-85EC-91E410D6E7D1): `java * -Djava.io.tmpdir=/local/$USER/$SLURM_JOB_ID`
- [UMI Tools](https://umi-tools.readthedocs.io/en/latest/common_options.html): `umi_tools * --temp-dir=/local/$USER/SLURM_JOB_ID`
-- [Samtools Sort](http://www.htslib.org/doc/samtools-sort.html): `samtools sort * -T /local/$USER/$SLURM_JOB_ID`
+- [Samtools Sort](https://www.htslib.org/doc/samtools-sort.html): `samtools sort * -T /local/$USER/$SLURM_JOB_ID`
- [GATK Tool](https://gatk.broadinstitute.org/hc/en-us/community/posts/360072269012--tmp-dir-option-user-error): `gatk --java-options * --tmp-dir /local/$USER/$SLURM_JOB_ID`
- [NVIDIA Clara Parabricks](https://docs.nvidia.com/clara/parabricks/latest/gettingstarted.html): `pbrun * --tmp-dir=/local/$USER/$SLURM_JOB_ID`.
- [FastQC](https://home.cc.umanitoba.ca/~psgendb/doc/fastqc.help): `fastqc * -d /local/$USER/$SLURM_JOB_ID`
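Every flag in the list above points its tool at the same per-job directory. A small sketch of how that path is assembled (`12345` and `someuser` are stand-ins used when Slurm's environment variables are unset, e.g. outside a job):

```python
import os
from pathlib import Path

# Build the per-job local scratch path used by the flags above.
# Slurm sets SLURM_JOB_ID inside a job; "12345" is a stand-in for illustration.
job_id = os.environ.get("SLURM_JOB_ID", "12345")
user = os.environ.get("USER", "someuser")
scratch = Path("/local") / user / job_id
print(scratch)  # e.g. /local/someuser/12345

# Many tools also honor TMPDIR, which can complement the per-tool flags.
os.environ["TMPDIR"] = str(scratch)
```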
4 changes: 2 additions & 2 deletions docs/policies.md
@@ -23,10 +23,10 @@ All use of AI-powered tools, whether inside IDEs or via web interfaces, must com
Do not use any cloud-based AI service (where data is sent to external servers) with sensitive or restricted data unless you’ve received formal approval.

Examples of prohibited services without approval include:
-- Web Chat AI Tools: [ChatGPT](http://chatgpt.com), [Gemini](https://gemini.google.com), [Claude](https://claude.ai), [Grok](https://grok.com), [DeepSeek](https://www.deepseek.com).
+- Web Chat AI Tools: [ChatGPT](https://chatgpt.com), [Gemini](https://gemini.google.com), [Claude](https://claude.ai/login), [Grok](https://grok.com), [DeepSeek](https://www.deepseek.com).
- IDE Extensions and AI Assistants: [Cursor](https://cursor.com), [Cline](https://cline.bot), [Windsurf](https://windsurf.com), [Trae](https://www.trae.ai).

-To request approval for AI usage in your project, complete the [UAB IT AI Request Form](https://uabprod.service-now.com/now/nav/ui/classic/params/target/com.glideapp.servicecatalog_cat_item_view.do%3Fv%3D1%26sysparm_id%3D421769291b8502506bd68552604bcba5).
+To request approval for AI usage in your project, complete the [UAB IT AI Request Form](https://uabprod.service-now.com/service_portal?id=sc_cat_item&sys_id=421769291b8502506bd68552604bcba5).

The following AI tools are permitted for use with UAB Single Sign-On (SSO) credentials, as they offer enterprise-grade protections for institutional data:
- [Copilot Web](https://copilot.microsoft.com/)
2 changes: 1 addition & 1 deletion docs/workflow_solutions/using_anaconda.md
@@ -376,7 +376,7 @@ dependencies:
- pip:
- numpy==1.26.4 # Pinned version for pip
- git+https://github.com/user/repo.git # Example of installing from a Git repo
-- http://insert_package_link_here # For URL links
+- https://insert_package_link_here # For URL links
```

For git repos, add them under `- pip:`. For examples, please see <https://pip.pypa.io/en/stable/cli/pip_install/#examples>. See the section [Replicability versus Portability](#replicability-versus-portability) for more information.
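Put together, a minimal `environment.yml` using the `- pip:` section shown in this hunk might look like the following (the environment name and the Git URL are placeholders, not from the source):

```yaml
name: example-env # hypothetical environment name
channels:
  - conda-forge
dependencies:
  - python=3.11
  - pip
  - pip:
      - numpy==1.26.4 # Pinned version for pip
      - git+https://github.com/user/repo.git # placeholder Git repo
```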
79 changes: 35 additions & 44 deletions verification_scripts/linkchecker.py
@@ -42,10 +42,7 @@


def run_linkchecker() -> None:
-"""Run the linkchecker application.
-
-We are ok with running
-"""
+"""Run the linkchecker application."""
with Path(LINKCHECKER_LOG).open("wb", buffering=0) as f:
subprocess.run( # noqa: S603
[_get_linkchecker_path(), "--config", ".linkcheckerrc", "docs"],
@@ -69,7 +66,7 @@ def load_output() -> pd.DataFrame:
)


-def replace_lines_containing(
+def replace_rows(
_s: pd.Series,
_containing: str,
_with: str,
@@ -98,7 +95,7 @@ def drop_ok_with_no_redirects(_df: pd.DataFrame) -> pd.DataFrame:
return _df[~drop]


-def drop_rows_containing(
+def drop_rows(
_df: pd.DataFrame,
_in: str,
_containing: str,
@@ -149,66 +146,60 @@ def _get_linkchecker_path() -> PurePath:

if __name__ == "__main__":
run_linkchecker()
-linkchecker_results = load_output()
+results = load_output()

-# drop good urls
-linkchecker_results = drop_ok_with_no_redirects(linkchecker_results)
+### drop good urls
+results = drop_ok_with_no_redirects(results)

+### replace unhelpful error messages
# change 200 OK to 300 Redirect for human clarity
-linkchecker_results[RESULT] = replace_lines_containing(
-linkchecker_results[RESULT],
-"200 OK",
-"300 Redirect",
-)
+results[RESULT] = replace_rows(results[RESULT], "200 OK", "300 Redirect")
# replace long error messages with short codes
-linkchecker_results[RESULT] = replace_lines_containing(
-linkchecker_results[RESULT],
-"ConnectTimeout",
-"408 Timeout",
-)
+results[RESULT] = replace_rows(results[RESULT], "ConnectTimeout", "408 Timeout")
# special code for SSO urls
-linkchecker_results[RESULT] = replace_lines_containing(
-linkchecker_results[RESULT],
+results[RESULT] = replace_rows(
+results[RESULT],
"https://padlock.idm.uab.edu",
"423 Locked",
-find_in=linkchecker_results[URL_AFTER_REDIRECTION],
+find_in=results[URL_AFTER_REDIRECTION],
)

-# special ignore rules
-linkchecker_results = drop_rows_containing(
-linkchecker_results,
+### special url ignore rules
+# doi.org always redirects, that's its purpose, so we ignore
+results = drop_rows(
+results,
URL_IN_MARKDOWN,
"https://doi.org",
if_result_code="300",
-) # doi.org always redirects, that's its purpose, so we ignore
-linkchecker_results = drop_rows_containing(
-linkchecker_results,
+)
+# if anaconda.org goes down we'll surely hear about it
+results = drop_rows(
+results,
URL_IN_MARKDOWN,
"https://anaconda.org",
if_result_code="403",
-) # if anaconda.org goes down we'll surely hear about it
-linkchecker_results = drop_rows_containing(
-linkchecker_results,
-URL_AFTER_REDIRECTION,
-"https://padlock.idm.uab.edu",
+)
+# UAB specific requiring login
+results = drop_rows(
+results,
URL_IN_MARKDOWN,
"https://idm.uab.edu/cgi-cas/xrmi/sites",
if_result_code="423",
)

-# modify file uris
-linkchecker_results[MARKDOWN_FILE] = modify_file_uris(
-linkchecker_results[MARKDOWN_FILE],
-)
+### modify file uris to improve readability
+results[MARKDOWN_FILE] = modify_file_uris(results[MARKDOWN_FILE])

-# organize
-linkchecker_results = linkchecker_results.sort_values(
+### organize
+results = results.sort_values(
by=[RESULT, URL_IN_MARKDOWN, MARKDOWN_FILE, LINE, COLUMN],
)

-# output
-linkchecker_results.to_csv(LINKCHECKER_OUT_CSV, index=False)
+### output
+# csv
+results.to_csv(LINKCHECKER_OUT_CSV, index=False)

-records = linkchecker_results.to_dict(orient="records")
+# yml
+records = results.to_dict(orient="records")
with Path(LINKCHECKER_OUT_YAML).open("w") as f:
yaml.safe_dump(records, f, sort_keys=False)
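The body of the renamed `replace_rows` helper is not shown in this diff. A sketch of semantics consistent with its call sites (the parameter names come from the hunk above; the implementation itself is inferred, not confirmed by the source):

```python
from typing import Optional

import pandas as pd


def replace_rows(
    _s: pd.Series,
    _containing: str,
    _with: str,
    find_in: Optional[pd.Series] = None,
) -> pd.Series:
    """Replace entries of _s wherever the search series contains a substring."""
    # Search _s itself unless a parallel series (e.g. the redirected URL) is given.
    haystack = _s if find_in is None else find_in
    mask = haystack.str.contains(_containing, regex=False, na=False)
    return _s.mask(mask, _with)


results = pd.Series(["200 OK", "404 Not Found", "ConnectTimeout"])
print(replace_rows(results, "200 OK", "300 Redirect").tolist())
# ['300 Redirect', '404 Not Found', 'ConnectTimeout']
```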