+ ---
+ Lakshya Saharan began his Master’s in Computer Science and joined NJIT’s High Performance Computing (HPC) department as a student intern on September 24. He assists researchers and students with effective use of the cluster, contributes to essential technical documentation, and supports ongoing operational tasks to keep the cluster running smoothly and efficiently.
+ { style="width:150px; height:200px;" }
+
Aakash Singh
+
+ ---
+ Aakash Singh is a CKA-certified DevOps and SRE professional with extensive experience in cloud infrastructure optimization (AWS, Azure) and Kubernetes administration. He has a strong record of enhancing system reliability and achieving significant operational cost reductions. Aakash excels in designing and managing high-availability, scalable systems, including GPU-accelerated infrastructure for AI/ML workloads and MLOps. His expertise encompasses CI/CD automation, platform engineering, and performance benchmarking of GPUs (e.g., A100s). With deep Linux knowledge, Aakash currently works as an HPC Support Specialist, focusing on AI/ML platforms and contributing to the efficiency of distributed systems. He is keen to leverage his skills in cutting-edge AI and HPC environments.
{ width="15" }
+
{ width="15" }
+
+# Former staff and students
## Gedaliah Wolosh
{ align=left, width="300" }
-Dr. Wolosh has been at NJIT for over twenty years working in research computing. He has been the lead architect for all of the HPC resources at NJIT. He specializes in building scientific software stacks.
+Dr. Wolosh recently retired after 25 years at NJIT working in research computing. He was the lead architect for all the HPC resources at NJIT. He specialized in building scientific software stacks.
{ width="15" }
-
{ width="15" }
-
-Schedule an appointment with Gedaliah [{ width="20"}](https://calendar.google.com/calendar/selfsched?sstoken=UUdmZjlnUlItR09GfGRlZmF1bHR8YTQ0MmFjMWU4N2ZiODUxZjEzMTIwZGZlMWI4MjlkZjQ)
-
-
-## David Perel
-
-## Kate Cahill
-
-{ align=left, width="300" }
-
-Kate started at NJIT in September 2023. Previously, she was the Education & Training Specialist at the Ohio Supercomputer Center for 8 years, where she led the training programs for OSC as well as external education programs related to HPC and computational science for the XSEDE project as well as other grant-funded efforts.
-
-## Kevin Walsh
-
{ width="15" }
-
-
-
-## Abhishek Mukherjee
-
-{ align=left, width="300" }
-
-Abhishek Mukherjee is a computational scientist and has experience in multidisciplinary projects, providing engineering solutions for computational fluid dynamics problems. He has expertise in HPC software installation and is an active contributor to EasyBuild that allows to manage software on HPC systems in an efficient way.
-
{ width="15" }
-
{ width="15" }
-
-You can schedule appointments with Abhishek to consult on problems or questions you are encountering related to their work using the high-performance computing and big data resources at NJIT. Please, before making appointments with [Abhishek Mukherjee](#abhishek-mukherjee), send your query to [hpc@njit.edu](mailto:hpc@njit.edu), so that an incident number will be created which is required to schedule an appointment.
-
-Schedule an appointment directly on Abhishek's calendar from [{ width="20"}](https://calendly.com/abhinjit/arcs-hpc)
-
-
+
{ width="15"}
diff --git a/docs/Policies/lochness_policies.md b/docs/archived/lochness_policies.md
similarity index 82%
rename from docs/Policies/lochness_policies.md
rename to docs/archived/lochness_policies.md
index 85e859f33..65368c21e 100644
--- a/docs/Policies/lochness_policies.md
+++ b/docs/archived/lochness_policies.md
@@ -4,7 +4,7 @@ Access and usages policies for the lochness cluster
* Access
* Faculty members can request access to lochness for themselves and their students by sending an email to hpc@njit.edu. There is no charge for using lochness.
- * For courses send an email to hpc@njit.edu to discuss requirements
+ * For courses, send an email to hpc@njit.edu to discuss requirements
* Storage
* Users are given a quota of 100GB
@@ -19,4 +19,6 @@ Access and usages policies for the lochness cluster
* Condo
* Lochness has a private pool ownership model. Users buy nodes which are then incorporated into the cluster as a private partition.
-As of Sep 1, 2022 no further addition will be incorporated into Lochness
\ No newline at end of file
+!!! warning
+
+    As of Jan 16, 2024, no new Lochness accounts will be created, and Lochness will be decommissioned once the Wulver migration is complete.
\ No newline at end of file
diff --git a/docs/archived/migration/Lochness_node_owners.md b/docs/archived/migration/Lochness_node_owners.md
new file mode 100644
index 000000000..58e58af80
--- /dev/null
+++ b/docs/archived/migration/Lochness_node_owners.md
@@ -0,0 +1,33 @@
+# Lochness Node Owners Information
+
+## Overview
+
+As a node owner on the Lochness high-performance computing (HPC) cluster, you should review the following important information regarding the upcoming migration to the new cluster, Wulver.
+
+## Node Migration to Wulver
+
+### Privately Owned Nodes
+
+Most privately owned nodes on Lochness will be migrated to Wulver as part of the cluster upgrade. Owners of these nodes will be considered "condo investors" and will receive higher-priority access to an equivalent amount of resources for a period of time to be determined. This ensures that your investment in HPC resources continues to be recognized and prioritized during the transition.
+
+### Decommissioning of Out-of-Warranty Nodes
+
+Nodes that are out of warranty and deemed too old to become part of Wulver's infrastructure will be decommissioned. We will contact owners of such nodes for further discussions and potential considerations.
+
+## Node Exclusions from Wulver Migration
+
+### Nodes with Consumer Grade GPUs
+
+Nodes equipped with consumer-grade GPUs, such as Titan, RTX, etc., will not be migrated to Wulver. Instead, these nodes will be relocated to the GITC 4320 datacenter and will be individually managed by ARCS.
+
+This decision is made to ensure the optimal performance and compatibility of the Wulver cluster while providing an alternative solution for nodes with specialized hardware configurations.
+
+## Action Required
+
+1. **Privately Owned Node Owners:** Await further communication regarding the migration process and priority access details.
+2. **Owners of Out-of-Warranty Nodes:** Await further communication for discussions on potential considerations and decommissioning.
+3. **Owners of Nodes with Consumer Grade GPUs:** Expect your nodes to be relocated to the GITC 4320 datacenter, managed individually by ARCS.
+
+## Timeline
+
+The migration timeline and the duration of higher priority access for condo investors will be communicated to you in advance.
diff --git a/docs/archived/migration/faqs.md b/docs/archived/migration/faqs.md
new file mode 100644
index 000000000..a12de2ab9
--- /dev/null
+++ b/docs/archived/migration/faqs.md
@@ -0,0 +1,149 @@
+## Questions we have received over the last several weeks regarding the migration from Lochness to Wulver
+
+### What is the timeline for the migration from Lochness to Wulver?
+??? answer
+
+ We anticipate the migration process to be complete by the end of February 2024. The migration will commence on January 16, 2024, and our team is dedicated to ensuring a smooth transition for all users. We will keep you informed about any updates or changes to the timeline as the migration progresses. Your cooperation and understanding during this period are greatly appreciated. If you have any specific concerns about the timeline, please feel free to reach out to our support team for further clarification.
+
+### How will the migration impact my existing workflows and computations?
+
+??? answer
+
+ The research facilitation team is committed to assisting you during the migration process. Our team will work closely with you to ensure that your existing workflows and computations are seamlessly transferred to Wulver. We understand the importance of minimizing disruptions to your research activities, and our experts will provide guidance and support to address any compatibility issues that may arise. You can expect personalized assistance to make the transition as smooth as possible. If you have specific concerns about your workflows, please don't hesitate to reach out to our team for tailored support.
+
+
+### What specific steps should I take to prepare for the migration?
+
+??? answer
+
+ To prepare for the migration, there are a few crucial steps:
+
+ - Provide a List of Current Students and Postdocs:
+ - Please share with us an updated list of current students, postdocs, and external collaborators who are actively using the HPC resources on Lochness. This information will ensure that user accounts are accurately migrated to Wulver, and access is maintained for the relevant individuals.
+
+ - List of Required Software:
+ - Compile a list of software applications that are essential for your research. This includes both commonly used software and any specialized tools unique to your work. Knowing your software requirements enables us to ensure that the necessary applications are available and properly configured on Wulver. You can find the list of software applications installed on Wulver in [Software](../Software/index.md). If you don't find your applications in the list, submit a request for [HPC Software Installation](https://njit.service-now.com/sp?id=sc_cat_item&sys_id=0746c1f31b6691d04c82cddf034bcbe2&sysparm_category=405f99b41b5b1d507241400abc4bcb6b) by visiting the [Service Catalog](https://njit.service-now.com/sp?id=sc_category).
+
+ - Planning for Former Students:
+ - If you have former students who may still have data or files on Lochness, it's essential to plan for their data migration or archival. We recommend reaching out to former students to coordinate any necessary data transfers or backups to avoid potential data loss.
+
+ These steps will contribute to a successful migration, allowing us to tailor the process to your specific needs. If you have any questions or need assistance with these preparations, please contact our support team.
+
+
+### Will there be any downtime during the migration, and how will it affect my research activities?
+??? answer
+
+    Once the migration has started for your group, Lochness will be inaccessible. If access to Lochness is critical for specific tasks during this period, we strongly encourage you to reach out to us. We understand that some users may have time-sensitive activities or dependencies on Lochness, and we are committed to working with you to find solutions that meet your needs. Please contact our support team by sending an email to [hpc@njit.edu](mailto:hpc@njit.edu) to discuss your specific requirements, and we will do our best to accommodate your situation during the migration process. See [Contact Us](contact.md) for how to create a ticket with us and what information is required.
+
+### Is there a plan for backing up and restoring data during the migration process?
+??? answer
+
+ Yes, there is a plan in place for data continuity. The data on Lochness resides on a shared filesystem, and these same filesystems will be mounted and available on Wulver post-migration. This approach ensures that your data remains accessible and seamlessly transfers to the new cluster.
+
+ There is no need for a separate backup and restoration process as the shared filesystem continuity facilitates a smooth transition. If you have specific data-related concerns or requirements, please feel free to reach out to our support team ([Contact Us](contact.md)) for further clarification and assistance. Your data integrity and accessibility are our top priorities throughout the migration process.
+
+### What changes will occur in the file directory structure, especially regarding the `/research` and `/home` directories?
+??? answer
+
+ With the migration to Wulver, there will be changes in the file directory structure:
+
+ Filesystems on Wulver: Wulver will have three filesystems available for use: `/home`, `/project`, and `/scratch`.
+
+ - Availability of `/research` Directory: The `/research` directory from Lochness will be mounted on Wulver and will be available for use. This ensures continuity for research-related files and data.
+
+    - Lochness `/home` Directory on Wulver: The Lochness `/home` directory will be mounted on the Wulver login node as `/oldhome`. Users will be able to read and move files out of this directory but will not be able to write files into it. This allows users to access their personal home directories from Lochness during the migration period. The Lochness files under `/oldhome` will be available until 1-Jan-2025. Users should move the files they need into the `/project/PI_UCID` directory (replace PI_UCID with the UCID of the PI); a minimal copy example is shown at the end of this answer.
+
+ These changes are designed to optimize the file organization on Wulver while maintaining accessibility to critical research data. The research facilitation team will work closely with users to ensure a smooth transition of data and assist in adapting to the new file directory structure. If you have specific questions or require assistance with data migration, please don't hesitate to contact our support team.
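+
+    As a minimal sketch of copying files out of `/oldhome` (here `your_ucid`, `PI_UCID`, and `my_data` are placeholders for your own UCID, your PI's UCID, and an example directory name; adjust the paths to match your setup):
+
+    ```console
+    # Copy a directory from the read-only old Lochness home into your project space
+    rsync -av /oldhome/your_ucid/my_data/ /project/PI_UCID/your_ucid/my_data/
+    ```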
+
+
+### How will the migration impact access to specialized software or applications that I currently use on Lochness?
+??? answer
+
+ The research facilitation team is dedicated to ensuring a smooth transition for users in terms of software and applications:
+
+ - Installation Support:
+        - The research facilitation team will handle the installation of necessary software on Wulver, ensuring that essential tools and applications are available for your research. You can request [HPC Software Installation](https://njit.service-now.com/sp?id=sc_cat_item&sys_id=0746c1f31b6691d04c82cddf034bcbe2&sysparm_category=405f99b41b5b1d507241400abc4bcb6b) by visiting the [Service Catalog](https://njit.service-now.com/sp?id=sc_category). Please visit [Software](../Software/index.md) to see the list of applications installed on Wulver.
+ - Code Compilation Assistance:
+ - If your research involves custom code that needs compilation, the research facilitation team will provide assistance to ensure a successful compilation on Wulver. This support extends to helping users adapt their code to the new environment.
+
+ Our goal is to minimize any disruptions to your research activities and provide the necessary support for a seamless transition. If you have specific software requirements or need assistance with code compilation, please reach out to the research facilitation team, and they will be happy to assist you.
+
+### Can I continue using Lochness for specific tasks during the migration, or will it be completely inaccessible?
+??? answer
+
+ Once the migration has started for your group, Lochness will be inaccessible. If access to Lochness is critical for specific tasks during this period, we strongly encourage you to reach out to us. We understand that some users may have time-sensitive activities or dependencies on Lochness, and we are committed to working with you to find solutions that meet your needs. Please contact our support team to discuss your specific requirements, and we will do our best to accommodate your situation during the migration process.
+
+### What happens to my existing job submissions and queued jobs during the migration process?
+??? answer
+
+ For the most part, the migration will start after all running jobs have been completed. We understand the importance of job completion for ongoing research activities. Accommodations will be made for long-running jobs that cannot be checkpointed.
+ Our aim is to minimize disruptions to your computational tasks and ensure a smooth transition. If you have specific concerns about job submissions or if you anticipate long-running jobs during the migration period, please communicate with our support team. We are here to work collaboratively and make necessary accommodations to facilitate the completion of your jobs during the migration process.
+
+### Are there any adjustments needed for code or scripts to ensure compatibility with Wulver?
+??? answer
+
+ Yes, adjustments will be required for code or submission scripts to ensure compatibility with Wulver:
+
+ - Submit Script Changes:
+        - Submit scripts will need to be modified to accommodate changes in partitions, hardware configurations, policies, and filesystems on Wulver. The research facilitation team will provide guidance and support in updating your submit scripts for seamless job submissions. Check the sample submit scripts for Wulver in [SLURM](slurm.md); a minimal example also appears after this list.
+ - Code Recompilation:
+ - Due to differences in hardware, code may need to be recompiled to ensure optimal performance on Wulver. The research facilitation team is ready to assist you in this process, offering support to recompile code and address any related issues.
+        - If your code is compiled with the [FOSS Toolchain](compilers.md#free-open-source-software-(foss)) (GCC and OpenMPI), you can compile it the same way you did on Lochness. Just make sure all the dependency libraries are installed on Wulver. Please visit [Software](../Software/index.md) to check the list of applications installed on Wulver.
+ - If your code is based on the [Intel toolchain](compilers.md#intel), you need to add the following while configuring your code.
+ ```console
+ ./configure CFLAGS="-march=core-avx2"
+ ```
+        - When installing code, perform the installation on a compute node rather than the login node. Since the hardware architecture on the login node is different, it is best practice to compile your code on a compute node. Initiate an [interactive session on a compute node](slurm.md#interactive-session-on-a-compute-node) before compiling your code; a generic example appears at the end of this answer.
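+
+    As a rough sketch of an updated submit script for Wulver (the account, QoS, partition, module, and executable names below are placeholders; see [SLURM](slurm.md) for the authoritative sample scripts):
+
+    ```bash
+    #!/bin/bash
+    #SBATCH --job-name=my_job          # placeholder job name
+    #SBATCH --account=PI_UCID          # your PI's UCID (group account)
+    #SBATCH --qos=standard
+    #SBATCH --partition=general
+    #SBATCH --ntasks=1
+    #SBATCH --cpus-per-task=4
+    #SBATCH --mem-per-cpu=4G
+    #SBATCH --time=8:00:00
+
+    module load intel/2022b            # example module; load whatever your code needs
+    srun ./my_application              # placeholder executable
+    ```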
+
+ Assistance will be provided to help you adapt your code and scripts to the new environment on Wulver. If you have specific concerns or require support in making these adjustments, please reach out to our [research facilitation team](contact.md), and they will work with you to ensure a smooth transition.
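+
+    For the interactive compile step mentioned above, a generic SLURM command of the following form can serve as a sketch (Wulver's own interactive-session helper and its exact options are documented in [SLURM](slurm.md#interactive-session-on-a-compute-node); the account and QoS names here are placeholders):
+
+    ```console
+    srun --account=PI_UCID --qos=standard --partition=general --ntasks=1 --cpus-per-task=4 --time=1:00:00 --pty bash
+    ```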
+
+### Will there be training sessions or documentation available to help faculty and researchers transition smoothly to Wulver?
+??? answer
+
+ While there are no official training sessions scheduled at this point, comprehensive documentation is available at [NJIT HPC Documentation](https://hpc.njit.edu) to assist faculty and researchers during the transition to Wulver.
+ In addition to documentation, the research facilitation team is committed to providing personal assistance to faculty and researchers. If you have specific questions, require hands-on support, or need guidance on using Wulver effectively for your research, please do not hesitate to reach out to the [research facilitation team](contact.md). We are here to ensure that you receive the assistance you need for a successful transition.
+
+### How can I request additional resources or discuss specific requirements for my research projects on Wulver?
+??? answer
+
+ To request additional resources or discuss specific requirements for your research projects on Wulver, please reach out to us at [hpc@njit.edu](mailto:hpc@njit.edu). Our team is ready to assist you with any inquiries related to resource allocation, project needs, or any other aspects that can enhance your experience on the Wulver cluster. Your requests will be promptly addressed, and we are committed to providing the support necessary for the success of your research endeavors.
+
+### What support services will be available during and after the migration to address any issues or concerns?
+??? answer
+
+ The research facilitation team is committed to providing personalized assistance and support services during and after the migration:
+
+ - Personal Assistance:
+ - The research facilitation team is dedicated to offering personal assistance to each user. Whether you need help with data migration, code adjustments, or understanding the new environment, our team is here to provide tailored support.
+
+ - Issue Resolution:
+ - Any issues or concerns that arise during or after the migration will be promptly addressed by the research facilitation team. We aim to ensure a smooth transition for all users and are ready to tackle any challenges that may arise.
+
+ - Ongoing Support:
+ - Support services will continue to be available after the migration to address ongoing needs, answer questions, and assist with any further optimizations or adjustments required for your research projects.
+
+ Your success is our priority, and the research facilitation team is here to guide you through the migration process and beyond. If you encounter any issues or have specific concerns, please reach out to the team for personalized assistance.
+
+### Can I test my applications or simulations on Wulver before the migration to ensure compatibility?
+??? answer
+
+ Yes, absolutely! We encourage users to proactively test their applications or simulations on Wulver before the migration to ensure compatibility and identify any potential issues. This testing phase allows you to familiarize yourself with the new environment and address any concerns in advance.
+
+ If you encounter challenges or have questions during the testing process, please don't hesitate to reach out to us. Our team is here to provide guidance, answer queries, and assist you in ensuring a smooth transition for your research activities on Wulver. Your proactive testing will contribute to a successful migration experience.
+
+### How long will the `/oldhome` directory on Lochness be accessible after the migration, and what actions should I take regarding data stored there?
+??? answer
+
+ The `/oldhome` directory on Lochness will be accessible for 6 months after the migration is complete. During this period, users are advised to review and move their data to other locations on Wulver or archive it as needed. This timeframe provides a reasonable window for users to organize and transfer their data while ensuring a smooth transition.
+
+ If you have specific questions about data migration or need assistance during this post-migration period, please reach out to our support team. We are here to help you with any further steps or considerations related to your data on Lochness.
+
+### Will AFS be available on Wulver?
+??? answer
+
+    AFS will not be available on Wulver. However, we will set up a self-serve procedure for researchers to move AFS files to `/research`; this can be done as part of the full migration process. Please reach out to us at [hpc@njit.edu](mailto:hpc@njit.edu) with any questions.
+
+### I have several Conda environments on Lochness. Should I perform a fresh installation or copy the environment to Wulver?
+??? answer
+
+ We recommend a fresh installation since the hardware architecture is different on Wulver. However, you can export the existing Conda environment to a YAML file, transfer it to Wulver, and then install the environment using this YAML file. See [Conda Export Environment](conda.md#export-and-import-conda-environment) for details.
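+
+    As a minimal sketch (the environment name `myenv` is just a placeholder):
+
+    ```console
+    # On Lochness: export the environment definition to a YAML file
+    conda env export --name myenv > myenv.yml
+    # On Wulver: recreate the environment from that file
+    conda env create --file myenv.yml
+    ```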
\ No newline at end of file
diff --git a/docs/archived/migration/index.md b/docs/archived/migration/index.md
new file mode 100644
index 000000000..4bf45ef26
--- /dev/null
+++ b/docs/archived/migration/index.md
@@ -0,0 +1,25 @@
+# Important Announcement: HPC Cluster Migration from Lochness to Wulver
+We are excited to share important news about the upcoming migration of users from the old HPC cluster, [Lochness](lochness.md), to the new and improved cluster, [Wulver](wulver.md). This migration is scheduled to commence on 1/16/2024.
+
+## Migration Details:
+
+* Start Date: January 16, 2024
+* Priority: Migration will be carried out in PI groups, with PIs who own nodes in Lochness being migrated first.
+* Communication: PIs will receive prior communication from our team to discuss specific details tailored to their groups.
+* Lochness Complete Shutdown: At the conclusion of the migration, Lochness will undergo a complete shutdown to facilitate its merger with Wulver. You will be advised of any necessary preparations needed on your end for a seamless transition.
+* Restrictions on Lochness: As of the migration start date, no new user accounts or software installations will be permitted on Lochness. We appreciate your cooperation in adhering to these restrictions to ensure a smooth migration process.
+* HPC Usage in Spring Courses: For coursework during the Spring semester, HPC usage will be exclusively on Wulver. Faculty members planning to integrate Wulver into their Spring courses are encouraged to contact us for testing and any necessary support. This proactive approach will help ensure a successful experience for both faculty and students.
+
+## Action Required:
+
+PIs: Expect communication from our team before the migration date, providing essential details and discussing the migration plan tailored to your group's needs. We will need a list of your current students and software in use by your group.
+
+## Important Note Regarding Documentation:
+
+To facilitate a smooth transition, please refer to [FAQs](faqs.md) and other resources specific to the migration process.
+
+## Benefits of Migration:
+
+Wulver is equipped with enhanced capabilities and improved performance, ensuring a more efficient and streamlined high-performance computing experience for all users.
+
+If you have immediate concerns or questions, please feel free to reach out to us via [Contact Us](contact.md).
\ No newline at end of file
diff --git a/docs/archived/migration/lochness_filesystem.md b/docs/archived/migration/lochness_filesystem.md
new file mode 100644
index 000000000..89090f43b
--- /dev/null
+++ b/docs/archived/migration/lochness_filesystem.md
@@ -0,0 +1,29 @@
+# Lochness to Wulver Migration: File Directory Changes
+
+## Overview
+
+As part of the HPC cluster migration from Lochness to Wulver, there will be changes in the file directory structure to ensure a smooth transition. This documentation provides details on how the `/research` and home directories will be handled on the new Wulver cluster.
+
+### `/research` Directory
+
+The `/research` directory from Lochness will be mounted on Wulver as `/research`. This ensures that users can seamlessly access their research-related files and data in the familiar directory structure. All content, including subdirectories and files, within `/research` on Lochness will be available in the same location on Wulver.
+
+### Home Directories
+
+#### Lochness Home Directory
+
+The home directories from Lochness will be available on Wulver under the directory `/oldhome`. Users will find their personal home directories within `/oldhome`, allowing them to access their files and configurations.
+
+#### Locking of `/oldhome` Directory
+
+After the migration is completed, the `/oldhome` directory on Lochness will be locked. This means that no additional files or directories can be created in `/oldhome`. The locking of this directory is a crucial step to maintain the integrity of the migrated data and to ensure a stable environment on both clusters.
+
+Users are advised to review and transfer any necessary files from `/oldhome` on Lochness to their new home directories on Wulver during the migration period.
+
+### Important Notes
+
+- **Migration Date:** The migration process is scheduled to start on January 16, 2024.
+- **Locking Date for `/oldhome`:** After the migration, the `/oldhome` directory on Lochness will be locked to prevent further modifications.
+- **File Access:** Users will have continued access to their research files under `/research` on Wulver and their home directories under `/oldhome` on Lochness until the specified locking date.
+
+Please make the necessary adjustments to your workflows and ensure a smooth transition by reviewing and moving your files as needed. If you have any questions or concerns, feel free to reach out to our support team.
diff --git a/docs/assets/images/Aakash.jpg b/docs/assets/images/Aakash.jpg
new file mode 100644
index 000000000..8969d47ee
Binary files /dev/null and b/docs/assets/images/Aakash.jpg differ
diff --git a/docs/assets/images/HPC_data_center.png b/docs/assets/images/HPC_data_center.png
new file mode 100644
index 000000000..1721ddf83
Binary files /dev/null and b/docs/assets/images/HPC_data_center.png differ
diff --git a/docs/assets/images/Lakshya_bio.jpg b/docs/assets/images/Lakshya_bio.jpg
new file mode 100644
index 000000000..56605989e
Binary files /dev/null and b/docs/assets/images/Lakshya_bio.jpg differ
diff --git a/docs/assets/images/Lochness-schematic.png b/docs/assets/images/Lochness-schematic.png
new file mode 100644
index 000000000..8da86ef12
Binary files /dev/null and b/docs/assets/images/Lochness-schematic.png differ
diff --git a/docs/assets/images/MIG/gpu-alloc-vs-util-3mo.png b/docs/assets/images/MIG/gpu-alloc-vs-util-3mo.png
new file mode 100644
index 000000000..64dc11e32
Binary files /dev/null and b/docs/assets/images/MIG/gpu-alloc-vs-util-3mo.png differ
diff --git a/docs/assets/images/MIG/gpu-mem-util-3mo.png b/docs/assets/images/MIG/gpu-mem-util-3mo.png
new file mode 100644
index 000000000..f8dc8c386
Binary files /dev/null and b/docs/assets/images/MIG/gpu-mem-util-3mo.png differ
diff --git a/docs/assets/images/Wulver-schematic.png b/docs/assets/images/Wulver-schematic.png
new file mode 100644
index 000000000..72bf62755
Binary files /dev/null and b/docs/assets/images/Wulver-schematic.png differ
diff --git a/docs/assets/images/course-config-schematic.png b/docs/assets/images/course-config-schematic.png
new file mode 100644
index 000000000..c1a16f852
Binary files /dev/null and b/docs/assets/images/course-config-schematic.png differ
diff --git a/docs/assets/images/julia.jpg b/docs/assets/images/julia.jpg
new file mode 100644
index 000000000..f8e61d168
Binary files /dev/null and b/docs/assets/images/julia.jpg differ
diff --git a/docs/assets/images/mukherjee.jpeg b/docs/assets/images/mukherjee.jpeg
new file mode 100644
index 000000000..5b435f516
Binary files /dev/null and b/docs/assets/images/mukherjee.jpeg differ
diff --git a/docs/assets/images/mukherjee.jpg b/docs/assets/images/mukherjee.jpg
deleted file mode 100644
index 36da38457..000000000
Binary files a/docs/assets/images/mukherjee.jpg and /dev/null differ
diff --git a/docs/assets/images/zoom.png b/docs/assets/images/zoom.png
new file mode 100644
index 000000000..1223afd47
Binary files /dev/null and b/docs/assets/images/zoom.png differ
diff --git a/docs/assets/ondemand/cluster_shell/wulver-shell-access-1.png b/docs/assets/ondemand/cluster_shell/wulver-shell-access-1.png
new file mode 100644
index 000000000..def0af824
Binary files /dev/null and b/docs/assets/ondemand/cluster_shell/wulver-shell-access-1.png differ
diff --git a/docs/assets/ondemand/cluster_shell/wulver-shell-access-2.png b/docs/assets/ondemand/cluster_shell/wulver-shell-access-2.png
new file mode 100644
index 000000000..ed229cd2a
Binary files /dev/null and b/docs/assets/ondemand/cluster_shell/wulver-shell-access-2.png differ
diff --git a/docs/assets/ondemand/dashboard.png b/docs/assets/ondemand/dashboard.png
new file mode 100644
index 000000000..30916787e
Binary files /dev/null and b/docs/assets/ondemand/dashboard.png differ
diff --git a/docs/assets/ondemand/files/files-dropdown.png b/docs/assets/ondemand/files/files-dropdown.png
new file mode 100644
index 000000000..e10d94b2f
Binary files /dev/null and b/docs/assets/ondemand/files/files-dropdown.png differ
diff --git a/docs/assets/ondemand/files/files.png b/docs/assets/ondemand/files/files.png
new file mode 100644
index 000000000..fd9ca9c13
Binary files /dev/null and b/docs/assets/ondemand/files/files.png differ
diff --git a/docs/assets/ondemand/interactive/interactive-app-dropdown.png b/docs/assets/ondemand/interactive/interactive-app-dropdown.png
new file mode 100644
index 000000000..dc9650227
Binary files /dev/null and b/docs/assets/ondemand/interactive/interactive-app-dropdown.png differ
diff --git a/docs/assets/ondemand/interactive/interactive-sessions-1.png b/docs/assets/ondemand/interactive/interactive-sessions-1.png
new file mode 100644
index 000000000..3f94626ff
Binary files /dev/null and b/docs/assets/ondemand/interactive/interactive-sessions-1.png differ
diff --git a/docs/assets/ondemand/interactive/interactive-sessions-2.png b/docs/assets/ondemand/interactive/interactive-sessions-2.png
new file mode 100644
index 000000000..ab6ec32e6
Binary files /dev/null and b/docs/assets/ondemand/interactive/interactive-sessions-2.png differ
diff --git a/docs/assets/ondemand/jobs/active_jobs1.png b/docs/assets/ondemand/jobs/active_jobs1.png
new file mode 100644
index 000000000..75aa93fbc
Binary files /dev/null and b/docs/assets/ondemand/jobs/active_jobs1.png differ
diff --git a/docs/assets/ondemand/jobs/active_jobs2.png b/docs/assets/ondemand/jobs/active_jobs2.png
new file mode 100644
index 000000000..293f7ebec
Binary files /dev/null and b/docs/assets/ondemand/jobs/active_jobs2.png differ
diff --git a/docs/assets/ondemand/jobs/job_submit1.png b/docs/assets/ondemand/jobs/job_submit1.png
new file mode 100644
index 000000000..58954ac90
Binary files /dev/null and b/docs/assets/ondemand/jobs/job_submit1.png differ
diff --git a/docs/assets/ondemand/jobs/job_submit2.png b/docs/assets/ondemand/jobs/job_submit2.png
new file mode 100644
index 000000000..b10715c6c
Binary files /dev/null and b/docs/assets/ondemand/jobs/job_submit2.png differ
diff --git a/docs/assets/ondemand/jobs/job_submit3.png b/docs/assets/ondemand/jobs/job_submit3.png
new file mode 100644
index 000000000..d6df95cf7
Binary files /dev/null and b/docs/assets/ondemand/jobs/job_submit3.png differ
diff --git a/docs/assets/ondemand/jobs/jobcomposer-1.png b/docs/assets/ondemand/jobs/jobcomposer-1.png
new file mode 100644
index 000000000..a999ec544
Binary files /dev/null and b/docs/assets/ondemand/jobs/jobcomposer-1.png differ
diff --git a/docs/assets/ondemand/jobs/jobcomposer-2.png b/docs/assets/ondemand/jobs/jobcomposer-2.png
new file mode 100644
index 000000000..86601ad24
Binary files /dev/null and b/docs/assets/ondemand/jobs/jobcomposer-2.png differ
diff --git a/docs/assets/ondemand/jobs/jobcomposer-3.png b/docs/assets/ondemand/jobs/jobcomposer-3.png
new file mode 100644
index 000000000..e234eadd2
Binary files /dev/null and b/docs/assets/ondemand/jobs/jobcomposer-3.png differ
diff --git a/docs/assets/ondemand/jobs/jobcomposer-4.png b/docs/assets/ondemand/jobs/jobcomposer-4.png
new file mode 100644
index 000000000..9477f78ed
Binary files /dev/null and b/docs/assets/ondemand/jobs/jobcomposer-4.png differ
diff --git a/docs/assets/ondemand/jobs/jobcomposer-5.png b/docs/assets/ondemand/jobs/jobcomposer-5.png
new file mode 100644
index 000000000..44487c11e
Binary files /dev/null and b/docs/assets/ondemand/jobs/jobcomposer-5.png differ
diff --git a/docs/assets/ondemand/jobs/jobcomposer-6.png b/docs/assets/ondemand/jobs/jobcomposer-6.png
new file mode 100644
index 000000000..799897fe8
Binary files /dev/null and b/docs/assets/ondemand/jobs/jobcomposer-6.png differ
diff --git a/docs/assets/ondemand/jobs/jobcomposer-7.png b/docs/assets/ondemand/jobs/jobcomposer-7.png
new file mode 100644
index 000000000..4f2ca6179
Binary files /dev/null and b/docs/assets/ondemand/jobs/jobcomposer-7.png differ
diff --git a/docs/assets/ondemand/jobs/jobs-dashboard.png b/docs/assets/ondemand/jobs/jobs-dashboard.png
new file mode 100644
index 000000000..0347ac4bf
Binary files /dev/null and b/docs/assets/ondemand/jobs/jobs-dashboard.png differ
diff --git a/docs/assets/ondemand/jobs/jobs-dropdown.png b/docs/assets/ondemand/jobs/jobs-dropdown.png
new file mode 100644
index 000000000..d74ae494a
Binary files /dev/null and b/docs/assets/ondemand/jobs/jobs-dropdown.png differ
diff --git a/docs/assets/ondemand/jupyter/jupyter1.png b/docs/assets/ondemand/jupyter/jupyter1.png
new file mode 100644
index 000000000..5dacdf877
Binary files /dev/null and b/docs/assets/ondemand/jupyter/jupyter1.png differ
diff --git a/docs/assets/ondemand/jupyter/jupyter2.png b/docs/assets/ondemand/jupyter/jupyter2.png
new file mode 100644
index 000000000..bbb6f066a
Binary files /dev/null and b/docs/assets/ondemand/jupyter/jupyter2.png differ
diff --git a/docs/assets/ondemand/jupyter/jupyter3.png b/docs/assets/ondemand/jupyter/jupyter3.png
new file mode 100644
index 000000000..0413d26e8
Binary files /dev/null and b/docs/assets/ondemand/jupyter/jupyter3.png differ
diff --git a/docs/assets/ondemand/jupyter/jupyter4.png b/docs/assets/ondemand/jupyter/jupyter4.png
new file mode 100644
index 000000000..66f75dc8f
Binary files /dev/null and b/docs/assets/ondemand/jupyter/jupyter4.png differ
diff --git a/docs/assets/ondemand/jupyter/jupyter5.png b/docs/assets/ondemand/jupyter/jupyter5.png
new file mode 100644
index 000000000..51adeec60
Binary files /dev/null and b/docs/assets/ondemand/jupyter/jupyter5.png differ
diff --git a/docs/assets/ondemand/login.png b/docs/assets/ondemand/login.png
new file mode 100644
index 000000000..55e0cf087
Binary files /dev/null and b/docs/assets/ondemand/login.png differ
diff --git a/docs/assets/ondemand/matlab/jupyter-matlab-proxy-1.png b/docs/assets/ondemand/matlab/jupyter-matlab-proxy-1.png
new file mode 100644
index 000000000..1483e13ee
Binary files /dev/null and b/docs/assets/ondemand/matlab/jupyter-matlab-proxy-1.png differ
diff --git a/docs/assets/ondemand/matlab/jupyter-matlab-proxy-2.png b/docs/assets/ondemand/matlab/jupyter-matlab-proxy-2.png
new file mode 100644
index 000000000..a81fe819a
Binary files /dev/null and b/docs/assets/ondemand/matlab/jupyter-matlab-proxy-2.png differ
diff --git a/docs/assets/ondemand/matlab/matlab-dropdown.png b/docs/assets/ondemand/matlab/matlab-dropdown.png
new file mode 100644
index 000000000..68f619af7
Binary files /dev/null and b/docs/assets/ondemand/matlab/matlab-dropdown.png differ
diff --git a/docs/assets/ondemand/matlab/matlab-license-1.png b/docs/assets/ondemand/matlab/matlab-license-1.png
new file mode 100644
index 000000000..4a64b9916
Binary files /dev/null and b/docs/assets/ondemand/matlab/matlab-license-1.png differ
diff --git a/docs/assets/ondemand/matlab/matlab-license-2.png b/docs/assets/ondemand/matlab/matlab-license-2.png
new file mode 100644
index 000000000..5e40a33be
Binary files /dev/null and b/docs/assets/ondemand/matlab/matlab-license-2.png differ
diff --git a/docs/assets/ondemand/matlab/matlab-license-3.png b/docs/assets/ondemand/matlab/matlab-license-3.png
new file mode 100644
index 000000000..c5232e944
Binary files /dev/null and b/docs/assets/ondemand/matlab/matlab-license-3.png differ
diff --git a/docs/assets/ondemand/matlab/matlab-server-1.png b/docs/assets/ondemand/matlab/matlab-server-1.png
new file mode 100644
index 000000000..b1160d126
Binary files /dev/null and b/docs/assets/ondemand/matlab/matlab-server-1.png differ
diff --git a/docs/assets/ondemand/matlab/matlab-server-2.png b/docs/assets/ondemand/matlab/matlab-server-2.png
new file mode 100644
index 000000000..3e93b0f0d
Binary files /dev/null and b/docs/assets/ondemand/matlab/matlab-server-2.png differ
diff --git a/docs/assets/ondemand/matlab/matlab-start-working.png b/docs/assets/ondemand/matlab/matlab-start-working.png
new file mode 100644
index 000000000..570337e6a
Binary files /dev/null and b/docs/assets/ondemand/matlab/matlab-start-working.png differ
diff --git a/docs/assets/ondemand/matlab/matlab-vnc-1.png b/docs/assets/ondemand/matlab/matlab-vnc-1.png
new file mode 100644
index 000000000..f3c65abb9
Binary files /dev/null and b/docs/assets/ondemand/matlab/matlab-vnc-1.png differ
diff --git a/docs/assets/ondemand/matlab/matlab-vnc-2.png b/docs/assets/ondemand/matlab/matlab-vnc-2.png
new file mode 100644
index 000000000..468f5cb5a
Binary files /dev/null and b/docs/assets/ondemand/matlab/matlab-vnc-2.png differ
diff --git a/docs/assets/ondemand/matlab/matlab-vnc-3.png b/docs/assets/ondemand/matlab/matlab-vnc-3.png
new file mode 100644
index 000000000..4e6bc0aba
Binary files /dev/null and b/docs/assets/ondemand/matlab/matlab-vnc-3.png differ
diff --git a/docs/assets/ondemand/matlab/matlab-vnc-4.png b/docs/assets/ondemand/matlab/matlab-vnc-4.png
new file mode 100644
index 000000000..33545622c
Binary files /dev/null and b/docs/assets/ondemand/matlab/matlab-vnc-4.png differ
diff --git a/docs/assets/ondemand/matlab/matlab-vnc-5.png b/docs/assets/ondemand/matlab/matlab-vnc-5.png
new file mode 100644
index 000000000..c534e9a14
Binary files /dev/null and b/docs/assets/ondemand/matlab/matlab-vnc-5.png differ
diff --git a/docs/assets/ondemand/matlab/matlab-vnc-6.png b/docs/assets/ondemand/matlab/matlab-vnc-6.png
new file mode 100644
index 000000000..13c35beca
Binary files /dev/null and b/docs/assets/ondemand/matlab/matlab-vnc-6.png differ
diff --git a/docs/assets/ondemand/matlab/matlab-vnc-7.png b/docs/assets/ondemand/matlab/matlab-vnc-7.png
new file mode 100644
index 000000000..e28f7cfb1
Binary files /dev/null and b/docs/assets/ondemand/matlab/matlab-vnc-7.png differ
diff --git a/docs/assets/ondemand/matlab/matlab-vnc-8.png b/docs/assets/ondemand/matlab/matlab-vnc-8.png
new file mode 100644
index 000000000..994c9cffe
Binary files /dev/null and b/docs/assets/ondemand/matlab/matlab-vnc-8.png differ
diff --git a/docs/assets/ondemand/matlab/matlab-vnc-9.png b/docs/assets/ondemand/matlab/matlab-vnc-9.png
new file mode 100644
index 000000000..e42ec4e36
Binary files /dev/null and b/docs/assets/ondemand/matlab/matlab-vnc-9.png differ
diff --git a/docs/assets/ondemand/rstudio/Rstudio1.png b/docs/assets/ondemand/rstudio/Rstudio1.png
new file mode 100644
index 000000000..4c09b517b
Binary files /dev/null and b/docs/assets/ondemand/rstudio/Rstudio1.png differ
diff --git a/docs/assets/ondemand/rstudio/Rstudio2.png b/docs/assets/ondemand/rstudio/Rstudio2.png
new file mode 100644
index 000000000..163c08a99
Binary files /dev/null and b/docs/assets/ondemand/rstudio/Rstudio2.png differ
diff --git a/docs/assets/ondemand/rstudio/Rstudio3.png b/docs/assets/ondemand/rstudio/Rstudio3.png
new file mode 100644
index 000000000..3d19b313c
Binary files /dev/null and b/docs/assets/ondemand/rstudio/Rstudio3.png differ
diff --git a/docs/assets/ondemand/rstudio/Rstudio4.png b/docs/assets/ondemand/rstudio/Rstudio4.png
new file mode 100644
index 000000000..8f8518060
Binary files /dev/null and b/docs/assets/ondemand/rstudio/Rstudio4.png differ
diff --git a/docs/assets/ondemand/rstudio/Rstudio5.png b/docs/assets/ondemand/rstudio/Rstudio5.png
new file mode 100644
index 000000000..1c3fb6d9f
Binary files /dev/null and b/docs/assets/ondemand/rstudio/Rstudio5.png differ
diff --git a/docs/assets/ondemand/tools/checkload-output.png b/docs/assets/ondemand/tools/checkload-output.png
new file mode 100644
index 000000000..9114b3812
Binary files /dev/null and b/docs/assets/ondemand/tools/checkload-output.png differ
diff --git a/docs/assets/ondemand/tools/homespace-ouput.png b/docs/assets/ondemand/tools/homespace-ouput.png
new file mode 100644
index 000000000..8cecf898f
Binary files /dev/null and b/docs/assets/ondemand/tools/homespace-ouput.png differ
diff --git a/docs/assets/ondemand/tools/joblist-date-input.png b/docs/assets/ondemand/tools/joblist-date-input.png
new file mode 100644
index 000000000..5d22032c2
Binary files /dev/null and b/docs/assets/ondemand/tools/joblist-date-input.png differ
diff --git a/docs/assets/ondemand/tools/joblist-output.png b/docs/assets/ondemand/tools/joblist-output.png
new file mode 100644
index 000000000..e14f8ac76
Binary files /dev/null and b/docs/assets/ondemand/tools/joblist-output.png differ
diff --git a/docs/assets/ondemand/tools/ps-output.png b/docs/assets/ondemand/tools/ps-output.png
new file mode 100644
index 000000000..0cef17be5
Binary files /dev/null and b/docs/assets/ondemand/tools/ps-output.png differ
diff --git a/docs/assets/ondemand/tools/qoslist-output.png b/docs/assets/ondemand/tools/qoslist-output.png
new file mode 100644
index 000000000..ae8d5ed45
Binary files /dev/null and b/docs/assets/ondemand/tools/qoslist-output.png differ
diff --git a/docs/assets/ondemand/tools/quota-info-output.png b/docs/assets/ondemand/tools/quota-info-output.png
new file mode 100644
index 000000000..5185e6c58
Binary files /dev/null and b/docs/assets/ondemand/tools/quota-info-output.png differ
diff --git a/docs/assets/ondemand/tools/tools-dropdown.png b/docs/assets/ondemand/tools/tools-dropdown.png
new file mode 100644
index 000000000..5e720f1f4
Binary files /dev/null and b/docs/assets/ondemand/tools/tools-dropdown.png differ
diff --git a/docs/assets/parallel_slurm.mlpkginstall b/docs/assets/parallel_slurm.mlpkginstall
new file mode 100644
index 000000000..1f798a435
--- /dev/null
+++ b/docs/assets/parallel_slurm.mlpkginstall
@@ -0,0 +1,283 @@
+
+
+
+
+
+
+ MathWorks
+ parallel_slurm
+ Parallel Computing Toolbox
+ Parallel Computing Toolbox integration for MATLAB Distributed Computing Server with Slurm
+ PCT_SLURM
+
+ 4213D3AD53F30F3A25949B00CD1EA644
+
+
+
+
+
diff --git a/docs/assets/slides/Conda_training_Feb26.pdf b/docs/assets/slides/Conda_training_Feb26.pdf
new file mode 100644
index 000000000..8330f2e83
Binary files /dev/null and b/docs/assets/slides/Conda_training_Feb26.pdf differ
diff --git a/docs/assets/slides/HPC_Advanced_SLURM_11-20-2024.pdf b/docs/assets/slides/HPC_Advanced_SLURM_11-20-2024.pdf
new file mode 100644
index 000000000..a548bf8df
Binary files /dev/null and b/docs/assets/slides/HPC_Advanced_SLURM_11-20-2024.pdf differ
diff --git a/docs/assets/slides/Intro_to_Linux.pdf b/docs/assets/slides/Intro_to_Linux.pdf
new file mode 100644
index 000000000..6420d84b3
Binary files /dev/null and b/docs/assets/slides/Intro_to_Linux.pdf differ
diff --git a/docs/assets/slides/Intro_to_Wulver_III_10_08_2025.pdf b/docs/assets/slides/Intro_to_Wulver_III_10_08_2025.pdf
new file mode 100644
index 000000000..08940ce6f
Binary files /dev/null and b/docs/assets/slides/Intro_to_Wulver_III_10_08_2025.pdf differ
diff --git a/docs/assets/slides/Intro_to_Wulver_II_01_29_2025.pdf b/docs/assets/slides/Intro_to_Wulver_II_01_29_2025.pdf
new file mode 100644
index 000000000..b909a4217
Binary files /dev/null and b/docs/assets/slides/Intro_to_Wulver_II_01_29_2025.pdf differ
diff --git a/docs/assets/slides/Intro_to_Wulver_II_10_01_2025.pdf b/docs/assets/slides/Intro_to_Wulver_II_10_01_2025.pdf
new file mode 100644
index 000000000..d3d5830e7
Binary files /dev/null and b/docs/assets/slides/Intro_to_Wulver_II_10_01_2025.pdf differ
diff --git a/docs/assets/slides/Intro_to_Wulver_I_01_22_2025.pdf b/docs/assets/slides/Intro_to_Wulver_I_01_22_2025.pdf
new file mode 100644
index 000000000..428d4113e
Binary files /dev/null and b/docs/assets/slides/Intro_to_Wulver_I_01_22_2025.pdf differ
diff --git a/docs/assets/slides/Intro_to_Wulver_I_09_17_2025.pdf b/docs/assets/slides/Intro_to_Wulver_I_09_17_2025.pdf
new file mode 100644
index 000000000..07166bfb9
Binary files /dev/null and b/docs/assets/slides/Intro_to_Wulver_I_09_17_2025.pdf differ
diff --git a/docs/assets/slides/NJIT_HPC_Seminar-Part-I.pdf b/docs/assets/slides/NJIT_HPC_Seminar-Part-I.pdf
new file mode 100644
index 000000000..0c4329f2b
Binary files /dev/null and b/docs/assets/slides/NJIT_HPC_Seminar-Part-I.pdf differ
diff --git a/docs/assets/slides/NJIT_HPC_Seminar-Part-II.pdf b/docs/assets/slides/NJIT_HPC_Seminar-Part-II.pdf
new file mode 100644
index 000000000..b30569018
Binary files /dev/null and b/docs/assets/slides/NJIT_HPC_Seminar-Part-II.pdf differ
diff --git a/docs/assets/slides/NJIT_HPC_Seminar-SLURM.pdf b/docs/assets/slides/NJIT_HPC_Seminar-SLURM.pdf
new file mode 100644
index 000000000..330973891
Binary files /dev/null and b/docs/assets/slides/NJIT_HPC_Seminar-SLURM.pdf differ
diff --git a/docs/assets/slides/Open_OnDemand_on_Wulver.pdf b/docs/assets/slides/Open_OnDemand_on_Wulver.pdf
new file mode 100644
index 000000000..fc5870209
Binary files /dev/null and b/docs/assets/slides/Open_OnDemand_on_Wulver.pdf differ
diff --git a/docs/assets/slides/Wulver_ClusterTools-Monitoring_10-22-2025.pdf b/docs/assets/slides/Wulver_ClusterTools-Monitoring_10-22-2025.pdf
new file mode 100644
index 000000000..666f02c35
Binary files /dev/null and b/docs/assets/slides/Wulver_ClusterTools-Monitoring_10-22-2025.pdf differ
diff --git a/docs/assets/slides/Wulver_MIG_Dec03_2025.pdf b/docs/assets/slides/Wulver_MIG_Dec03_2025.pdf
new file mode 100644
index 000000000..dd5b72a5e
Binary files /dev/null and b/docs/assets/slides/Wulver_MIG_Dec03_2025.pdf differ
diff --git a/docs/assets/slides/Wulver_MIG_SU_2025.pdf b/docs/assets/slides/Wulver_MIG_SU_2025.pdf
new file mode 100644
index 000000000..48deb3212
Binary files /dev/null and b/docs/assets/slides/Wulver_MIG_SU_2025.pdf differ
diff --git a/docs/assets/slides/conda_training_11-05-2025.pdf b/docs/assets/slides/conda_training_11-05-2025.pdf
new file mode 100644
index 000000000..43cf73394
Binary files /dev/null and b/docs/assets/slides/conda_training_11-05-2025.pdf differ
diff --git a/docs/assets/slides/container_HPC_10-16-2024.pdf b/docs/assets/slides/container_HPC_10-16-2024.pdf
new file mode 100644
index 000000000..57e4552fc
Binary files /dev/null and b/docs/assets/slides/container_HPC_10-16-2024.pdf differ
diff --git a/docs/assets/slides/intro-to-Python-and-Conda.pdf b/docs/assets/slides/intro-to-Python-and-Conda.pdf
new file mode 100644
index 000000000..6ced51e06
Binary files /dev/null and b/docs/assets/slides/intro-to-Python-and-Conda.pdf differ
diff --git a/docs/assets/tables/MIG/gromacs_performance.csv b/docs/assets/tables/MIG/gromacs_performance.csv
new file mode 100644
index 000000000..cbc8f8425
--- /dev/null
+++ b/docs/assets/tables/MIG/gromacs_performance.csv
@@ -0,0 +1,9 @@
+GPU Profile,Nodes,Cores,Threads,Performance (ns/day),Walltime (s),SU Consumption,Cost/Performance
+A100_10gb MIG,1,1,4,43.28,499.131,6,3.327171904
+A100_20gb MIG,1,1,4,81.205,265.995,8,2.364386429
+A100_40gb MIG,1,1,4,115.507,187.003,12,2.493355381
+A100_80gb full GPU,1,1,4,118.59,182.142,20,4.047558816
+
+
+
+
diff --git a/docs/assets/tables/MIG/llm-finetuning.csv b/docs/assets/tables/MIG/llm-finetuning.csv
new file mode 100644
index 000000000..a1e9a0800
--- /dev/null
+++ b/docs/assets/tables/MIG/llm-finetuning.csv
@@ -0,0 +1,8 @@
+GPU Profile,Walltime (h),SU Total,Tokens Processed,Tokens/s,Peak Allocated (GB),Peak Reserved (GB),SU per 1M Tokens
+A100_10gb MIG,1.092,3.28,166327,42.3,5.68,8.97,19.05
+A100_20gb MIG,0.556,2.78,166327,83.0,5.68,18.30,16.39
+A100_40gb MIG,0.353,3.18,166327,130.9,5.68,23.55,18.88
+A100_80gb full GPU,0.267,4.53,166327,173.2,5.68,23.55,27.04
+
+
+
diff --git a/docs/assets/tables/MIG/matrix_multiplication.csv b/docs/assets/tables/MIG/matrix_multiplication.csv
new file mode 100644
index 000000000..301038a70
--- /dev/null
+++ b/docs/assets/tables/MIG/matrix_multiplication.csv
@@ -0,0 +1,5 @@
+GPU Profile,SMs,Memory (GB),Peak FP16 TFLOPs,Peak FP32 (TF32) TFLOPs,Peak Matrix Size (n),Peak GPU Mem Used (GB),SU Usage Factor
+A100_10gb MIG,14,9.5,38.694,18.985,"12288 (FP16), 22528 (FP32)",7.57,2
+A100_20gb MIG,28,19.5,79.304,37.887,"20480 (FP16), 32256 (FP32)",15.52,4
+A100_40gb MIG,42,39.5,118.924,55.576,"49152 (FP16), 32768 (FP32)",18.01,8
+A100_80gb full GPU,108,79.3,286.185,135.676,"16384 (FP16), 16384 (FP32)",18.01,16
diff --git a/docs/assets/tables/MIG/mig-profile-comparison.csv b/docs/assets/tables/MIG/mig-profile-comparison.csv
new file mode 100644
index 000000000..2188931cf
--- /dev/null
+++ b/docs/assets/tables/MIG/mig-profile-comparison.csv
@@ -0,0 +1,24 @@
+Specification,10gb,20gb,40gb,Full 80GB,Notes
+Device Name,A100 MIG 10gb,A100 MIG 20gb,A100 MIG 40gb,A100-SXM4-80GB,GPU Profile
+SU Usage factor,2,4,8,16,Service units
+Global Memory,10.2 GB,20.9 GB,42.4 GB,85.2 GB,Raw hardware memory
+Usable Memory,~9.5 GB,~20 GB,~40 GB,~80 GB,Available for applications
+Multiprocessors (SMs),14,28,42,108,Parallel compute units
+Relative Compute Power,1x,2x,3x,7.7x,Performance scaling
+Total Parallel Threads,28672,57344,86016,221184,SMs × threads/SMP
+Memory Bus Width,640 bits,1280 bits,2560 bits,5120 bits,Memory bandwidth
+Memory Bandwidth limits,~1.3 TB/s,~2.6 TB/s,~5.1 TB/s,~10.2 TB/s,"Theoretical peak Bandwidth (B/s)=Memory Clock (Hz)×Bus Width (bits)÷8"
+L2 Cache Size,5 MB,10 MB,20 MB,41 MB,Fast memory cache
+Async Engines,1,2,3,5,Concurrent operations
+Max Threads per Block,1024,1024,1024,1024,CUDA block limit
+Max Threads per SMP,2048,2048,2048,2048,Per multiprocessor
+Memory Clock,1593 MHz,1593 MHz,1593 MHz,1593 MHz,Memory frequency
+Clock Rate,1410 MHz,1410 MHz,1410 MHz,1410 MHz,GPU core frequency
+Shared Memory/Block,49 KB,49 KB,49 KB,49 KB,Per CUDA block
+Registers per Block,65536,65536,65536,65536,Per CUDA block
+Constant Memory,64 KB,64 KB,64 KB,64 KB,Read-only memory
+Warp Size,32,32,32,32,SIMD execution width
+ECC Support,Yes,Yes,Yes,Yes,Error correction
+Unified Memory,Yes,Yes,Yes,Yes,CPU-GPU memory sharing
+Concurrent Kernels,Yes,Yes,Yes,Yes,Multiple kernel execution
+Max Grid Dimensions,2³¹-1 × 65535 × 65535,2³¹-1 × 65535 × 65535,2³¹-1 × 65535 × 65535,2³¹-1 × 65535 × 65535,CUDA grid limits
\ No newline at end of file
diff --git a/docs/assets/tables/SU.csv b/docs/assets/tables/SU.csv
new file mode 100644
index 000000000..d08abe05a
--- /dev/null
+++ b/docs/assets/tables/SU.csv
@@ -0,0 +1,6 @@
+Partition,Service Unit (SU) Charge
+`--partition=general`,"MAX(CPUs, RAM/4G) SU"
+`--partition=gpu`,"16 * (GPU Memory requested)/80G + MAX(CPUs, RAM/4G) SU"
+`--partition=bigmem`,"MAX(1.5 *CPUs, 1.5 *RAM/16G) SU"
+`--partition=debug`,"No charges, must be used with `--qos=debug`"
+`--partition=debug_gpu`,"No charges, must be used with `--qos=debug`"
\ No newline at end of file
diff --git a/docs/assets/tables/Wulver_filesystems.csv b/docs/assets/tables/Wulver_filesystems.csv
new file mode 100644
index 000000000..56b2d5e3b
--- /dev/null
+++ b/docs/assets/tables/Wulver_filesystems.csv
@@ -0,0 +1,6 @@
+Filesystem,Purpose,Characteristics,Backup policy,Total Size,Default Quota,Deletion Policy,Cost per TB
+`/home`,"Non-research user files, such as profile, history, etc., files not intended for sharing. Not for actual research files.",Kalray Pixstor / GPFS (expensive),Daily,~1 PB,50GB per user (locked),One year after owner leaves NJIT,"Not possible to increase size, use `/project` or `/research` instead"
+`/project`,Active research by groups. Deployed as `/project/$PI_UCID/$LOGIN/`,Kalray Pixstor / GPFS (expensive),Daily,~1 PB,2TB per group,TBD,"$200 per TB for a duration of five years, minimum storage allocation of 5TB is required"
+`/scratch`,"Temporary space for intermediate results, downloads, checkpoints, and such. MOVE YOUR RESULTS & IMPORTANT FILES TO `/project` or `/research`",Nvme (very expensive),NEVER,~ 150 TB,"10TB per group",Files deleted after 30 days or sooner if 80% full,"No charge, but files are automatically deleted"
+`/local`,"Very high speed temporary storage, can be accessed while running jobs on the node via `$TEMP_DIR` ",Node-local SSD or NVME (very expensive),NEVER,800 GB per node shared by all users,"NONE",Files deleted after job completion,"No charge, but files are automatically deleted"
+`/research`,Long-term archive. Users purchase as much space as they need. Existing purchases/quotas will be carried over from Lochness.,NFS (inexpensive),Daily,8 TB,Whatever the PI purchases.,TBD,"$100 per TB per five years"
\ No newline at end of file
diff --git a/docs/assets/tables/condo.csv b/docs/assets/tables/condo.csv
new file mode 100644
index 000000000..dafb436bc
--- /dev/null
+++ b/docs/assets/tables/condo.csv
@@ -0,0 +1,8 @@
+Resource,Mem (GB)/Core,Cost
+1 CPU,4,$150
+1 CPU,8,$250
+1 CPU,16,$300
+A100 GPU (80GB) + 1 CPU,4,"$10,000 "
+40GB MIG + 1 CPU,4,"$5,000 "
+20GB MIG + 1 CPU,4,"$2,500 "
+10GB MIG + 1 CPU,4,"$1,250 "
\ No newline at end of file
diff --git a/docs/assets/tables/facilities.csv b/docs/assets/tables/facilities.csv
index 2462e3dc5..ab0d5cad6 100644
--- a/docs/assets/tables/facilities.csv
+++ b/docs/assets/tables/facilities.csv
@@ -1,9 +1,9 @@
-Category,Lochness,Wulver
-Total Nodes,224,127
-Total Cores,5600,16256
-Total RAM / GB,65291,68096
-Total GPUs,64,100
-Total GPU RAM / GB,888,8000
-Access,General,General
-InfiniBand,"mix of HDR100, EDR, and FDR",HDR100 and 10Gig
-Theoretical Peak Performance (TFLOPs),514.79,1584.19
\ No newline at end of file
+Category,Wulver
+Total Nodes,127
+Total Cores,6256
+Total RAM / GB,68096
+Total GPUs,100
+Total GPU RAM / GB,8000
+Access,General
+Network (Infiniband; Ethernet),HDR100 and 10Gig
+Theoretical Peak Performance (TFLOPs),1584.19
\ No newline at end of file
diff --git a/docs/assets/tables/facilities_lochness.csv b/docs/assets/tables/facilities_lochness.csv
new file mode 100644
index 000000000..809827f8e
--- /dev/null
+++ b/docs/assets/tables/facilities_lochness.csv
@@ -0,0 +1,9 @@
+Category,Lochness,Wulver
+Total Nodes,224,127
+Total Cores,5600,16256
+Total RAM / GB,65291,68096
+Total GPUs,64,100
+Total GPU RAM / GB,888,8000
+Access,General,General
+Network (InfiniBand; Ethernet),"mix of HDR100, EDR, and FDR",HDR100 and 10Gig
+Theoretical Peak Performance (TFLOPs),514.79,1584.19
\ No newline at end of file
diff --git a/docs/assets/tables/interactive.csv b/docs/assets/tables/interactive.csv
new file mode 100644
index 000000000..616e2a927
--- /dev/null
+++ b/docs/assets/tables/interactive.csv
@@ -0,0 +1,8 @@
+Flag,Explanation,Example
+`-a`,"Mandatory option. This is followed by your group's name. Use `quota_info` to check the account/group name.",`-a $PI_UCID`
+`-q`,"Mandatory option. Specify the priority (QoS) under which the job runs.",`-q standard`
+`-j`,"Mandatory option. Specify whether you want a CPU or a GPU node.",`-j cpu`
+`-n`,"Optional. The total number of CPUs. Default is 1 core unless specified.",`-n 1`
+`-t`,"Optional. The amount of walltime to reserve for your job in hours. Default is 1 hour unless specified.",`-t 1`
+`-g`,"Optional. The total number of GPUs. Default is 1 GPU unless specified.",`-g 1`
+`-p`,"Optional. Specify the name of the partition. (Default: `general` for CPU jobs, `gpu` for GPU jobs).",`-p debug`
\ No newline at end of file
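
For example, the flags above might be combined as follows to start an interactive session. The wrapper's command name is assumed here to be `interactive`; use whatever script name the Wulver documentation gives, and replace `$PI_UCID` with your group's account name.

```bash
# Hypothetical invocations of the interactive-job wrapper; only the flags
# follow the table above -- the command name is an assumption.
interactive -a $PI_UCID -q standard -j cpu -n 4 -t 2          # 4 CPUs for 2 hours
interactive -a $PI_UCID -q standard -j gpu -g 1 -t 1 -p gpu   # 1 GPU for 1 hour
```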
diff --git a/docs/assets/tables/lochness.csv b/docs/assets/tables/lochness.csv
old mode 100755
new mode 100644
diff --git a/docs/assets/tables/module_wulver.csv b/docs/assets/tables/module_wulver_rhel8.csv
similarity index 51%
rename from docs/assets/tables/module_wulver.csv
rename to docs/assets/tables/module_wulver_rhel8.csv
index bf8310bf0..755d04db6 100644
--- a/docs/assets/tables/module_wulver.csv
+++ b/docs/assets/tables/module_wulver_rhel8.csv
@@ -1,67 +1,109 @@
Software,Version,Dependent Toolchain,Module Load Command
+zlib,1.2.13,-,`module load zlib/1.2.13`
zlib,1.2.12,-,`module load zlib/1.2.12`
zlib,1.2.11,-,`module load zlib/1.2.11`
+ffnvcodec,12.1.14.0,-,`module load ffnvcodec/12.1.14.0`
ffnvcodec,11.1.5.2,-,`module load ffnvcodec/11.1.5.2`
M4,1.4.18,-,`module load M4/1.4.18`
M4,1.4.19,-,`module load M4/1.4.19`
+Altair,2023,-,`module load Altair/2023`
CUDA,12.0.0,-,`module load CUDA/12.0.0`
+CUDA,12.4.0,-,`module load CUDA/12.4.0`
CUDA,11.4.1,-,`module load CUDA/11.4.1`
gettext,0.21,-,`module load gettext/0.21`
gettext,0.21.1,-,`module load gettext/0.21.1`
+gettext,0.22,-,`module load gettext/0.22`
+Firefox,44.0.2,-,`module load Firefox/44.0.2`
+cuDNN,8.8.0.121-CUDA-12.0.0,-,`module load cuDNN/8.8.0.121-CUDA-12.0.0`
NVHPC,23.1-CUDA-12.0.0,-,`module load NVHPC/23.1-CUDA-12.0.0`
+OptiX,6.5.0,-,`module load OptiX/6.5.0`
Autoconf,2.71,-,`module load Autoconf/2.71`
Bison,3.8.2,-,`module load Bison/3.8.2`
Autotools,20220317,-,`module load Autotools/20220317`
+apptainer,1.1.9,-,`module load apptainer/1.1.9`
intel,2022b,-,`module load intel/2022b`
+intel,2023b,-,`module load intel/2023b`
intel,2022a,-,`module load intel/2022a`
intel,2021b,-,`module load intel/2021b`
+ParaView,5.11.2-egl,-,`module load ParaView/5.11.2-egl`
+ParaView,5.11.2-osmesa,-,`module load ParaView/5.11.2-osmesa`
ParaView,5.11.0-osmesa,-,`module load ParaView/5.11.0-osmesa`
ParaView,5.11.0-egl,-,`module load ParaView/5.11.0-egl`
Mamba,4.14.0-0,-,`module load Mamba/4.14.0-0`
+Maven,3.6.3,-,`module load Maven/3.6.3`
PyCharm,2022.3.2,-,`module load PyCharm/2022.3.2`
+DMTCP,3.0.0,-,`module load DMTCP/3.0.0`
DMTCP,2.6.0,-,`module load DMTCP/2.6.0`
flex,2.6.4,-,`module load flex/2.6.4`
+Miniforge3,24.1.2-0,-,`module load Miniforge3/24.1.2-0`
foss,2022b,-,`module load foss/2022b`
+foss,2023b,-,`module load foss/2023b`
foss,2021b,-,`module load foss/2021b`
+EasyBuild,4.8.2,-,`module load EasyBuild/4.8.2`
+EasyBuild,4.9.0,-,`module load EasyBuild/4.9.0`
+EasyBuild,5.0.0,-,`module load EasyBuild/5.0.0`
EasyBuild,4.8.0,-,`module load EasyBuild/4.8.0`
+EasyBuild,4.9.1,-,`module load EasyBuild/4.9.1`
+Avogadro2,1.97.0-linux-x86_64,-,`module load Avogadro2/1.97.0-linux-x86_64`
imkl,2022.1.0,-,`module load imkl/2022.1.0`
imkl,2022.2.1,-,`module load imkl/2022.2.1`
imkl,2021.4.0,-,`module load imkl/2021.4.0`
+imkl,2023.2.0,-,`module load imkl/2023.2.0`
Automake,1.16.5,-,`module load Automake/1.16.5`
OpenSSL,1.1,-,`module load OpenSSL/1.1`
libtool,2.4.7,-,`module load libtool/2.4.7`
-Gaussian,16.A.03-AVX2,-,`module load Gaussian/16.A.03-AVX2`
+Gaussian,16.C.01-AVX2,-,`module load Gaussian/16.C.01-AVX2`
+Gurobi_license,license,-,`module load Gurobi_license/license`
Spack,0.17.2,-,`module load Spack/0.17.2`
Spack,0.20.0,-,`module load Spack/0.20.0`
binutils,2.36.1,-,`module load binutils/2.36.1`
binutils,2.37,-,`module load binutils/2.37`
+binutils,2.40,-,`module load binutils/2.40`
binutils,2.38,-,`module load binutils/2.38`
binutils,2.39,-,`module load binutils/2.39`
intel-compilers,2022.2.1,intel/2022b,`module load intel-compilers/2022.2.1`
intel-compilers,2021.4.0,intel/2021b,`module load intel-compilers/2021.4.0`
+tecplot,2024,-,`module load tecplot/2024`
Java,11.0.16,-,`module load Java/11.0.16`
Java,.modulerc,-,`module load Java/.modulerc`
gompi,2022b,-,`module load gompi/2022b`
+gompi,2023b,-,`module load gompi/2023b`
gompi,2021b,-,`module load gompi/2021b`
ncurses,6.3,-,`module load ncurses/6.3`
ncurses,6.4,-,`module load ncurses/6.4`
ncurses,6.2,-,`module load ncurses/6.2`
gfbf,2022b,-,`module load gfbf/2022b`
+gfbf,2023b,-,`module load gfbf/2023b`
p7zip,17.04,-,`module load p7zip/17.04`
libevent,2.1.12,-,`module load libevent/2.1.12`
+ABAQUS,2024-hotfix-2405,-,`module load ABAQUS/2024-hotfix-2405`
+git-lfs,3.2.0,-,`module load git-lfs/3.2.0`
GCC,12.2.0,foss/2022b,`module load GCC/12.2.0`
+GCC,13.2.0,foss/2023b,`module load GCC/13.2.0`
GCC,11.2.0,foss/2021b,`module load GCC/11.2.0`
+opendihu,2204,-,`module load opendihu/2204`
+opendihu,1.5,-,`module load opendihu/1.5`
+git,2.33.1,-,`module load git/2.33.1`
+rclone,1.57.0,-,`module load rclone/1.57.0`
tmux,3.3a,-,`module load tmux/3.3a`
iimpi,2022b,-,`module load iimpi/2022b`
+iimpi,2023b,-,`module load iimpi/2023b`
iimpi,2022a,-,`module load iimpi/2022a`
iimpi,2021b,-,`module load iimpi/2021b`
+MATLAB,2024a,-,`module load MATLAB/2024a`
MATLAB,2023a,-,`module load MATLAB/2023a`
ANSYS,2022,-,`module load ANSYS/2022`
+Anaconda3,2023.09-0,-,`module load Anaconda3/2023.09-0`
Anaconda3,5.3.0,-,`module load Anaconda3/5.3.0`
GCCcore,12.2.0,-,`module load GCCcore/12.2.0`
GCCcore,11.3.0,-,`module load GCCcore/11.3.0`
+GCCcore,13.2.0,-,`module load GCCcore/13.2.0`
GCCcore,11.2.0,-,`module load GCCcore/11.2.0`
+COMSOL,6.2,-,`module load COMSOL/6.2`
+COMSOL,5.6,-,`module load COMSOL/5.6`
tecplot360ex,2022R2,-,`module load tecplot360ex/2022R2`
+Go,1.17.3,-,`module load Go/1.17.3`
+Go,1.17.6,-,`module load Go/1.17.6`
pkgconf,1.8.0,-,`module load pkgconf/1.8.0`
matplotlib,3.4.3,intel/2021b,`module load intel/2021b matplotlib/3.4.3`
CP2K,8.2,intel/2021b,`module load intel/2021b CP2K/8.2`
@@ -69,13 +111,17 @@ PLUMED,2.8.0,intel/2021b,`module load intel/2021b PLUMED/2.8.0`
netCDF-Fortran,4.5.3,intel/2021b,`module load intel/2021b netCDF-Fortran/4.5.3`
netCDF,4.8.1,intel/2021b,`module load intel/2021b netCDF/4.8.1`
SciPy-bundle,2021.10,intel/2021b,`module load intel/2021b SciPy-bundle/2021.10`
+SCOTCH,6.1.2,intel/2021b,`module load intel/2021b SCOTCH/6.1.2`
HDF5,1.12.1,intel/2021b,`module load intel/2021b HDF5/1.12.1`
Libint,2.7.2-lmax-6-cp2k,intel/2021b,`module load intel/2021b Libint/2.7.2-lmax-6-cp2k`
SuperLU,5.3.0,intel/2021b,`module load intel/2021b SuperLU/5.3.0`
ELPA,2021.11.001,intel/2021b,`module load intel/2021b ELPA/2021.11.001`
imkl-FFTW,2021.4.0,intel/2021b,`module load intel/2021b imkl-FFTW/2021.4.0`
+HPL,2.3,intel/2022b,`module load intel/2022b HPL/2.3`
HDF5,1.14.0,intel/2022b,`module load intel/2022b HDF5/1.14.0`
imkl-FFTW,2022.2.1,intel/2022b,`module load intel/2022b imkl-FFTW/2022.2.1`
+VASP,6.4.2,intel/2022b,`module load intel/2022b VASP/6.4.2`
+VASP,6.5.0,intel/2022b,`module load intel/2022b VASP/6.5.0`
GROMACS,2021.5,foss/2021b,`module load foss/2021b GROMACS/2021.5`
GROMACS,2021.5-CUDA-11.4.1,foss/2021b,`module load foss/2021b GROMACS/2021.5-CUDA-11.4.1`
matplotlib,3.4.3,foss/2021b,`module load foss/2021b matplotlib/3.4.3`
@@ -87,9 +133,12 @@ LIGGGHTS-PUBLIC,3.8.0-kokkos-CUDA-11.4.1,foss/2021b,`module load foss/2021b LIGG
netCDF-Fortran,4.5.3,foss/2021b,`module load foss/2021b netCDF-Fortran/4.5.3`
Biopython,1.79,foss/2021b,`module load foss/2021b Biopython/1.79`
LAMMPS,23Jun2022-kokkos-CUDA-11.4.1,foss/2021b,`module load foss/2021b LAMMPS/23Jun2022-kokkos-CUDA-11.4.1`
+LAMMPS,23Jun2022-kokkos,foss/2021b,`module load foss/2021b LAMMPS/23Jun2022-kokkos`
magma,2.6.2-CUDA-11.4.1,foss/2021b,`module load foss/2021b magma/2.6.2-CUDA-11.4.1`
+SuiteSparse,5.10.1-METIS-5.1.0,foss/2021b,`module load foss/2021b SuiteSparse/5.10.1-METIS-5.1.0`
OVITO,3.7.11-basic,foss/2021b,`module load foss/2021b OVITO/3.7.11-basic`
netCDF,4.8.1,foss/2021b,`module load foss/2021b netCDF/4.8.1`
+MUMPS,5.4.1-metis,foss/2021b,`module load foss/2021b MUMPS/5.4.1-metis`
ParaView,5.9.1-mpi,foss/2021b,`module load foss/2021b ParaView/5.9.1-mpi`
Boost.MPI,1.77.0,foss/2021b,`module load foss/2021b Boost.MPI/1.77.0`
CGAL,4.14.3,foss/2021b,`module load foss/2021b CGAL/4.14.3`
@@ -100,27 +149,85 @@ RIP-MD,master,foss/2021b,`module load foss/2021b RIP-MD/master`
HDF5,1.12.1,foss/2021b,`module load foss/2021b HDF5/1.12.1`
AFNI,23.0.04-Python-3.9.6,foss/2021b,`module load foss/2021b AFNI/23.0.04-Python-3.9.6`
VTK,9.1.0,foss/2021b,`module load foss/2021b VTK/9.1.0`
+perm-md-count,main,foss/2021b,`module load foss/2021b perm-md-count/main`
+ParMETIS,4.0.3,foss/2021b,`module load foss/2021b ParMETIS/4.0.3`
networkx,2.6.3,foss/2021b,`module load foss/2021b networkx/2.6.3`
FFTW,3.3.10,foss/2021b,`module load foss/2021b FFTW/3.3.10`
ELPA,2021.05.001,foss/2021b,`module load foss/2021b ELPA/2021.05.001`
GDAL,3.3.2,foss/2021b,`module load foss/2021b GDAL/3.3.2`
ScaFaCoS,1.0.1,foss/2021b,`module load foss/2021b ScaFaCoS/1.0.1`
ScaLAPACK,2.1.0-fb,foss/2021b,`module load foss/2021b ScaLAPACK/2.1.0-fb`
+MDI,1.4.29,foss/2023b,`module load foss/2023b MDI/1.4.29`
+GROMACS,2024.1,foss/2023b,`module load foss/2023b GROMACS/2024.1`
+OpenFOAM,v2406,foss/2023b,`module load foss/2023b OpenFOAM/v2406`
+PLUMED,2.9.2,foss/2023b,`module load foss/2023b PLUMED/2.9.2`
+LAMMPS,2Aug2023_update2-kokkos,foss/2023b,`module load foss/2023b LAMMPS/2Aug2023_update2-kokkos`
+mpi4py,3.1.5,foss/2023b,`module load foss/2023b mpi4py/3.1.5`
+netCDF,4.9.2,foss/2023b,`module load foss/2023b netCDF/4.9.2`
+MUMPS,5.6.1-metis,foss/2023b,`module load foss/2023b MUMPS/5.6.1-metis`
+ParaView,5.12.0,foss/2023b,`module load foss/2023b ParaView/5.12.0`
+Clp,1.17.9,foss/2023b,`module load foss/2023b Clp/1.17.9`
+Cgl,0.60.8,foss/2023b,`module load foss/2023b Cgl/0.60.8`
+Chapel,2.3.0,foss/2023b,`module load foss/2023b Chapel/2.3.0`
+Chapel,2.1.0,foss/2023b,`module load foss/2023b Chapel/2.1.0`
+Armadillo,12.8.0,foss/2023b,`module load foss/2023b Armadillo/12.8.0`
+SCOTCH,7.0.4,foss/2023b,`module load foss/2023b SCOTCH/7.0.4`
+HDF5,1.14.3,foss/2023b,`module load foss/2023b HDF5/1.14.3`
+VTK,9.3.0,foss/2023b,`module load foss/2023b VTK/9.3.0`
+OR-Tools,9.9,foss/2023b,`module load foss/2023b OR-Tools/9.9`
+Cbc,2.10.11,foss/2023b,`module load foss/2023b Cbc/2.10.11`
+KaHIP,3.16,foss/2023b,`module load foss/2023b KaHIP/3.16`
+GDAL,3.9.0,foss/2023b,`module load foss/2023b GDAL/3.9.0`
+netCDF-C++4,4.3.1,foss/2023b,`module load foss/2023b netCDF-C++4/4.3.1`
+FFTW.MPI,3.3.10,foss/2023b,`module load foss/2023b FFTW.MPI/3.3.10`
+arpack-ng,3.9.0,foss/2023b,`module load foss/2023b arpack-ng/3.9.0`
+ScaFaCoS,1.0.4,foss/2023b,`module load foss/2023b ScaFaCoS/1.0.4`
+ScaLAPACK,2.2.0-fb,foss/2023b,`module load foss/2023b ScaLAPACK/2.2.0-fb`
GROMACS,2023.1-CUDA-12.0.0,foss/2022b,`module load foss/2022b GROMACS/2023.1-CUDA-12.0.0`
+GROMACS,2024.4-CUDA-12.4.0,foss/2022b,`module load foss/2022b GROMACS/2024.4-CUDA-12.4.0`
+R,4.2.2,foss/2022b,`module load foss/2022b R/4.2.2`
+OpenFOAM,v2306,foss/2022b,`module load foss/2022b OpenFOAM/v2306`
+ncview,2.1.8,foss/2022b,`module load foss/2022b ncview/2.1.8`
+CP2K,2023.1,foss/2022b,`module load foss/2022b CP2K/2023.1`
+PLUMED,2.9.0,foss/2022b,`module load foss/2022b PLUMED/2.9.0`
+MDAnalysis,2.4.2,foss/2022b,`module load foss/2022b MDAnalysis/2.4.2`
PnetCDF,1.12.3,foss/2022b,`module load foss/2022b PnetCDF/1.12.3`
netCDF-Fortran,4.6.0,foss/2022b,`module load foss/2022b netCDF-Fortran/4.6.0`
+Biopython,1.81,foss/2022b,`module load foss/2022b Biopython/1.81`
+PyTables,3.8.0,foss/2022b,`module load foss/2022b PyTables/3.8.0`
+SuiteSparse,5.13.0-METIS-5.1.0,foss/2022b,`module load foss/2022b SuiteSparse/5.13.0-METIS-5.1.0`
mpi4py,3.1.4,foss/2022b,`module load foss/2022b mpi4py/3.1.4`
netCDF,4.9.0,foss/2022b,`module load foss/2022b netCDF/4.9.0`
+MUMPS,5.6.1-metis,foss/2022b,`module load foss/2022b MUMPS/5.6.1-metis`
ParaView,5.11.0-mpi,foss/2022b,`module load foss/2022b ParaView/5.11.0-mpi`
+ParaView,5.11.1,foss/2022b,`module load foss/2022b ParaView/5.11.1`
+IQ-TREE,2.2.2.6,foss/2022b,`module load foss/2022b IQ-TREE/2.2.2.6`
+chapel,1.33.0,foss/2022b,`module load foss/2022b chapel/1.33.0`
SuperLU_DIST,8.1.0,foss/2022b,`module load foss/2022b SuperLU_DIST/8.1.0`
+SuperLU_DIST,8.1.2,foss/2022b,`module load foss/2022b SuperLU_DIST/8.1.2`
+Armadillo,11.4.3,foss/2022b,`module load foss/2022b Armadillo/11.4.3`
+HPL,2.3,foss/2022b,`module load foss/2022b HPL/2.3`
+SCOTCH,7.0.3,foss/2022b,`module load foss/2022b SCOTCH/7.0.3`
HDF5,1.14.0,foss/2022b,`module load foss/2022b HDF5/1.14.0`
+xtb,6.6.1,foss/2022b,`module load foss/2022b xtb/6.6.1`
+perm-md-count,main,foss/2022b,`module load foss/2022b perm-md-count/main`
ParMETIS,4.0.3,foss/2022b,`module load foss/2022b ParMETIS/4.0.3`
+PETSc,3.19.2,foss/2022b,`module load foss/2022b PETSc/3.19.2`
MPAS,8.0.1,foss/2022b,`module load foss/2022b MPAS/8.0.1`
SuperLU,5.3.0,foss/2022b,`module load foss/2022b SuperLU/5.3.0`
+KaHIP,3.14,foss/2022b,`module load foss/2022b KaHIP/3.14`
+ORCA,5.0.4,foss/2022b,`module load foss/2022b ORCA/5.0.4`
+VMD,1.9.4a57,foss/2022b,`module load foss/2022b VMD/1.9.4a57`
+arkouda,2023.11.15,foss/2022b,`module load foss/2022b arkouda/2023.11.15`
+GDAL,3.6.2,foss/2022b,`module load foss/2022b GDAL/3.6.2`
ParallelIO,2.5.10,foss/2022b,`module load foss/2022b ParallelIO/2.5.10`
Grace,5.1.25,foss/2022b,`module load foss/2022b Grace/5.1.25`
FFTW.MPI,3.3.10,foss/2022b,`module load foss/2022b FFTW.MPI/3.3.10`
+RASPA2,2.0.47,foss/2022b,`module load foss/2022b RASPA2/2.0.47`
+arpack-ng,3.8.0,foss/2022b,`module load foss/2022b arpack-ng/3.8.0`
+Valgrind,3.21.0,foss/2022b,`module load foss/2022b Valgrind/3.21.0`
ScaLAPACK,2.2.0-fb,foss/2022b,`module load foss/2022b ScaLAPACK/2.2.0-fb`
+Hypre,2.27.0,foss/2022b,`module load foss/2022b Hypre/2.27.0`
libxsmm,1.17,intel/2021b,`module load intel/2021b libxsmm/1.17`
impi,2021.4.0,intel/2021b,`module load intel/2021b`
GSL,2.7,intel/2021b,`module load intel/2021b GSL/2.7`
@@ -134,26 +241,68 @@ libxc,5.2.3,intel/2022b,`module load intel/2022b libxc/5.2.3`
Boost.Python,1.77.0,foss/2021b,`module load foss/2021b Boost.Python/1.77.0`
poppler,22.01.0,foss/2021b,`module load foss/2021b poppler/22.01.0`
AutoDock-GPU,1.5.3-CUDA-11.4.1,foss/2021b,`module load foss/2021b AutoDock-GPU/1.5.3-CUDA-11.4.1`
+pocl,1.8,foss/2021b,`module load foss/2021b pocl/1.8`
OpenMPI,4.1.1,foss/2021b,`module load foss/2021b`
OpenBLAS,0.3.18,foss/2021b,`module load foss/2021b OpenBLAS/0.3.18`
GSL,2.7,foss/2021b,`module load foss/2021b GSL/2.7`
BLIS,0.8.1,foss/2021b,`module load foss/2021b BLIS/0.8.1`
Boost,1.79.0,foss/2021b,`module load foss/2021b Boost/1.79.0`
Boost,1.77.0,foss/2021b,`module load foss/2021b Boost/1.77.0`
+POV-Ray,3.7.0.10,foss/2021b,`module load foss/2021b POV-Ray/3.7.0.10`
FlexiBLAS,3.0.4,foss/2021b,`module load foss/2021b FlexiBLAS/3.0.4`
+VirtualGL,3.0,foss/2021b,`module load foss/2021b VirtualGL/3.0`
libxc,5.1.6,foss/2021b,`module load foss/2021b libxc/5.1.6`
GEOS,3.9.1,foss/2021b,`module load foss/2021b GEOS/3.9.1`
+matplotlib,3.8.2,foss/2023b,`module load foss/2023b matplotlib/3.8.2`
+R,4.4.1,foss/2023b,`module load foss/2023b R/4.4.1`
+kim-api,2.3.0,foss/2023b,`module load foss/2023b kim-api/2.3.0`
+Osi,0.108.10,foss/2023b,`module load foss/2023b Osi/0.108.10`
+HiGHS,1.7.0,foss/2023b,`module load foss/2023b HiGHS/1.7.0`
+LEMON,1.3.1,foss/2023b,`module load foss/2023b LEMON/1.3.1`
+CoinUtils,2.11.10,foss/2023b,`module load foss/2023b CoinUtils/2.11.10`
+SciPy-bundle,2023.11,foss/2023b,`module load foss/2023b SciPy-bundle/2023.11`
+OpenMPI,4.1.6,foss/2023b,`module load foss/2023b`
+OpenBLAS,0.3.24,foss/2023b,`module load foss/2023b OpenBLAS/0.3.24`
+GSL,2.7,foss/2023b,`module load foss/2023b GSL/2.7`
+networkx,3.2.1,foss/2023b,`module load foss/2023b networkx/3.2.1`
+BLIS,0.9.0,foss/2023b,`module load foss/2023b BLIS/0.9.0`
+Boost,1.83.0,foss/2023b,`module load foss/2023b Boost/1.83.0`
+FFTW,3.3.10,foss/2023b,`module load foss/2023b FFTW/3.3.10`
+ncdu,1.20,foss/2023b,`module load foss/2023b ncdu/1.20`
+FlexiBLAS,3.3.1,foss/2023b,`module load foss/2023b FlexiBLAS/3.3.1`
+GEOS,3.12.1,foss/2023b,`module load foss/2023b GEOS/3.12.1`
+matplotlib,3.7.0,foss/2022b,`module load foss/2022b matplotlib/3.7.0`
+libxsmm,1.17,foss/2022b,`module load foss/2022b libxsmm/1.17`
+packmol,20.14.3,foss/2022b,`module load foss/2022b packmol/20.14.3`
+pocl,4.0,foss/2022b,`module load foss/2022b pocl/4.0`
+NTL,11.5.1,foss/2022b,`module load foss/2022b NTL/11.5.1`
+Arb,2.23.0,foss/2022b,`module load foss/2022b Arb/2.23.0`
SciPy-bundle,2023.02,foss/2022b,`module load foss/2022b SciPy-bundle/2023.02`
+Libint,2.7.2-lmax-6-cp2k,foss/2022b,`module load foss/2022b Libint/2.7.2-lmax-6-cp2k`
OpenMPI,4.1.4,foss/2022b,`module load foss/2022b`
+xtb,6.6.1,foss/2022b,`module load foss/2022b xtb/6.6.1`
OpenBLAS,0.3.21,foss/2022b,`module load foss/2022b OpenBLAS/0.3.21`
+GSL,2.7,foss/2022b,`module load foss/2022b GSL/2.7`
networkx,2.8.8,foss/2022b,`module load foss/2022b networkx/2.8.8`
+networkx,3.0,foss/2022b,`module load foss/2022b networkx/3.0`
BLIS,0.9.0,foss/2022b,`module load foss/2022b BLIS/0.9.0`
+CREST,2.12,foss/2022b,`module load foss/2022b CREST/2.12`
+Arrow,11.0.0,foss/2022b,`module load foss/2022b Arrow/11.0.0`
+Boost,1.81.0,foss/2022b,`module load foss/2022b Boost/1.81.0`
FFTW,3.3.10,foss/2022b,`module load foss/2022b FFTW/3.3.10`
+POV-Ray,3.7.0.10,foss/2022b,`module load foss/2022b POV-Ray/3.7.0.10`
+silo,4.10.2,foss/2022b,`module load foss/2022b silo/4.10.2`
FlexiBLAS,3.2.1,foss/2022b,`module load foss/2022b FlexiBLAS/3.2.1`
+VirtualGL,3.1,foss/2022b,`module load foss/2022b VirtualGL/3.1`
+libxc,6.1.0,foss/2022b,`module load foss/2022b libxc/6.1.0`
+FLINT,2.9.0,foss/2022b,`module load foss/2022b FLINT/2.9.0`
+FLINT,3.0.1,foss/2022b,`module load foss/2022b FLINT/3.0.1`
+GEOS,3.11.1,foss/2022b,`module load foss/2022b GEOS/3.11.1`
libsndfile,1.0.31,foss/2021b,`module load foss/2021b libsndfile/1.0.31`
nodejs,14.17.6,foss/2021b,`module load foss/2021b nodejs/14.17.6`
Rust,1.54.0,foss/2021b,`module load foss/2021b Rust/1.54.0`
zlib,1.2.11,foss/2021b,`module load foss/2021b zlib/1.2.11`
+OpenEXR,3.1.1,foss/2021b,`module load foss/2021b OpenEXR/3.1.1`
jbigkit,2.1,foss/2021b,`module load foss/2021b jbigkit/2.1`
libgd,2.3.3,foss/2021b,`module load foss/2021b libgd/2.3.3`
lz4,1.9.3,foss/2021b,`module load foss/2021b lz4/1.9.3`
@@ -200,6 +349,7 @@ libtirpc,1.3.2,foss/2021b,`module load foss/2021b libtirpc/1.3.2`
LibTIFF,4.3.0,foss/2021b,`module load foss/2021b LibTIFF/4.3.0`
gperf,3.1,foss/2021b,`module load foss/2021b gperf/3.1`
Tcl,8.6.11,foss/2021b,`module load foss/2021b Tcl/8.6.11`
+ACTC,1.1,foss/2021b,`module load foss/2021b ACTC/1.1`
x265,3.5,foss/2021b,`module load foss/2021b x265/3.5`
libdrm,2.4.107,foss/2021b,`module load foss/2021b libdrm/2.4.107`
Tkinter,3.9.6,foss/2021b,`module load foss/2021b Tkinter/3.9.6`
@@ -248,6 +398,7 @@ Automake,1.16.4,foss/2021b,`module load foss/2021b Automake/1.16.4`
libGLU,9.0.2,foss/2021b,`module load foss/2021b libGLU/9.0.2`
HarfBuzz,2.8.2,foss/2021b,`module load foss/2021b HarfBuzz/2.8.2`
GMP,6.2.1,foss/2021b,`module load foss/2021b GMP/6.2.1`
+SDL2,2.0.20,foss/2021b,`module load foss/2021b SDL2/2.0.20`
libtool,2.4.6,foss/2021b,`module load foss/2021b libtool/2.4.6`
LLVM,12.0.1,foss/2021b,`module load foss/2021b LLVM/12.0.1`
libjpeg-turbo,2.0.6,foss/2021b,`module load foss/2021b libjpeg-turbo/2.0.6`
@@ -261,6 +412,7 @@ xorg-macros,1.19.3,foss/2021b,`module load foss/2021b xorg-macros/1.19.3`
binutils,2.37,foss/2021b,`module load foss/2021b binutils/2.37`
Brotli,1.0.9,foss/2021b,`module load foss/2021b Brotli/1.0.9`
PCRE2,10.37,foss/2021b,`module load foss/2021b PCRE2/10.37`
+tbb,2020.3,foss/2021b,`module load foss/2021b tbb/2020.3`
libpciaccess,0.16,foss/2021b,`module load foss/2021b libpciaccess/0.16`
FFmpeg,4.3.2,foss/2021b,`module load foss/2021b FFmpeg/4.3.2`
libcerf,1.17,foss/2021b,`module load foss/2021b libcerf/1.17`
@@ -301,78 +453,304 @@ DB,18.1.40,foss/2021b,`module load foss/2021b DB/18.1.40`
ImageMagick,7.1.0-4,foss/2021b,`module load foss/2021b ImageMagick/7.1.0-4`
Tk,8.6.11,foss/2021b,`module load foss/2021b Tk/8.6.11`
pkgconf,1.8.0,foss/2021b,`module load foss/2021b pkgconf/1.8.0`
+nodejs,20.9.0,foss/2023b,`module load foss/2023b nodejs/20.9.0`
+Rust,1.73.0,foss/2023b,`module load foss/2023b Rust/1.73.0`
+zlib,1.2.13,foss/2023b,`module load foss/2023b zlib/1.2.13`
+OpenEXR,3.2.0,foss/2023b,`module load foss/2023b OpenEXR/3.2.0`
+jbigkit,2.1,foss/2023b,`module load foss/2023b jbigkit/2.1`
+Qt6,6.6.3,foss/2023b,`module load foss/2023b Qt6/6.6.3`
+klayout,0.29.1,foss/2023b,`module load foss/2023b klayout/0.29.1`
+libgd,2.3.3,foss/2023b,`module load foss/2023b libgd/2.3.3`
+lz4,1.9.4,foss/2023b,`module load foss/2023b lz4/1.9.4`
+SCons,4.6.0,foss/2023b,`module load foss/2023b SCons/4.6.0`
+libarchive,3.7.2,foss/2023b,`module load foss/2023b libarchive/3.7.2`
+virtualenv,20.24.6,foss/2023b,`module load foss/2023b virtualenv/20.24.6`
+Ninja,1.11.1,foss/2023b,`module load foss/2023b Ninja/1.11.1`
+jupyter-server,2.14.0,foss/2023b,`module load foss/2023b jupyter-server/2.14.0`
+Perl,5.38.0,foss/2023b,`module load foss/2023b Perl/5.38.0`
+M4,1.4.19,foss/2023b,`module load foss/2023b M4/1.4.19`
+GLib,2.78.1,foss/2023b,`module load foss/2023b GLib/2.78.1`
+ICU,74.1,foss/2023b,`module load foss/2023b ICU/74.1`
+giflib,5.2.1,foss/2023b,`module load foss/2023b giflib/5.2.1`
+libsodium,1.0.19,foss/2023b,`module load foss/2023b libsodium/1.0.19`
+groff,1.23.0,foss/2023b,`module load foss/2023b groff/1.23.0`
+Meson,1.2.3,foss/2023b,`module load foss/2023b Meson/1.2.3`
+IPython,8.17.2,foss/2023b,`module load foss/2023b IPython/8.17.2`
+gettext,0.22,foss/2023b,`module load foss/2023b gettext/0.22`
+JupyterNotebook,7.2.0,foss/2023b,`module load foss/2023b JupyterNotebook/7.2.0`
+LittleCMS,2.15,foss/2023b,`module load foss/2023b LittleCMS/2.15`
+scikit-build,0.17.6,foss/2023b,`module load foss/2023b scikit-build/0.17.6`
+Szip,2.1.1,foss/2023b,`module load foss/2023b Szip/2.1.1`
+libffi,3.4.4,foss/2023b,`module load foss/2023b libffi/3.4.4`
+PCRE,8.45,foss/2023b,`module load foss/2023b PCRE/8.45`
+Eigen,3.4.0,foss/2023b,`module load foss/2023b Eigen/3.4.0`
+PyYAML,6.0.1,foss/2023b,`module load foss/2023b PyYAML/6.0.1`
+GObject-Introspection,1.78.1,foss/2023b,`module load foss/2023b GObject-Introspection/1.78.1`
+LAME,3.100,foss/2023b,`module load foss/2023b LAME/3.100`
+snappy,1.1.10,foss/2023b,`module load foss/2023b snappy/1.1.10`
+NSS,3.94,foss/2023b,`module load foss/2023b NSS/3.94`
+expat,2.5.0,foss/2023b,`module load foss/2023b expat/2.5.0`
+Abseil,20240116.1,foss/2023b,`module load foss/2023b Abseil/20240116.1`
+jedi,0.19.1,foss/2023b,`module load foss/2023b jedi/0.19.1`
+Autoconf,2.71,foss/2023b,`module load foss/2023b Autoconf/2.71`
+hatchling,1.18.0,foss/2023b,`module load foss/2023b hatchling/1.18.0`
+Xvfb,21.1.9,foss/2023b,`module load foss/2023b Xvfb/21.1.9`
+xxd,9.1.0307,foss/2023b,`module load foss/2023b xxd/9.1.0307`
+flit,3.9.0,foss/2023b,`module load foss/2023b flit/3.9.0`
+XZ,5.4.4,foss/2023b,`module load foss/2023b XZ/5.4.4`
+cppy,1.2.1,foss/2023b,`module load foss/2023b cppy/1.2.1`
+X11,20231019,foss/2023b,`module load foss/2023b X11/20231019`
+Mesa,23.1.9,foss/2023b,`module load foss/2023b Mesa/23.1.9`
+GLPK,5.0,foss/2023b,`module load foss/2023b GLPK/5.0`
+archspec,0.2.2,foss/2023b,`module load foss/2023b archspec/0.2.2`
+Bison,3.8.2,foss/2023b,`module load foss/2023b Bison/3.8.2`
+setuptools-rust,1.8.0,foss/2023b,`module load foss/2023b setuptools-rust/1.8.0`
+Lua,5.4.6,foss/2023b,`module load foss/2023b Lua/5.4.6`
+Autotools,20220317,foss/2023b,`module load foss/2023b Autotools/20220317`
+ZeroMQ,4.3.5,foss/2023b,`module load foss/2023b ZeroMQ/4.3.5`
+re2c,3.1,foss/2023b,`module load foss/2023b re2c/3.1`
+libtirpc,1.3.4,foss/2023b,`module load foss/2023b libtirpc/1.3.4`
+Wayland,1.22.0,foss/2023b,`module load foss/2023b Wayland/1.22.0`
+LibTIFF,4.6.0,foss/2023b,`module load foss/2023b LibTIFF/4.6.0`
+gperf,3.1,foss/2023b,`module load foss/2023b gperf/3.1`
+Tcl,8.6.13,foss/2023b,`module load foss/2023b Tcl/8.6.13`
+patchelf,0.18.0,foss/2023b,`module load foss/2023b patchelf/0.18.0`
+x265,3.5,foss/2023b,`module load foss/2023b x265/3.5`
+libdrm,2.4.117,foss/2023b,`module load foss/2023b libdrm/2.4.117`
+Tkinter,3.11.5,foss/2023b,`module load foss/2023b Tkinter/3.11.5`
+gzip,1.13,foss/2023b,`module load foss/2023b gzip/1.13`
+Perl-bundle-CPAN,5.38.0,foss/2023b,`module load foss/2023b Perl-bundle-CPAN/5.38.0`
+libxslt,1.1.38,foss/2023b,`module load foss/2023b libxslt/1.1.38`
+SQLite,3.43.1,foss/2023b,`module load foss/2023b SQLite/3.43.1`
+cryptography,41.0.5,foss/2023b,`module load foss/2023b cryptography/41.0.5`
+libreadline,8.2,foss/2023b,`module load foss/2023b libreadline/8.2`
+numactl,2.0.16,foss/2023b,`module load foss/2023b numactl/2.0.16`
+intltool,0.51.0,foss/2023b,`module load foss/2023b intltool/0.51.0`
+CGAL,5.6.1,foss/2023b,`module load foss/2023b CGAL/5.6.1`
+double-conversion,3.3.0,foss/2023b,`module load foss/2023b double-conversion/3.3.0`
+libunwind,1.6.2,foss/2023b,`module load foss/2023b libunwind/1.6.2`
+Python,3.11.5,foss/2023b,`module load foss/2023b Python/3.11.5`
+Brunsli,0.1,foss/2023b,`module load foss/2023b Brunsli/0.1`
+help2man,1.49.3,foss/2023b,`module load foss/2023b help2man/1.49.3`
+JasPer,4.0.0,foss/2023b,`module load foss/2023b JasPer/4.0.0`
+matlab-proxy,0.24.2,foss/2023b,`module load foss/2023b matlab-proxy/0.24.2`
+flex,2.6.4,foss/2023b,`module load foss/2023b flex/2.6.4`
+FriBidi,1.0.13,foss/2023b,`module load foss/2023b FriBidi/1.0.13`
+hatch-jupyter-builder,0.9.1,foss/2023b,`module load foss/2023b hatch-jupyter-builder/0.9.1`
+UCC,1.2.0,foss/2023b,`module load foss/2023b UCC/1.2.0`
+json-c,0.17,foss/2023b,`module load foss/2023b json-c/0.17`
+cURL,8.3.0,foss/2023b,`module load foss/2023b cURL/8.3.0`
+util-linux,2.39,foss/2023b,`module load foss/2023b util-linux/2.39`
+libfabric,1.19.0,foss/2023b,`module load foss/2023b libfabric/1.19.0`
+nettle,3.9.1,foss/2023b,`module load foss/2023b nettle/3.9.1`
+libiconv,1.17,foss/2023b,`module load foss/2023b libiconv/1.17`
+aiohttp,3.9.5,foss/2023b,`module load foss/2023b aiohttp/3.9.5`
+MPFR,4.2.1,foss/2023b,`module load foss/2023b MPFR/4.2.1`
+Xerces-C++,3.2.5,foss/2023b,`module load foss/2023b Xerces-C++/3.2.5`
+SWIG,4.1.1,foss/2023b,`module load foss/2023b SWIG/4.1.1`
+CMake,3.27.6,foss/2023b,`module load foss/2023b CMake/3.27.6`
+CMake,3.29.3,foss/2023b,`module load foss/2023b CMake/3.29.3`
+fontconfig,2.14.2,foss/2023b,`module load foss/2023b fontconfig/2.14.2`
+libgit2,1.7.2,foss/2023b,`module load foss/2023b libgit2/1.7.2`
+Mako,1.2.4,foss/2023b,`module load foss/2023b Mako/1.2.4`
+libglvnd,1.7.0,foss/2023b,`module load foss/2023b libglvnd/1.7.0`
+freetype,2.13.2,foss/2023b,`module load foss/2023b freetype/2.13.2`
+lxml,4.9.3,foss/2023b,`module load foss/2023b lxml/4.9.3`
+libgeotiff,1.7.3,foss/2023b,`module load foss/2023b libgeotiff/1.7.3`
+CFITSIO,4.3.1,foss/2023b,`module load foss/2023b CFITSIO/4.3.1`
+bzip2,1.0.8,foss/2023b,`module load foss/2023b bzip2/1.0.8`
+zstd,1.5.5,foss/2023b,`module load foss/2023b zstd/1.5.5`
+cairo,1.18.0,foss/2023b,`module load foss/2023b cairo/1.18.0`
+Automake,1.16.5,foss/2023b,`module load foss/2023b Automake/1.16.5`
+libGLU,9.0.3,foss/2023b,`module load foss/2023b libGLU/9.0.3`
+HarfBuzz,8.2.2,foss/2023b,`module load foss/2023b HarfBuzz/8.2.2`
+GMP,6.3.0,foss/2023b,`module load foss/2023b GMP/6.3.0`
+BeautifulSoup,4.12.2,foss/2023b,`module load foss/2023b BeautifulSoup/4.12.2`
+SDL2,2.28.5,foss/2023b,`module load foss/2023b SDL2/2.28.5`
+libtool,2.4.7,foss/2023b,`module load foss/2023b libtool/2.4.7`
+assimp,5.3.1,foss/2023b,`module load foss/2023b assimp/5.3.1`
+LLVM,16.0.6,foss/2023b,`module load foss/2023b LLVM/16.0.6`
+protobuf,25.3,foss/2023b,`module load foss/2023b protobuf/25.3`
+libjpeg-turbo,3.0.1,foss/2023b,`module load foss/2023b libjpeg-turbo/3.0.1`
+NSPR,4.35,foss/2023b,`module load foss/2023b NSPR/4.35`
+Voro++,0.4.6,foss/2023b,`module load foss/2023b Voro++/0.4.6`
+x264,20231019,foss/2023b,`module load foss/2023b x264/20231019`
+DBus,1.15.8,foss/2023b,`module load foss/2023b DBus/1.15.8`
+pybind11,2.11.1,foss/2023b,`module load foss/2023b pybind11/2.11.1`
+xorg-macros,1.20.0,foss/2023b,`module load foss/2023b xorg-macros/1.20.0`
+cffi,1.15.1,foss/2023b,`module load foss/2023b cffi/1.15.1`
+binutils,2.40,foss/2023b,`module load foss/2023b binutils/2.40`
+Brotli,1.1.0,foss/2023b,`module load foss/2023b Brotli/1.1.0`
+PCRE2,10.42,foss/2023b,`module load foss/2023b PCRE2/10.42`
+Imath,3.1.9,foss/2023b,`module load foss/2023b Imath/3.1.9`
+tbb,2021.13.0,foss/2023b,`module load foss/2023b tbb/2021.13.0`
+libpciaccess,0.17,foss/2023b,`module load foss/2023b libpciaccess/0.17`
+FFmpeg,6.0,foss/2023b,`module load foss/2023b FFmpeg/6.0`
+libcerf,2.4,foss/2023b,`module load foss/2023b libcerf/2.4`
+make,4.4.1,foss/2023b,`module load foss/2023b make/4.4.1`
+Pillow,10.2.0,foss/2023b,`module load foss/2023b Pillow/10.2.0`
+pixman,0.42.2,foss/2023b,`module load foss/2023b pixman/0.42.2`
+googletest,1.14.0,foss/2023b,`module load foss/2023b googletest/1.14.0`
+METIS,5.1.0,foss/2023b,`module load foss/2023b METIS/5.1.0`
+UCX,1.15.0,foss/2023b,`module load foss/2023b UCX/1.15.0`
+PMIx,4.2.6,foss/2023b,`module load foss/2023b PMIx/4.2.6`
+ncurses,6.4,foss/2023b,`module load foss/2023b ncurses/6.4`
+Perl-bundle-njit,5.38.0,foss/2023b,`module load foss/2023b Perl-bundle-njit/5.38.0`
+pkgconfig,1.5.5-python,foss/2023b,`module load foss/2023b pkgconfig/1.5.5-python`
+makeinfo,7.1,foss/2023b,`module load foss/2023b makeinfo/7.1`
+libpng,1.6.40,foss/2023b,`module load foss/2023b libpng/1.6.40`
+Pango,1.51.0,foss/2023b,`module load foss/2023b Pango/1.51.0`
+libevent,2.1.12,foss/2023b,`module load foss/2023b libevent/2.1.12`
+Yasm,1.3.0,foss/2023b,`module load foss/2023b Yasm/1.3.0`
+Doxygen,1.9.8,foss/2023b,`module load foss/2023b Doxygen/1.9.8`
+gnuplot,6.0.1,foss/2023b,`module load foss/2023b gnuplot/6.0.1`
+hwloc,2.9.2,foss/2023b,`module load foss/2023b hwloc/2.9.2`
+HDF,4.2.16-2,foss/2023b,`module load foss/2023b HDF/4.2.16-2`
+libxml2,2.11.5,foss/2023b,`module load foss/2023b libxml2/2.11.5`
+NASM,2.16.01,foss/2023b,`module load foss/2023b NASM/2.16.01`
+git,2.42.0,foss/2023b,`module load foss/2023b git/2.42.0`
+PyZMQ,25.1.2,foss/2023b,`module load foss/2023b PyZMQ/25.1.2`
+UnZip,6.0,foss/2023b,`module load foss/2023b UnZip/6.0`
+Qt5,5.15.13,foss/2023b,`module load foss/2023b Qt5/5.15.13`
+tornado,6.4,foss/2023b,`module load foss/2023b tornado/6.4`
+graphite2,1.3.14,foss/2023b,`module load foss/2023b graphite2/1.3.14`
+PROJ,9.3.1,foss/2023b,`module load foss/2023b PROJ/9.3.1`
+spdlog,1.12.0,foss/2023b,`module load foss/2023b spdlog/1.12.0`
+poetry,1.6.1,foss/2023b,`module load foss/2023b poetry/1.6.1`
+PyInstaller,6.3.0,foss/2023b,`module load foss/2023b PyInstaller/6.3.0`
+LERC,4.0.0,foss/2023b,`module load foss/2023b LERC/4.0.0`
+hypothesis,6.90.0,foss/2023b,`module load foss/2023b hypothesis/6.90.0`
+OpenJPEG,2.5.0,foss/2023b,`module load foss/2023b OpenJPEG/2.5.0`
+Catch2,2.13.9,foss/2023b,`module load foss/2023b Catch2/2.13.9`
+RE2,2024-03-01,foss/2023b,`module load foss/2023b RE2/2024-03-01`
+JupyterLab,4.2.0,foss/2023b,`module load foss/2023b JupyterLab/4.2.0`
+maturin,1.3.1,foss/2023b,`module load foss/2023b maturin/1.3.1`
+meson-python,0.15.0,foss/2023b,`module load foss/2023b meson-python/0.15.0`
+protobuf-python,4.25.3,foss/2023b,`module load foss/2023b protobuf-python/4.25.3`
+Qhull,2020.2,foss/2023b,`module load foss/2023b Qhull/2020.2`
+Python-bundle-PyPI,2023.10,foss/2023b,`module load foss/2023b Python-bundle-PyPI/2023.10`
+OpenPGM,5.2.122,foss/2023b,`module load foss/2023b OpenPGM/5.2.122`
+libwebp,1.3.2,foss/2023b,`module load foss/2023b libwebp/1.3.2`
+Tk,8.6.13,foss/2023b,`module load foss/2023b Tk/8.6.13`
+nlohmann_json,3.11.3,foss/2023b,`module load foss/2023b nlohmann_json/3.11.3`
+libdeflate,1.19,foss/2023b,`module load foss/2023b libdeflate/1.19`
+pkgconf,2.0.3,foss/2023b,`module load foss/2023b pkgconf/2.0.3`
+libyaml,0.2.5,foss/2023b,`module load foss/2023b libyaml/0.2.5`
+Gurobi,11.0.1,foss/2023b,`module load foss/2023b Gurobi/11.0.1`
+libsndfile,1.2.0,foss/2022b,`module load foss/2022b libsndfile/1.2.0`
nodejs,18.12.1,foss/2022b,`module load foss/2022b nodejs/18.12.1`
Rust,1.65.0,foss/2022b,`module load foss/2022b Rust/1.65.0`
zlib,1.2.12,foss/2022b,`module load foss/2022b zlib/1.2.12`
+OpenEXR,3.1.5,foss/2022b,`module load foss/2022b OpenEXR/3.1.5`
jbigkit,2.1,foss/2022b,`module load foss/2022b jbigkit/2.1`
libgd,2.3.3,foss/2022b,`module load foss/2022b libgd/2.3.3`
lz4,1.9.4,foss/2022b,`module load foss/2022b lz4/1.9.4`
motif,2.3.8,foss/2022b,`module load foss/2022b motif/2.3.8`
libarchive,3.6.1,foss/2022b,`module load foss/2022b libarchive/3.6.1`
+libidn2,2.3.2,foss/2022b,`module load foss/2022b libidn2/2.3.2`
Ninja,1.11.1,foss/2022b,`module load foss/2022b Ninja/1.11.1`
Perl,5.36.0-minimal,foss/2022b,`module load foss/2022b Perl/5.36.0-minimal`
Perl,5.36.0,foss/2022b,`module load foss/2022b Perl/5.36.0`
M4,1.4.19,foss/2022b,`module load foss/2022b M4/1.4.19`
GLib,2.75.0,foss/2022b,`module load foss/2022b GLib/2.75.0`
ICU,72.1,foss/2022b,`module load foss/2022b ICU/72.1`
+giflib,5.2.1,foss/2022b,`module load foss/2022b giflib/5.2.1`
+libsodium,1.0.18,foss/2022b,`module load foss/2022b libsodium/1.0.18`
groff,1.22.4,foss/2022b,`module load foss/2022b groff/1.22.4`
+py-cpuinfo,9.0.0,foss/2022b,`module load foss/2022b py-cpuinfo/9.0.0`
Meson,0.64.0,foss/2022b,`module load foss/2022b Meson/0.64.0`
at-spi2-core,2.46.0,foss/2022b,`module load foss/2022b at-spi2-core/2.46.0`
+UDUNITS,2.2.28,foss/2022b,`module load foss/2022b UDUNITS/2.2.28`
+Z3,4.12.2,foss/2022b,`module load foss/2022b Z3/4.12.2`
gettext,0.21.1,foss/2022b,`module load foss/2022b gettext/0.21.1`
+libvorbis,1.3.7,foss/2022b,`module load foss/2022b libvorbis/1.3.7`
+LittleCMS,2.14,foss/2022b,`module load foss/2022b LittleCMS/2.14`
scikit-build,0.17.2,foss/2022b,`module load foss/2022b scikit-build/0.17.2`
wxWidgets,3.2.2.1,foss/2022b,`module load foss/2022b wxWidgets/3.2.2.1`
Szip,2.1.1,foss/2022b,`module load foss/2022b Szip/2.1.1`
libffi,3.4.4,foss/2022b,`module load foss/2022b libffi/3.4.4`
+PCRE,8.45,foss/2022b,`module load foss/2022b PCRE/8.45`
+LZO,2.10,foss/2022b,`module load foss/2022b LZO/2.10`
Eigen,3.4.0,foss/2022b,`module load foss/2022b Eigen/3.4.0`
GObject-Introspection,1.74.0,foss/2022b,`module load foss/2022b GObject-Introspection/1.74.0`
LAME,3.100,foss/2022b,`module load foss/2022b LAME/3.100`
snappy,1.1.9,foss/2022b,`module load foss/2022b snappy/1.1.9`
NSS,3.85,foss/2022b,`module load foss/2022b NSS/3.85`
+jemalloc,5.3.0,foss/2022b,`module load foss/2022b jemalloc/5.3.0`
expat,2.4.9,foss/2022b,`module load foss/2022b expat/2.4.9`
Autoconf,2.71,foss/2022b,`module load foss/2022b Autoconf/2.71`
+Xvfb,21.1.6,foss/2022b,`module load foss/2022b Xvfb/21.1.6`
+xxd,9.0.1696,foss/2022b,`module load foss/2022b xxd/9.0.1696`
GStreamer,1.22.1,foss/2022b,`module load foss/2022b GStreamer/1.22.1`
+glew,2.2.0-osmesa,foss/2022b,`module load foss/2022b glew/2.2.0-osmesa`
+glew,2.2.0-egl,foss/2022b,`module load foss/2022b glew/2.2.0-egl`
+utf8proc,2.8.0,foss/2022b,`module load foss/2022b utf8proc/2.8.0`
XZ,5.2.7,foss/2022b,`module load foss/2022b XZ/5.2.7`
cppy,1.2.1,foss/2022b,`module load foss/2022b cppy/1.2.1`
X11,20221110,foss/2022b,`module load foss/2022b X11/20221110`
Mesa,22.2.4,foss/2022b,`module load foss/2022b Mesa/22.2.4`
+GLPK,5.0,foss/2022b,`module load foss/2022b GLPK/5.0`
Bison,3.8.2,foss/2022b,`module load foss/2022b Bison/3.8.2`
+LSD2,2.4.1,foss/2022b,`module load foss/2022b LSD2/2.4.1`
Lua,5.4.4,foss/2022b,`module load foss/2022b Lua/5.4.4`
Autotools,20220317,foss/2022b,`module load foss/2022b Autotools/20220317`
+ZeroMQ,4.3.4,foss/2022b,`module load foss/2022b ZeroMQ/4.3.4`
re2c,3.0,foss/2022b,`module load foss/2022b re2c/3.0`
libtirpc,1.3.3,foss/2022b,`module load foss/2022b libtirpc/1.3.3`
LibTIFF,4.4.0,foss/2022b,`module load foss/2022b LibTIFF/4.4.0`
gperf,3.1,foss/2022b,`module load foss/2022b gperf/3.1`
Tcl,8.6.12,foss/2022b,`module load foss/2022b Tcl/8.6.12`
at-spi2-atk,2.38.0,foss/2022b,`module load foss/2022b at-spi2-atk/2.38.0`
+ACTC,1.1,foss/2022b,`module load foss/2022b ACTC/1.1`
x265,3.5,foss/2022b,`module load foss/2022b x265/3.5`
libdrm,2.4.114,foss/2022b,`module load foss/2022b libdrm/2.4.114`
+Tkinter,3.10.8,foss/2022b,`module load foss/2022b Tkinter/3.10.8`
gzip,1.12,foss/2022b,`module load foss/2022b gzip/1.12`
+libxslt,1.1.37,foss/2022b,`module load foss/2022b libxslt/1.1.37`
SQLite,3.39.4,foss/2022b,`module load foss/2022b SQLite/3.39.4`
Graphene,1.10.8,foss/2022b,`module load foss/2022b Graphene/1.10.8`
libreadline,8.2,foss/2022b,`module load foss/2022b libreadline/8.2`
numactl,2.0.16,foss/2022b,`module load foss/2022b numactl/2.0.16`
+libvori,220621,foss/2022b,`module load foss/2022b libvori/220621`
intltool,0.51.0,foss/2022b,`module load foss/2022b intltool/0.51.0`
+Blosc,1.21.3,foss/2022b,`module load foss/2022b Blosc/1.21.3`
+CGAL,5.5.2,foss/2022b,`module load foss/2022b CGAL/5.5.2`
double-conversion,3.2.1,foss/2022b,`module load foss/2022b double-conversion/3.2.1`
libunwind,1.6.2,foss/2022b,`module load foss/2022b libunwind/1.6.2`
Python,3.10.8-bare,foss/2022b,`module load foss/2022b Python/3.10.8-bare`
Python,2.7.18-bare,foss/2022b,`module load foss/2022b Python/2.7.18-bare`
Python,3.10.8,foss/2022b,`module load foss/2022b Python/3.10.8`
+Brunsli,0.1,foss/2022b,`module load foss/2022b Brunsli/0.1`
+Ghostscript,10.0.0,foss/2022b,`module load foss/2022b Ghostscript/10.0.0`
help2man,1.49.2,foss/2022b,`module load foss/2022b help2man/1.49.2`
GDRCopy,2.3,foss/2022b,`module load foss/2022b GDRCopy/2.3`
JasPer,4.0.0,foss/2022b,`module load foss/2022b JasPer/4.0.0`
elfutils,0.189,foss/2022b,`module load foss/2022b elfutils/0.189`
+RapidJSON,1.1.0,foss/2022b,`module load foss/2022b RapidJSON/1.1.0`
flex,2.6.4,foss/2022b,`module load foss/2022b flex/2.6.4`
FriBidi,1.0.12,foss/2022b,`module load foss/2022b FriBidi/1.0.12`
UCC,1.1.0,foss/2022b,`module load foss/2022b UCC/1.1.0`
+json-c,0.16,foss/2022b,`module load foss/2022b json-c/0.16`
cURL,7.86.0,foss/2022b,`module load foss/2022b cURL/7.86.0`
util-linux,2.38.1,foss/2022b,`module load foss/2022b util-linux/2.38.1`
libfabric,1.16.1,foss/2022b,`module load foss/2022b libfabric/1.16.1`
+tqdm,4.64.1,foss/2022b,`module load foss/2022b tqdm/4.64.1`
+nettle,3.8.1,foss/2022b,`module load foss/2022b nettle/3.8.1`
libiconv,1.17,foss/2022b,`module load foss/2022b libiconv/1.17`
+MPFR,4.2.0,foss/2022b,`module load foss/2022b MPFR/4.2.0`
+xprop,1.2.5,foss/2022b,`module load foss/2022b xprop/1.2.5`
+Xerces-C++,3.2.4,foss/2022b,`module load foss/2022b Xerces-C++/3.2.4`
CMake,3.24.3,foss/2022b,`module load foss/2022b CMake/3.24.3`
fontconfig,2.14.1,foss/2022b,`module load foss/2022b fontconfig/2.14.1`
+libogg,1.3.5,foss/2022b,`module load foss/2022b libogg/1.3.5`
+libgit2,1.5.0,foss/2022b,`module load foss/2022b libgit2/1.5.0`
Mako,1.2.4,foss/2022b,`module load foss/2022b Mako/1.2.4`
libglvnd,1.6.0,foss/2022b,`module load foss/2022b libglvnd/1.6.0`
+Clang,16.0.4,foss/2022b,`module load foss/2022b Clang/16.0.4`
freetype,2.12.1,foss/2022b,`module load foss/2022b freetype/2.12.1`
+libgeotiff,1.7.1,foss/2022b,`module load foss/2022b libgeotiff/1.7.1`
+NLopt,2.7.1,foss/2022b,`module load foss/2022b NLopt/2.7.1`
+CFITSIO,4.2.0,foss/2022b,`module load foss/2022b CFITSIO/4.2.0`
ATK,2.38.0,foss/2022b,`module load foss/2022b ATK/2.38.0`
bzip2,1.0.8,foss/2022b,`module load foss/2022b bzip2/1.0.8`
zstd,1.5.2,foss/2022b,`module load foss/2022b zstd/1.5.2`
@@ -381,6 +759,7 @@ GST-plugins-base,1.22.1,foss/2022b,`module load foss/2022b GST-plugins-base/1.22
Automake,1.16.5,foss/2022b,`module load foss/2022b Automake/1.16.5`
libGLU,9.0.2,foss/2022b,`module load foss/2022b libGLU/9.0.2`
HarfBuzz,5.3.1,foss/2022b,`module load foss/2022b HarfBuzz/5.3.1`
+Blosc2,2.8.0,foss/2022b,`module load foss/2022b Blosc2/2.8.0`
GMP,6.2.1,foss/2022b,`module load foss/2022b GMP/6.2.1`
SDL2,2.26.3,foss/2022b,`module load foss/2022b SDL2/2.26.3`
libtool,2.4.7,foss/2022b,`module load foss/2022b libtool/2.4.7`
@@ -390,17 +769,25 @@ libjpeg-turbo,2.1.4,foss/2022b,`module load foss/2022b libjpeg-turbo/2.1.4`
Gdk-Pixbuf,2.42.10,foss/2022b,`module load foss/2022b Gdk-Pixbuf/2.42.10`
NSPR,4.35,foss/2022b,`module load foss/2022b NSPR/4.35`
x264,20230226,foss/2022b,`module load foss/2022b x264/20230226`
+libopus,1.3.1,foss/2022b,`module load foss/2022b libopus/1.3.1`
DBus,1.15.2,foss/2022b,`module load foss/2022b DBus/1.15.2`
+FLAC,1.4.2,foss/2022b,`module load foss/2022b FLAC/1.4.2`
pybind11,2.10.3,foss/2022b,`module load foss/2022b pybind11/2.10.3`
xorg-macros,1.19.3,foss/2022b,`module load foss/2022b xorg-macros/1.19.3`
GTK3,3.24.35,foss/2022b,`module load foss/2022b GTK3/3.24.35`
binutils,2.39,foss/2022b,`module load foss/2022b binutils/2.39`
Brotli,1.0.9,foss/2022b,`module load foss/2022b Brotli/1.0.9`
PCRE2,10.40,foss/2022b,`module load foss/2022b PCRE2/10.40`
+Imath,3.1.6,foss/2022b,`module load foss/2022b Imath/3.1.6`
+parallel,20230722,foss/2022b,`module load foss/2022b parallel/20230722`
libpciaccess,0.17,foss/2022b,`module load foss/2022b libpciaccess/0.17`
FFmpeg,5.1.2,foss/2022b,`module load foss/2022b FFmpeg/5.1.2`
+FFmpeg,6.1.1,foss/2022b,`module load foss/2022b FFmpeg/6.1.1`
libcerf,2.3,foss/2022b,`module load foss/2022b libcerf/2.3`
+Pillow,9.4.0,foss/2022b,`module load foss/2022b Pillow/9.4.0`
+pdsh,2.34,foss/2022b,`module load foss/2022b pdsh/2.34`
pixman,0.42.2,foss/2022b,`module load foss/2022b pixman/0.42.2`
+googletest,1.12.1,foss/2022b,`module load foss/2022b googletest/1.12.1`
METIS,5.1.0,foss/2022b,`module load foss/2022b METIS/5.1.0`
UCX,1.13.1,foss/2022b,`module load foss/2022b UCX/1.13.1`
PMIx,4.2.2,foss/2022b,`module load foss/2022b PMIx/4.2.2`
@@ -413,15 +800,29 @@ libepoxy,1.5.10,foss/2022b,`module load foss/2022b libepoxy/1.5.10`
Doxygen,1.9.5,foss/2022b,`module load foss/2022b Doxygen/1.9.5`
gnuplot,5.4.6,foss/2022b,`module load foss/2022b gnuplot/5.4.6`
hwloc,2.8.0,foss/2022b,`module load foss/2022b hwloc/2.8.0`
+HDF,4.2.15,foss/2022b,`module load foss/2022b HDF/4.2.15`
libxml2,2.10.3,foss/2022b,`module load foss/2022b libxml2/2.10.3`
NASM,2.15.05,foss/2022b,`module load foss/2022b NASM/2.15.05`
git,2.38.1-nodocs,foss/2022b,`module load foss/2022b git/2.38.1-nodocs`
+PyZMQ,25.1.0,foss/2022b,`module load foss/2022b PyZMQ/25.1.0`
UnZip,6.0,foss/2022b,`module load foss/2022b UnZip/6.0`
Qt5,5.15.7,foss/2022b,`module load foss/2022b Qt5/5.15.7`
graphite2,1.3.14,foss/2022b,`module load foss/2022b graphite2/1.3.14`
+PROJ,9.1.1,foss/2022b,`module load foss/2022b PROJ/9.1.1`
+LERC,4.0.0,foss/2022b,`module load foss/2022b LERC/4.0.0`
hypothesis,6.68.2,foss/2022b,`module load foss/2022b hypothesis/6.68.2`
+OpenJPEG,2.5.0,foss/2022b,`module load foss/2022b OpenJPEG/2.5.0`
+RE2,2023-03-01,foss/2022b,`module load foss/2022b RE2/2023-03-01`
+Highway,1.0.3,foss/2022b,`module load foss/2022b Highway/1.0.3`
UCX-CUDA,1.13.1-CUDA-12.0.0,foss/2022b,`module load foss/2022b UCX-CUDA/1.13.1-CUDA-12.0.0`
+UCX-CUDA,1.13.1-CUDA-12.4.0,foss/2022b,`module load foss/2022b UCX-CUDA/1.13.1-CUDA-12.4.0`
+FLTK,1.3.8,foss/2022b,`module load foss/2022b FLTK/1.3.8`
Qhull,2020.2,foss/2022b,`module load foss/2022b Qhull/2020.2`
+OpenPGM,5.2.122,foss/2022b,`module load foss/2022b OpenPGM/5.2.122`
DB,18.1.40,foss/2022b,`module load foss/2022b DB/18.1.40`
+ImageMagick,7.1.0-53,foss/2022b,`module load foss/2022b ImageMagick/7.1.0-53`
+Tk,8.6.12,foss/2022b,`module load foss/2022b Tk/8.6.12`
+nlohmann_json,3.11.2,foss/2022b,`module load foss/2022b nlohmann_json/3.11.2`
libdeflate,1.15,foss/2022b,`module load foss/2022b libdeflate/1.15`
pkgconf,1.9.3,foss/2022b,`module load foss/2022b pkgconf/1.9.3`
+Gurobi,10.0.1,foss/2022b,`module load foss/2022b Gurobi/10.0.1`
diff --git a/docs/assets/tables/module_wulver_rhel9.csv b/docs/assets/tables/module_wulver_rhel9.csv
new file mode 100644
index 000000000..c396ab6d1
--- /dev/null
+++ b/docs/assets/tables/module_wulver_rhel9.csv
@@ -0,0 +1,550 @@
+Software,Version,Dependent Toolchain,Module Load Command
+zlib,.1.3.1,-,`module load zlib/.1.3.1`
+zlib,.1.2.13,-,`module load zlib/.1.2.13`
+ffnvcodec,12.2.72.0,-,`module load ffnvcodec/12.2.72.0`
+ffnvcodec,13.0.19.0,-,`module load ffnvcodec/13.0.19.0`
+Perl,5.38.0,-,`module load Perl/5.38.0`
+M4,1.4.19,-,`module load M4/1.4.19`
+Altair,2023,-,`module load Altair/2023`
+CUDA,12.6.0,-,`module load CUDA/12.6.0`
+CUDA,12.8.0,-,`module load CUDA/12.8.0`
+gettext,0.22.5,-,`module load gettext/0.22.5`
+cuDNN,9.5.0.50-CUDA-12.6.0,-,`module load cuDNN/9.5.0.50-CUDA-12.6.0`
+NVHPC,25.3-CUDA-12.8.0,-,`module load NVHPC/25.3-CUDA-12.8.0`
+Bison,3.8.2,-,`module load Bison/3.8.2`
+intel,2024a,-,`module load intel/2024a`
+intel,2025a,-,`module load intel/2025a`
+ParaView,5.11.2-egl,-,`module load ParaView/5.11.2-egl`
+ParaView,5.11.2-osmesa,-,`module load ParaView/5.11.2-osmesa`
+flex,2.6.4,-,`module load flex/2.6.4`
+Miniforge3,24.11.3-0,-,`module load Miniforge3/24.11.3-0`
+foss,2024a,-,`module load foss/2024a`
+foss,2025a,-,`module load foss/2025a`
+EasyBuild,5.1.1,-,`module load EasyBuild/5.1.1`
+Avogadro2,1.97.0-linux-x86_64,-,`module load Avogadro2/1.97.0-linux-x86_64`
+imkl,2025.1.0,-,`module load imkl/2025.1.0`
+imkl,2024.2.0,-,`module load imkl/2024.2.0`
+OpenSSL,3,-,`module load OpenSSL/3`
+VESTA,3.90.5-gtk3,-,`module load VESTA/3.90.5-gtk3`
+Gaussian,16.C.03-AVX2,-,`module load Gaussian/16.C.03-AVX2`
+binutils,2.40,-,`module load binutils/2.40`
+binutils,2.42,-,`module load binutils/2.42`
+intel-compilers,2025.1.1,intel/2025a,`module load intel-compilers/2025.1.1`
+intel-compilers,2024.2.0,intel/2024a,`module load intel-compilers/2024.2.0`
+tecplot,2024R1,-,`module load tecplot/2024R1`
+VSCode,1.88.1,-,`module load VSCode/1.88.1`
+Java,17.0.15,-,`module load Java/17.0.15`
+Java,23.0.2,-,`module load Java/23.0.2`
+Java,.modulerc,-,`module load Java/.modulerc`
+gompi,2024a,-,`module load gompi/2024a`
+gompi,2025a,-,`module load gompi/2025a`
+ncurses,.6.5,-,`module load ncurses/.6.5`
+gfbf,2024a,-,`module load gfbf/2024a`
+gfbf,2025a,-,`module load gfbf/2025a`
+p7zip,17.04,-,`module load p7zip/17.04`
+libevent,2.1.12,-,`module load libevent/2.1.12`
+ABAQUS,2024-hotfix-2405,-,`module load ABAQUS/2024-hotfix-2405`
+GCC,14.2.0,foss/2025a,`module load GCC/14.2.0`
+GCC,13.3.0,foss/2024a,`module load GCC/13.3.0`
+rclone,1.68.1,-,`module load rclone/1.68.1`
+tmux,3.5a,-,`module load tmux/3.5a`
+iimpi,2024a,-,`module load iimpi/2024a`
+iimpi,2025a,-,`module load iimpi/2025a`
+MATLAB,2024a,-,`module load MATLAB/2024a`
+MATLAB,2025a,-,`module load MATLAB/2025a`
+Julia,1.11.6-linux-x86_64,-,`module load Julia/1.11.6-linux-x86_64`
+Julia,1.9.3-linux-x86_64,-,`module load Julia/1.9.3-linux-x86_64`
+ANSYS,2025R1,-,`module load ANSYS/2025R1`
+ANSYS,2024R1,-,`module load ANSYS/2024R1`
+GCCcore,14.2.0,-,`module load GCCcore/14.2.0`
+GCCcore,13.3.0,-,`module load GCCcore/13.3.0`
+Go,1.22.1,-,`module load Go/1.22.1`
+Go,1.23.6,-,`module load Go/1.23.6`
+pkgconf,1.8.0,-,`module load pkgconf/1.8.0`
+HDF5,1.14.6,intel/2025a,`module load intel/2025a HDF5/1.14.6`
+imkl-FFTW,2025.1.0,intel/2025a,`module load intel/2025a imkl-FFTW/2025.1.0`
+VASP,6.5.0,intel/2025a,`module load intel/2025a VASP/6.5.0`
+imkl-FFTW,2024.2.0,intel/2024a,`module load intel/2024a imkl-FFTW/2024.2.0`
+GROMACS,2025.2-CUDA-12.8.0,foss/2025a,`module load foss/2025a GROMACS/2025.2-CUDA-12.8.0`
+GROMACS,2025.2,foss/2025a,`module load foss/2025a GROMACS/2025.2`
+CP2K,2025.2,foss/2025a,`module load foss/2025a CP2K/2025.2`
+PLUMED,2.9.4,foss/2025a,`module load foss/2025a PLUMED/2.9.4`
+MDAnalysis,2.9.0,foss/2025a,`module load foss/2025a MDAnalysis/2.9.0`
+PnetCDF,1.14.0,foss/2025a,`module load foss/2025a PnetCDF/1.14.0`
+petsc4py,3.23.5,foss/2025a,`module load foss/2025a petsc4py/3.23.5`
+netCDF-Fortran,4.6.2,foss/2025a,`module load foss/2025a netCDF-Fortran/4.6.2`
+Biopython,1.85,foss/2025a,`module load foss/2025a Biopython/1.85`
+PyTables,3.10.2,foss/2025a,`module load foss/2025a PyTables/3.10.2`
+SuiteSparse,7.10.3,foss/2025a,`module load foss/2025a SuiteSparse/7.10.3`
+mpi4py,4.1.0,foss/2025a,`module load foss/2025a mpi4py/4.1.0`
+netCDF,4.9.3,foss/2025a,`module load foss/2025a netCDF/4.9.3`
+MUMPS,5.8.1-metis,foss/2025a,`module load foss/2025a MUMPS/5.8.1-metis`
+RDKit,2025.03.4,foss/2025a,`module load foss/2025a RDKit/2025.03.4`
+Chapel,2.5.0,foss/2025a,`module load foss/2025a Chapel/2.5.0`
+SuperLU_DIST,9.1.0,foss/2025a,`module load foss/2025a SuperLU_DIST/9.1.0`
+SCOTCH,7.0.8,foss/2025a,`module load foss/2025a SCOTCH/7.0.8`
+HDF5,1.14.6,foss/2025a,`module load foss/2025a HDF5/1.14.6`
+perm-md-count,main,foss/2025a,`module load foss/2025a perm-md-count/main`
+ParMETIS,4.0.3,foss/2025a,`module load foss/2025a ParMETIS/4.0.3`
+PETSc,3.23.5,foss/2025a,`module load foss/2025a PETSc/3.23.5`
+SLEPc,3.23.2,foss/2025a,`module load foss/2025a SLEPc/3.23.2`
+slepc4py,3.23.2,foss/2025a,`module load foss/2025a slepc4py/3.23.2`
+netcdf4-python,1.7.2,foss/2025a,`module load foss/2025a netcdf4-python/1.7.2`
+FFTW.MPI,3.3.10,foss/2025a,`module load foss/2025a FFTW.MPI/3.3.10`
+AmberTools,25.2,foss/2025a,`module load foss/2025a AmberTools/25.2`
+ScaLAPACK,2.2.2-fb,foss/2025a,`module load foss/2025a ScaLAPACK/2.2.2-fb`
+Hypre,2.33.0,foss/2025a,`module load foss/2025a Hypre/2.33.0`
+MDI,1.4.26,foss/2024a,`module load foss/2024a MDI/1.4.26`
+GROMACS,2024.4-CUDA-12.6.0,foss/2024a,`module load foss/2024a GROMACS/2024.4-CUDA-12.6.0`
+GROMACS,2024.4,foss/2024a,`module load foss/2024a GROMACS/2024.4`
+OpenFOAM,v2406,foss/2024a,`module load foss/2024a OpenFOAM/v2406`
+PLUMED,2.9.3,foss/2024a,`module load foss/2024a PLUMED/2.9.3`
+PnetCDF,1.14.0,foss/2024a,`module load foss/2024a PnetCDF/1.14.0`
+netCDF-Fortran,4.6.1,foss/2024a,`module load foss/2024a netCDF-Fortran/4.6.1`
+LAMMPS,29Aug2024_update2-kokkos,foss/2024a,`module load foss/2024a LAMMPS/29Aug2024_update2-kokkos`
+LAMMPS,29Aug2024_update2-kokkos-CUDA-12.6.0,foss/2024a,`module load foss/2024a LAMMPS/29Aug2024_update2-kokkos-CUDA-12.6.0`
+LAMMPS,2Aug2023_update2-kokkos-CUDA-12.6.0,foss/2024a,`module load foss/2024a LAMMPS/2Aug2023_update2-kokkos-CUDA-12.6.0`
+SuiteSparse,7.10.1,foss/2024a,`module load foss/2024a SuiteSparse/7.10.1`
+mpi4py,4.0.1,foss/2024a,`module load foss/2024a mpi4py/4.0.1`
+OVITO,3.12.3-basic,foss/2024a,`module load foss/2024a OVITO/3.12.3-basic`
+netCDF,4.9.2,foss/2024a,`module load foss/2024a netCDF/4.9.2`
+MUMPS,5.7.2-metis,foss/2024a,`module load foss/2024a MUMPS/5.7.2-metis`
+ParaView,5.13.2,foss/2024a,`module load foss/2024a ParaView/5.13.2`
+Chapel,2.4.0,foss/2024a,`module load foss/2024a Chapel/2.4.0`
+SuperLU_DIST,9.1.0,foss/2024a,`module load foss/2024a SuperLU_DIST/9.1.0`
+QuantumESPRESSO,7.4,foss/2024a,`module load foss/2024a QuantumESPRESSO/7.4`
+HPL,2.3,foss/2024a,`module load foss/2024a HPL/2.3`
+SCOTCH,7.0.6,foss/2024a,`module load foss/2024a SCOTCH/7.0.6`
+HDF5,1.14.5,foss/2024a,`module load foss/2024a HDF5/1.14.5`
+AFNI,25.1.01,foss/2024a,`module load foss/2024a AFNI/25.1.01`
+VTK,9.3.1,foss/2024a,`module load foss/2024a VTK/9.3.1`
+ParMETIS,4.0.3,foss/2024a,`module load foss/2024a ParMETIS/4.0.3`
+PETSc,3.23.5,foss/2024a,`module load foss/2024a PETSc/3.23.5`
+SLEPc,3.23.2,foss/2024a,`module load foss/2024a SLEPc/3.23.2`
+KaHIP,3.18,foss/2024a,`module load foss/2024a KaHIP/3.18`
+ORCA,6.0.1-avx2,foss/2024a,`module load foss/2024a ORCA/6.0.1-avx2`
+ELPA,2024.05.001,foss/2024a,`module load foss/2024a ELPA/2024.05.001`
+igraph,0.10.16,foss/2024a,`module load foss/2024a igraph/0.10.16`
+Siesta,5.4.0,foss/2024a,`module load foss/2024a Siesta/5.4.0`
+libGridXC,2.0.2,foss/2024a,`module load foss/2024a libGridXC/2.0.2`
+FFTW.MPI,3.3.10,foss/2024a,`module load foss/2024a FFTW.MPI/3.3.10`
+arpack-ng,3.9.1,foss/2024a,`module load foss/2024a arpack-ng/3.9.1`
+ScaFaCoS,1.0.4,foss/2024a,`module load foss/2024a ScaFaCoS/1.0.4`
+ScaLAPACK,2.2.0-fb,foss/2024a,`module load foss/2024a ScaLAPACK/2.2.0-fb`
+Hypre,2.32.0,foss/2024a,`module load foss/2024a Hypre/2.32.0`
+impi,2021.15.0,intel/2025a,`module load intel/2025a`
+impi,2021.13.0,intel/2024a,`module load intel/2024a`
+matplotlib,3.10.3,foss/2025a,`module load foss/2025a matplotlib/3.10.3`
+R,4.4.2,foss/2025a,`module load foss/2025a R/4.4.2`
+libxsmm,1.17,foss/2025a,`module load foss/2025a libxsmm/1.17`
+AutoDock-GPU,1.6-CUDA-12.8.0,foss/2025a,`module load foss/2025a AutoDock-GPU/1.6-CUDA-12.8.0`
+packmol,21.0.4,foss/2025a,`module load foss/2025a packmol/21.0.4`
+Boost.Python-NumPy,1.88.0,foss/2025a,`module load foss/2025a Boost.Python-NumPy/1.88.0`
+SciPy-bundle,2025.06,foss/2025a,`module load foss/2025a SciPy-bundle/2025.06`
+Libint,2.11.1-lmax-6-cp2k,foss/2025a,`module load foss/2025a Libint/2.11.1-lmax-6-cp2k`
+scikit-learn,1.7.0,foss/2025a,`module load foss/2025a scikit-learn/1.7.0`
+AOCL-BLAS,5.0,foss/2025a,`module load foss/2025a AOCL-BLAS/5.0`
+OpenMPI,5.0.7,foss/2025a,`module load foss/2025a`
+OpenBLAS,0.3.29,foss/2025a,`module load foss/2025a OpenBLAS/0.3.29`
+spglib-python,2.6.0,foss/2025a,`module load foss/2025a spglib-python/2.6.0`
+GSL,2.8,foss/2025a,`module load foss/2025a GSL/2.8`
+networkx,3.5,foss/2025a,`module load foss/2025a networkx/3.5`
+pybind11,2.13.6,foss/2025a,`module load foss/2025a pybind11/2.13.6`
+BLIS,1.1,foss/2025a,`module load foss/2025a BLIS/1.1`
+mrcfile,1.5.4,foss/2025a,`module load foss/2025a mrcfile/1.5.4`
+Arrow,18.0.0,foss/2025a,`module load foss/2025a Arrow/18.0.0`
+Boost,1.88.0,foss/2025a,`module load foss/2025a Boost/1.88.0`
+FFTW,3.3.10,foss/2025a,`module load foss/2025a FFTW/3.3.10`
+FlexiBLAS,3.4.5,foss/2025a,`module load foss/2025a FlexiBLAS/3.4.5`
+libxc,7.0.0,foss/2025a,`module load foss/2025a libxc/7.0.0`
+ASE,3.26.0,foss/2025a,`module load foss/2025a ASE/3.26.0`
+Seaborn,0.13.2,foss/2025a,`module load foss/2025a Seaborn/0.13.2`
+matplotlib,3.9.2,foss/2024a,`module load foss/2024a matplotlib/3.9.2`
+R,4.4.2,foss/2024a,`module load foss/2024a R/4.4.2`
+kim-api,2.4.1,foss/2024a,`module load foss/2024a kim-api/2.4.1`
+libxsmm,1.17,foss/2024a,`module load foss/2024a libxsmm/1.17`
+libfdf,0.5.1,foss/2024a,`module load foss/2024a libfdf/0.5.1`
+mctc-lib,0.3.1,foss/2024a,`module load foss/2024a mctc-lib/0.3.1`
+astropy,7.0.0,foss/2024a,`module load foss/2024a astropy/7.0.0`
+xmlf90,1.6.3,foss/2024a,`module load foss/2024a xmlf90/1.6.3`
+Simple-DFTD3,1.2.1,foss/2024a,`module load foss/2024a Simple-DFTD3/1.2.1`
+test-drive,0.5.0,foss/2024a,`module load foss/2024a test-drive/0.5.0`
+libPSML,2.1.0,foss/2024a,`module load foss/2024a libPSML/2.1.0`
+astropy-testing,7.0.0,foss/2024a,`module load foss/2024a astropy-testing/7.0.0`
+mstore,0.3.0,foss/2024a,`module load foss/2024a mstore/0.3.0`
+json-fortran,9.0.3,foss/2024a,`module load foss/2024a json-fortran/9.0.3`
+pocl,6.0,foss/2024a,`module load foss/2024a pocl/6.0`
+NTL,11.5.1,foss/2024a,`module load foss/2024a NTL/11.5.1`
+TOML-Fortran,0.4.2,foss/2024a,`module load foss/2024a TOML-Fortran/0.4.2`
+SciPy-bundle,2024.05,foss/2024a,`module load foss/2024a SciPy-bundle/2024.05`
+Libint,2.9.0-lmax-6-cp2k,foss/2024a,`module load foss/2024a Libint/2.9.0-lmax-6-cp2k`
+flook,0.8.4,foss/2024a,`module load foss/2024a flook/0.8.4`
+OpenMPI,5.0.3,foss/2024a,`module load foss/2024a`
+OpenBLAS,0.3.27,foss/2024a,`module load foss/2024a OpenBLAS/0.3.27`
+GSL,2.8,foss/2024a,`module load foss/2024a GSL/2.8`
+networkx,3.4.2,foss/2024a,`module load foss/2024a networkx/3.4.2`
+pybind11,2.12.0,foss/2024a,`module load foss/2024a pybind11/2.12.0`
+BLIS,1.0,foss/2024a,`module load foss/2024a BLIS/1.0`
+Boost,1.85.0,foss/2024a,`module load foss/2024a Boost/1.85.0`
+FFTW,3.3.10,foss/2024a,`module load foss/2024a FFTW/3.3.10`
+FlexiBLAS,3.4.4,foss/2024a,`module load foss/2024a FlexiBLAS/3.4.4`
+VirtualGL,3.1.1,foss/2024a,`module load foss/2024a VirtualGL/3.1.1`
+libxc,6.2.2,foss/2024a,`module load foss/2024a libxc/6.2.2`
+FLINT,3.1.2,foss/2024a,`module load foss/2024a FLINT/3.1.2`
+nodejs,22.16.0,foss/2025a,`module load foss/2025a nodejs/22.16.0`
+Rust,1.85.1,foss/2025a,`module load foss/2025a Rust/1.85.1`
+zlib,.1.3.1,foss/2025a,`module load foss/2025a zlib/.1.3.1`
+jbigkit,2.1,foss/2025a,`module load foss/2025a jbigkit/2.1`
+Qt6,6.9.1,foss/2025a,`module load foss/2025a Qt6/6.9.1`
+glslang,15.3.0,foss/2025a,`module load foss/2025a glslang/15.3.0`
+lz4,1.10.0,foss/2025a,`module load foss/2025a lz4/1.10.0`
+libarchive,3.7.7,foss/2025a,`module load foss/2025a libarchive/3.7.7`
+virtualenv,20.29.2,foss/2025a,`module load foss/2025a virtualenv/20.29.2`
+libidn2,2.3.7,foss/2025a,`module load foss/2025a libidn2/2.3.7`
+Ninja,1.12.1,foss/2025a,`module load foss/2025a Ninja/1.12.1`
+jupyter-server,2.16.0,foss/2025a,`module load foss/2025a jupyter-server/2.16.0`
+Perl,5.40.0,foss/2025a,`module load foss/2025a Perl/5.40.0`
+M4,1.4.19,foss/2025a,`module load foss/2025a M4/1.4.19`
+GLib,2.85.1,foss/2025a,`module load foss/2025a GLib/2.85.1`
+ICU,76.1,foss/2025a,`module load foss/2025a ICU/76.1`
+giflib,5.2.2,foss/2025a,`module load foss/2025a giflib/5.2.2`
+libsodium,1.0.20,foss/2025a,`module load foss/2025a libsodium/1.0.20`
+groff,1.23.0,foss/2025a,`module load foss/2025a groff/1.23.0`
+py-cpuinfo,9.0.0,foss/2025a,`module load foss/2025a py-cpuinfo/9.0.0`
+Meson,1.6.1,foss/2025a,`module load foss/2025a Meson/1.6.1`
+IPython,9.3.0,foss/2025a,`module load foss/2025a IPython/9.3.0`
+Z3,4.13.4,foss/2025a,`module load foss/2025a Z3/4.13.4`
+gettext,0.24,foss/2025a,`module load foss/2025a gettext/0.24`
+JupyterNotebook,7.4.4,foss/2025a,`module load foss/2025a JupyterNotebook/7.4.4`
+LittleCMS,2.17,foss/2025a,`module load foss/2025a LittleCMS/2.17`
+scikit-build,0.18.1,foss/2025a,`module load foss/2025a scikit-build/0.18.1`
+Szip,2.1.1,foss/2025a,`module load foss/2025a Szip/2.1.1`
+libffi,3.4.5,foss/2025a,`module load foss/2025a libffi/3.4.5`
+LZO,2.10,foss/2025a,`module load foss/2025a LZO/2.10`
+Eigen,3.4.0,foss/2025a,`module load foss/2025a Eigen/3.4.0`
+PyYAML,6.0.2,foss/2025a,`module load foss/2025a PyYAML/6.0.2`
+GObject-Introspection,1.84.0,foss/2025a,`module load foss/2025a GObject-Introspection/1.84.0`
+LAME,3.100,foss/2025a,`module load foss/2025a LAME/3.100`
+snappy,1.2.2,foss/2025a,`module load foss/2025a snappy/1.2.2`
+NSS,3.113,foss/2025a,`module load foss/2025a NSS/3.113`
+expat,2.6.4,foss/2025a,`module load foss/2025a expat/2.6.4`
+Abseil,20250512.1,foss/2025a,`module load foss/2025a Abseil/20250512.1`
+jedi,0.19.1,foss/2025a,`module load foss/2025a jedi/0.19.1`
+Autoconf,2.72,foss/2025a,`module load foss/2025a Autoconf/2.72`
+hatchling,1.27.0,foss/2025a,`module load foss/2025a hatchling/1.27.0`
+Xvfb,21.1.18,foss/2025a,`module load foss/2025a Xvfb/21.1.18`
+Flask,3.1.1,foss/2025a,`module load foss/2025a Flask/3.1.1`
+xxd,9.1.1457,foss/2025a,`module load foss/2025a xxd/9.1.1457`
+flit,3.10.1,foss/2025a,`module load foss/2025a flit/3.10.1`
+libunistring,1.3,foss/2025a,`module load foss/2025a libunistring/1.3`
+utf8proc,2.9.0,foss/2025a,`module load foss/2025a utf8proc/2.9.0`
+XZ,5.6.3,foss/2025a,`module load foss/2025a XZ/5.6.3`
+cppy,1.3.1,foss/2025a,`module load foss/2025a cppy/1.3.1`
+X11,20250521,foss/2025a,`module load foss/2025a X11/20250521`
+Mesa,24.1.3,foss/2025a,`module load foss/2025a Mesa/24.1.3`
+Mesa,25.1.3,foss/2025a,`module load foss/2025a Mesa/25.1.3`
+GLPK,5.0,foss/2025a,`module load foss/2025a GLPK/5.0`
+Bison,3.8.2,foss/2025a,`module load foss/2025a Bison/3.8.2`
+setuptools-rust,1.11.0,foss/2025a,`module load foss/2025a setuptools-rust/1.11.0`
+Autotools,20240712,foss/2025a,`module load foss/2025a Autotools/20240712`
+ZeroMQ,4.3.5,foss/2025a,`module load foss/2025a ZeroMQ/4.3.5`
+re2c,4.2,foss/2025a,`module load foss/2025a re2c/4.2`
+Wayland,1.23.92,foss/2025a,`module load foss/2025a Wayland/1.23.92`
+LibTIFF,4.7.0,foss/2025a,`module load foss/2025a LibTIFF/4.7.0`
+gperf,3.3,foss/2025a,`module load foss/2025a gperf/3.3`
+Tcl,8.6.16,foss/2025a,`module load foss/2025a Tcl/8.6.16`
+patchelf,0.18.0,foss/2025a,`module load foss/2025a patchelf/0.18.0`
+x265,4.1,foss/2025a,`module load foss/2025a x265/4.1`
+libdrm,2.4.125,foss/2025a,`module load foss/2025a libdrm/2.4.125`
+libheif,1.19.8,foss/2025a,`module load foss/2025a libheif/1.19.8`
+Tkinter,3.13.1,foss/2025a,`module load foss/2025a Tkinter/3.13.1`
+gzip,1.13,foss/2025a,`module load foss/2025a gzip/1.13`
+Perl-bundle-CPAN,5.40.0,foss/2025a,`module load foss/2025a Perl-bundle-CPAN/5.40.0`
+libxslt,1.1.42,foss/2025a,`module load foss/2025a libxslt/1.1.42`
+SQLite,3.47.2,foss/2025a,`module load foss/2025a SQLite/3.47.2`
+setuptools,80.9.0,foss/2025a,`module load foss/2025a setuptools/80.9.0`
+cryptography,44.0.2,foss/2025a,`module load foss/2025a cryptography/44.0.2`
+libreadline,8.2,foss/2025a,`module load foss/2025a libreadline/8.2`
+numactl,2.0.19,foss/2025a,`module load foss/2025a numactl/2.0.19`
+libvori,220621,foss/2025a,`module load foss/2025a libvori/220621`
+intltool,0.51.0,foss/2025a,`module load foss/2025a intltool/0.51.0`
+Blosc,1.21.6,foss/2025a,`module load foss/2025a Blosc/1.21.6`
+DMTCP,4.0.0,foss/2025a,`module load foss/2025a DMTCP/4.0.0`
+double-conversion,3.3.1,foss/2025a,`module load foss/2025a double-conversion/3.3.1`
+libunwind,1.8.1,foss/2025a,`module load foss/2025a libunwind/1.8.1`
+Python,3.13.1,foss/2025a,`module load foss/2025a Python/3.13.1`
+help2man,1.49.3,foss/2025a,`module load foss/2025a help2man/1.49.3`
+GDRCopy,2.4.4,foss/2025a,`module load foss/2025a GDRCopy/2.4.4`
+JasPer,4.2.5,foss/2025a,`module load foss/2025a JasPer/4.2.5`
+matlab-proxy,0.27.1,foss/2025a,`module load foss/2025a matlab-proxy/0.27.1`
+RapidJSON,1.1.0-20250205,foss/2025a,`module load foss/2025a RapidJSON/1.1.0-20250205`
+flex,2.6.4,foss/2025a,`module load foss/2025a flex/2.6.4`
+FriBidi,1.0.16,foss/2025a,`module load foss/2025a FriBidi/1.0.16`
+hatch-jupyter-builder,0.9.1,foss/2025a,`module load foss/2025a hatch-jupyter-builder/0.9.1`
+UCC,1.3.0,foss/2025a,`module load foss/2025a UCC/1.3.0`
+cURL,8.11.1,foss/2025a,`module load foss/2025a cURL/8.11.1`
+util-linux,2.41,foss/2025a,`module load foss/2025a util-linux/2.41`
+libfabric,2.0.0,foss/2025a,`module load foss/2025a libfabric/2.0.0`
+tqdm,4.67.1,foss/2025a,`module load foss/2025a tqdm/4.67.1`
+nettle,3.10.1,foss/2025a,`module load foss/2025a nettle/3.10.1`
+libiconv,1.18,foss/2025a,`module load foss/2025a libiconv/1.18`
+aiohttp,3.12.13,foss/2025a,`module load foss/2025a aiohttp/3.12.13`
+MPFR,4.2.2,foss/2025a,`module load foss/2025a MPFR/4.2.2`
+CMake,3.31.3,foss/2025a,`module load foss/2025a CMake/3.31.3`
+fontconfig,2.16.2,foss/2025a,`module load foss/2025a fontconfig/2.16.2`
+libgit2,1.9.1,foss/2025a,`module load foss/2025a libgit2/1.9.1`
+Mako,1.3.10,foss/2025a,`module load foss/2025a Mako/1.3.10`
+libglvnd,1.7.0,foss/2025a,`module load foss/2025a libglvnd/1.7.0`
+SPIRV-tools,2025.2.rc2,foss/2025a,`module load foss/2025a SPIRV-tools/2025.2.rc2`
+freetype,2.13.3,foss/2025a,`module load foss/2025a freetype/2.13.3`
+lxml,5.3.0,foss/2025a,`module load foss/2025a lxml/5.3.0`
+bzip2,1.0.8,foss/2025a,`module load foss/2025a bzip2/1.0.8`
+fonttools,4.58.4,foss/2025a,`module load foss/2025a fonttools/4.58.4`
+zstd,1.5.6,foss/2025a,`module load foss/2025a zstd/1.5.6`
+cairo,1.18.4,foss/2025a,`module load foss/2025a cairo/1.18.4`
+Automake,1.17,foss/2025a,`module load foss/2025a Automake/1.17`
+libGLU,9.0.3,foss/2025a,`module load foss/2025a libGLU/9.0.3`
+HarfBuzz,11.2.1,foss/2025a,`module load foss/2025a HarfBuzz/11.2.1`
+Blosc2,2.19.0,foss/2025a,`module load foss/2025a Blosc2/2.19.0`
+GMP,6.3.0,foss/2025a,`module load foss/2025a GMP/6.3.0`
+BeautifulSoup,4.13.4,foss/2025a,`module load foss/2025a BeautifulSoup/4.13.4`
+SDL2,2.32.8,foss/2025a,`module load foss/2025a SDL2/2.32.8`
+psutil,7.0.0,foss/2025a,`module load foss/2025a psutil/7.0.0`
+libtool,2.5.4,foss/2025a,`module load foss/2025a libtool/2.5.4`
+assimp,6.0.2,foss/2025a,`module load foss/2025a assimp/6.0.2`
+lit,18.1.8,foss/2025a,`module load foss/2025a lit/18.1.8`
+LLVM,20.1.5,foss/2025a,`module load foss/2025a LLVM/20.1.5`
+LLVM,20.1.8,foss/2025a,`module load foss/2025a LLVM/20.1.8`
+PRRTE,3.0.8,foss/2025a,`module load foss/2025a PRRTE/3.0.8`
+googlebenchmark,1.9.4,foss/2025a,`module load foss/2025a googlebenchmark/1.9.4`
+libjpeg-turbo,3.1.0,foss/2025a,`module load foss/2025a libjpeg-turbo/3.1.0`
+Gdk-Pixbuf,2.42.12,foss/2025a,`module load foss/2025a Gdk-Pixbuf/2.42.12`
+NSPR,4.36,foss/2025a,`module load foss/2025a NSPR/4.36`
+x264,20250619,foss/2025a,`module load foss/2025a x264/20250619`
+DBus,1.16.2,foss/2025a,`module load foss/2025a DBus/1.16.2`
+xorg-macros,1.20.2,foss/2025a,`module load foss/2025a xorg-macros/1.20.2`
+cffi,1.17.1,foss/2025a,`module load foss/2025a cffi/1.17.1`
+binutils,2.42,foss/2025a,`module load foss/2025a binutils/2.42`
+spin,0.14,foss/2025a,`module load foss/2025a spin/0.14`
+Brotli,1.1.0,foss/2025a,`module load foss/2025a Brotli/1.1.0`
+PCRE2,10.45,foss/2025a,`module load foss/2025a PCRE2/10.45`
+libpciaccess,0.18.1,foss/2025a,`module load foss/2025a libpciaccess/0.18.1`
+FFmpeg,7.1.1,foss/2025a,`module load foss/2025a FFmpeg/7.1.1`
+make,4.4.1,foss/2025a,`module load foss/2025a make/4.4.1`
+Pillow,11.3.0,foss/2025a,`module load foss/2025a Pillow/11.3.0`
+pixman,0.46.2,foss/2025a,`module load foss/2025a pixman/0.46.2`
+googletest,1.17.0,foss/2025a,`module load foss/2025a googletest/1.17.0`
+METIS,5.1.0,foss/2025a,`module load foss/2025a METIS/5.1.0`
+UCX,1.18.0,foss/2025a,`module load foss/2025a UCX/1.18.0`
+PMIx,5.0.6,foss/2025a,`module load foss/2025a PMIx/5.0.6`
+ncurses,.6.5,foss/2025a,`module load foss/2025a ncurses/.6.5`
+Cython,3.1.1,foss/2025a,`module load foss/2025a Cython/3.1.1`
+libpng,1.6.48,foss/2025a,`module load foss/2025a libpng/1.6.48`
+p7zip,17.05,foss/2025a,`module load foss/2025a p7zip/17.05`
+libevent,2.1.12,foss/2025a,`module load foss/2025a libevent/2.1.12`
+Yasm,1.3.0,foss/2025a,`module load foss/2025a Yasm/1.3.0`
+Doxygen,1.14.0,foss/2025a,`module load foss/2025a Doxygen/1.14.0`
+hwloc,2.11.2,foss/2025a,`module load foss/2025a hwloc/2.11.2`
+libde265,1.0.16,foss/2025a,`module load foss/2025a libde265/1.0.16`
+libxml2,2.13.4,foss/2025a,`module load foss/2025a libxml2/2.13.4`
+NASM,2.16.03,foss/2025a,`module load foss/2025a NASM/2.16.03`
+scikit-build-core,0.11.1,foss/2025a,`module load foss/2025a scikit-build-core/0.11.1`
+git,2.49.0,foss/2025a,`module load foss/2025a git/2.49.0`
+PyZMQ,27.0.0,foss/2025a,`module load foss/2025a PyZMQ/27.0.0`
+UnZip,6.0,foss/2025a,`module load foss/2025a UnZip/6.0`
+libpsl,0.21.5,foss/2025a,`module load foss/2025a libpsl/0.21.5`
+tornado,6.5.1,foss/2025a,`module load foss/2025a tornado/6.5.1`
+graphite2,1.3.14,foss/2025a,`module load foss/2025a graphite2/1.3.14`
+poetry,2.1.2,foss/2025a,`module load foss/2025a poetry/2.1.2`
+hypothesis,6.133.2,foss/2025a,`module load foss/2025a hypothesis/6.133.2`
+NCCL,2.26.6-CUDA-12.8.0,foss/2025a,`module load foss/2025a NCCL/2.26.6-CUDA-12.8.0`
+OpenJPEG,2.5.3,foss/2025a,`module load foss/2025a OpenJPEG/2.5.3`
+Catch2,2.13.10,foss/2025a,`module load foss/2025a Catch2/2.13.10`
+Catch2,3.8.1,foss/2025a,`module load foss/2025a Catch2/3.8.1`
+RE2,2024-07-02,foss/2025a,`module load foss/2025a RE2/2024-07-02`
+JupyterLab,4.4.4,foss/2025a,`module load foss/2025a JupyterLab/4.4.4`
+maturin,1.8.3,foss/2025a,`module load foss/2025a maturin/1.8.3`
+meson-python,0.18.0,foss/2025a,`module load foss/2025a meson-python/0.18.0`
+UCX-CUDA,1.18.0-CUDA-12.8.0,foss/2025a,`module load foss/2025a UCX-CUDA/1.18.0-CUDA-12.8.0`
+Qhull,2020.2,foss/2025a,`module load foss/2025a Qhull/2020.2`
+Python-bundle-PyPI,2025.04,foss/2025a,`module load foss/2025a Python-bundle-PyPI/2025.04`
+OpenPGM,5.2.122,foss/2025a,`module load foss/2025a OpenPGM/5.2.122`
+libwebp,1.5.0,foss/2025a,`module load foss/2025a libwebp/1.5.0`
+Tk,8.6.16,foss/2025a,`module load foss/2025a Tk/8.6.16`
+libdeflate,1.24,foss/2025a,`module load foss/2025a libdeflate/1.24`
+pkgconf,2.3.0,foss/2025a,`module load foss/2025a pkgconf/2.3.0`
+libyaml,0.2.5,foss/2025a,`module load foss/2025a libyaml/0.2.5`
+Gurobi,12.0.3,foss/2025a,`module load foss/2025a Gurobi/12.0.3`
+nodejs,20.13.1,foss/2024a,`module load foss/2024a nodejs/20.13.1`
+Rust,1.78.0,foss/2024a,`module load foss/2024a Rust/1.78.0`
+zlib,.1.3.1,foss/2024a,`module load foss/2024a zlib/.1.3.1`
+zlib,.1.2.13,foss/2024a,`module load foss/2024a zlib/.1.2.13`
+jbigkit,2.1,foss/2024a,`module load foss/2024a jbigkit/2.1`
+Qt6,6.7.2,foss/2024a,`module load foss/2024a Qt6/6.7.2`
+libgd,2.3.3,foss/2024a,`module load foss/2024a libgd/2.3.3`
+lz4,1.9.4,foss/2024a,`module load foss/2024a lz4/1.9.4`
+motif,2.3.8,foss/2024a,`module load foss/2024a motif/2.3.8`
+libarchive,3.7.4,foss/2024a,`module load foss/2024a libarchive/3.7.4`
+virtualenv,20.26.2,foss/2024a,`module load foss/2024a virtualenv/20.26.2`
+Ninja,1.12.1,foss/2024a,`module load foss/2024a Ninja/1.12.1`
+Perl,5.38.2,foss/2024a,`module load foss/2024a Perl/5.38.2`
+SIP,6.10.0,foss/2024a,`module load foss/2024a SIP/6.10.0`
+M4,1.4.19,foss/2024a,`module load foss/2024a M4/1.4.19`
+GLib,2.80.4,foss/2024a,`module load foss/2024a GLib/2.80.4`
+ICU,75.1,foss/2024a,`module load foss/2024a ICU/75.1`
+giflib,5.2.1,foss/2024a,`module load foss/2024a giflib/5.2.1`
+groff,1.23.0,foss/2024a,`module load foss/2024a groff/1.23.0`
+Meson,1.4.0,foss/2024a,`module load foss/2024a Meson/1.4.0`
+Z3,4.13.0,foss/2024a,`module load foss/2024a Z3/4.13.0`
+gettext,0.22.5,foss/2024a,`module load foss/2024a gettext/0.22.5`
+LittleCMS,2.16,foss/2024a,`module load foss/2024a LittleCMS/2.16`
+scikit-build,0.17.6,foss/2024a,`module load foss/2024a scikit-build/0.17.6`
+Szip,2.1.1,foss/2024a,`module load foss/2024a Szip/2.1.1`
+libffi,3.4.5,foss/2024a,`module load foss/2024a libffi/3.4.5`
+PCRE,8.45,foss/2024a,`module load foss/2024a PCRE/8.45`
+Eigen,3.4.0,foss/2024a,`module load foss/2024a Eigen/3.4.0`
+PyYAML,6.0.2,foss/2024a,`module load foss/2024a PyYAML/6.0.2`
+GObject-Introspection,1.80.1,foss/2024a,`module load foss/2024a GObject-Introspection/1.80.1`
+LAME,3.100,foss/2024a,`module load foss/2024a LAME/3.100`
+snappy,1.2.1,foss/2024a,`module load foss/2024a snappy/1.2.1`
+NSS,3.104,foss/2024a,`module load foss/2024a NSS/3.104`
+expat,2.6.2,foss/2024a,`module load foss/2024a expat/2.6.2`
+Abseil,20240722.0,foss/2024a,`module load foss/2024a Abseil/20240722.0`
+Autoconf,2.72,foss/2024a,`module load foss/2024a Autoconf/2.72`
+hatchling,1.24.2,foss/2024a,`module load foss/2024a hatchling/1.24.2`
+Xvfb,21.1.14,foss/2024a,`module load foss/2024a Xvfb/21.1.14`
+xxd,9.1.1275,foss/2024a,`module load foss/2024a xxd/9.1.1275`
+flit,3.9.0,foss/2024a,`module load foss/2024a flit/3.9.0`
+XZ,5.4.5,foss/2024a,`module load foss/2024a XZ/5.4.5`
+cppy,1.2.1,foss/2024a,`module load foss/2024a cppy/1.2.1`
+X11,20240607,foss/2024a,`module load foss/2024a X11/20240607`
+Mesa,24.1.3,foss/2024a,`module load foss/2024a Mesa/24.1.3`
+GLPK,5.0,foss/2024a,`module load foss/2024a GLPK/5.0`
+archspec,0.2.5,foss/2024a,`module load foss/2024a archspec/0.2.5`
+Bison,3.8.2,foss/2024a,`module load foss/2024a Bison/3.8.2`
+setuptools-rust,1.9.0,foss/2024a,`module load foss/2024a setuptools-rust/1.9.0`
+Lua,5.4.7,foss/2024a,`module load foss/2024a Lua/5.4.7`
+Autotools,20231222,foss/2024a,`module load foss/2024a Autotools/20231222`
+re2c,3.1,foss/2024a,`module load foss/2024a re2c/3.1`
+Wayland,1.23.0,foss/2024a,`module load foss/2024a Wayland/1.23.0`
+LibTIFF,4.6.0,foss/2024a,`module load foss/2024a LibTIFF/4.6.0`
+ruamel.yaml,0.18.6,foss/2024a,`module load foss/2024a ruamel.yaml/0.18.6`
+gperf,3.1,foss/2024a,`module load foss/2024a gperf/3.1`
+Tcl,8.6.14,foss/2024a,`module load foss/2024a Tcl/8.6.14`
+patchelf,0.18.0,foss/2024a,`module load foss/2024a patchelf/0.18.0`
+x265,3.6,foss/2024a,`module load foss/2024a x265/3.6`
+libdrm,2.4.122,foss/2024a,`module load foss/2024a libdrm/2.4.122`
+libheif,1.19.5,foss/2024a,`module load foss/2024a libheif/1.19.5`
+Tkinter,3.12.3,foss/2024a,`module load foss/2024a Tkinter/3.12.3`
+gzip,1.13,foss/2024a,`module load foss/2024a gzip/1.13`
+Perl-bundle-CPAN,5.38.2,foss/2024a,`module load foss/2024a Perl-bundle-CPAN/5.38.2`
+libxslt,1.1.42,foss/2024a,`module load foss/2024a libxslt/1.1.42`
+SQLite,3.45.3,foss/2024a,`module load foss/2024a SQLite/3.45.3`
+cryptography,42.0.8,foss/2024a,`module load foss/2024a cryptography/42.0.8`
+libreadline,8.2,foss/2024a,`module load foss/2024a libreadline/8.2`
+numactl,2.0.18,foss/2024a,`module load foss/2024a numactl/2.0.18`
+libvori,220621,foss/2024a,`module load foss/2024a libvori/220621`
+intltool,0.51.0,foss/2024a,`module load foss/2024a intltool/0.51.0`
+CGAL,5.6.1,foss/2024a,`module load foss/2024a CGAL/5.6.1`
+double-conversion,3.3.0,foss/2024a,`module load foss/2024a double-conversion/3.3.0`
+libunwind,1.8.1,foss/2024a,`module load foss/2024a libunwind/1.8.1`
+Python,3.12.3,foss/2024a,`module load foss/2024a Python/3.12.3`
+help2man,1.49.3,foss/2024a,`module load foss/2024a help2man/1.49.3`
+GDRCopy,2.4.1,foss/2024a,`module load foss/2024a GDRCopy/2.4.1`
+JasPer,4.2.4,foss/2024a,`module load foss/2024a JasPer/4.2.4`
+flex,2.6.4,foss/2024a,`module load foss/2024a flex/2.6.4`
+FriBidi,1.0.15,foss/2024a,`module load foss/2024a FriBidi/1.0.15`
+UCC,1.3.0,foss/2024a,`module load foss/2024a UCC/1.3.0`
+cURL,8.7.1,foss/2024a,`module load foss/2024a cURL/8.7.1`
+util-linux,2.40,foss/2024a,`module load foss/2024a util-linux/2.40`
+coverage,7.9.2,foss/2024a,`module load foss/2024a coverage/7.9.2`
+libfabric,1.21.0,foss/2024a,`module load foss/2024a libfabric/1.21.0`
+nettle,3.10,foss/2024a,`module load foss/2024a nettle/3.10`
+libiconv,1.17,foss/2024a,`module load foss/2024a libiconv/1.17`
+MPFR,4.2.1,foss/2024a,`module load foss/2024a MPFR/4.2.1`
+CMake,3.29.3,foss/2024a,`module load foss/2024a CMake/3.29.3`
+fontconfig,2.15.0,foss/2024a,`module load foss/2024a fontconfig/2.15.0`
+libogg,1.3.5,foss/2024a,`module load foss/2024a libogg/1.3.5`
+libgit2,1.8.1,foss/2024a,`module load foss/2024a libgit2/1.8.1`
+Mako,1.3.5,foss/2024a,`module load foss/2024a Mako/1.3.5`
+pre-commit,3.7.0,foss/2024a,`module load foss/2024a pre-commit/3.7.0`
+libglvnd,1.7.0,foss/2024a,`module load foss/2024a libglvnd/1.7.0`
+Clang,18.1.8,foss/2024a,`module load foss/2024a Clang/18.1.8`
+freetype,2.13.2,foss/2024a,`module load foss/2024a freetype/2.13.2`
+freeglut,3.6.0,foss/2024a,`module load foss/2024a freeglut/3.6.0`
+PyQt5,5.15.11,foss/2024a,`module load foss/2024a PyQt5/5.15.11`
+ATK,2.38.0,foss/2024a,`module load foss/2024a ATK/2.38.0`
+bzip2,1.0.8,foss/2024a,`module load foss/2024a bzip2/1.0.8`
+fonttools,4.53.1,foss/2024a,`module load foss/2024a fonttools/4.53.1`
+zstd,1.5.6,foss/2024a,`module load foss/2024a zstd/1.5.6`
+cairo,1.18.0,foss/2024a,`module load foss/2024a cairo/1.18.0`
+Automake,1.16.5,foss/2024a,`module load foss/2024a Automake/1.16.5`
+libGLU,9.0.3,foss/2024a,`module load foss/2024a libGLU/9.0.3`
+HarfBuzz,9.0.0,foss/2024a,`module load foss/2024a HarfBuzz/9.0.0`
+GMP,6.3.0,foss/2024a,`module load foss/2024a GMP/6.3.0`
+SDL2,2.30.6,foss/2024a,`module load foss/2024a SDL2/2.30.6`
+psutil,6.0.0,foss/2024a,`module load foss/2024a psutil/6.0.0`
+libtool,2.4.7,foss/2024a,`module load foss/2024a libtool/2.4.7`
+assimp,5.4.3,foss/2024a,`module load foss/2024a assimp/5.4.3`
+lit,18.1.8,foss/2024a,`module load foss/2024a lit/18.1.8`
+LLVM,20.1.5,foss/2024a,`module load foss/2024a LLVM/20.1.5`
+LLVM,18.1.8,foss/2024a,`module load foss/2024a LLVM/18.1.8`
+PRRTE,3.0.5,foss/2024a,`module load foss/2024a PRRTE/3.0.5`
+libjpeg-turbo,3.0.1,foss/2024a,`module load foss/2024a libjpeg-turbo/3.0.1`
+Gdk-Pixbuf,2.42.11,foss/2024a,`module load foss/2024a Gdk-Pixbuf/2.42.11`
+NSPR,4.35,foss/2024a,`module load foss/2024a NSPR/4.35`
+PyQt-builder,1.18.1,foss/2024a,`module load foss/2024a PyQt-builder/1.18.1`
+Voro++,0.4.6,foss/2024a,`module load foss/2024a Voro++/0.4.6`
+x264,20240513,foss/2024a,`module load foss/2024a x264/20240513`
+DBus,1.15.8,foss/2024a,`module load foss/2024a DBus/1.15.8`
+FLAC,1.4.3,foss/2024a,`module load foss/2024a FLAC/1.4.3`
+xorg-macros,1.20.1,foss/2024a,`module load foss/2024a xorg-macros/1.20.1`
+cffi,1.16.0,foss/2024a,`module load foss/2024a cffi/1.16.0`
+binutils,2.42,foss/2024a,`module load foss/2024a binutils/2.42`
+Brotli,1.1.0,foss/2024a,`module load foss/2024a Brotli/1.1.0`
+PCRE2,10.43,foss/2024a,`module load foss/2024a PCRE2/10.43`
+tbb,2021.13.0,foss/2024a,`module load foss/2024a tbb/2021.13.0`
+parallel,20240722,foss/2024a,`module load foss/2024a parallel/20240722`
+libpciaccess,0.18.1,foss/2024a,`module load foss/2024a libpciaccess/0.18.1`
+FFmpeg,7.0.2,foss/2024a,`module load foss/2024a FFmpeg/7.0.2`
+libcerf,2.4,foss/2024a,`module load foss/2024a libcerf/2.4`
+make,4.4.1,foss/2024a,`module load foss/2024a make/4.4.1`
+Pillow,10.4.0,foss/2024a,`module load foss/2024a Pillow/10.4.0`
+pixman,0.43.4,foss/2024a,`module load foss/2024a pixman/0.43.4`
+METIS,5.1.0,foss/2024a,`module load foss/2024a METIS/5.1.0`
+UCX,1.16.0,foss/2024a,`module load foss/2024a UCX/1.16.0`
+PMIx,5.0.2,foss/2024a,`module load foss/2024a PMIx/5.0.2`
+ncurses,.6.5,foss/2024a,`module load foss/2024a ncurses/.6.5`
+Cython,3.0.10,foss/2024a,`module load foss/2024a Cython/3.0.10`
+libpng,1.6.43,foss/2024a,`module load foss/2024a libpng/1.6.43`
+Pango,1.54.0,foss/2024a,`module load foss/2024a Pango/1.54.0`
+libevent,2.1.12,foss/2024a,`module load foss/2024a libevent/2.1.12`
+Yasm,1.3.0,foss/2024a,`module load foss/2024a Yasm/1.3.0`
+Doxygen,1.11.0,foss/2024a,`module load foss/2024a Doxygen/1.11.0`
+gnuplot,6.0.1,foss/2024a,`module load foss/2024a gnuplot/6.0.1`
+hwloc,2.10.0,foss/2024a,`module load foss/2024a hwloc/2.10.0`
+libde265,1.0.15,foss/2024a,`module load foss/2024a libde265/1.0.15`
+libxml2,2.12.7,foss/2024a,`module load foss/2024a libxml2/2.12.7`
+NASM,2.16.03,foss/2024a,`module load foss/2024a NASM/2.16.03`
+scikit-build-core,0.10.6,foss/2024a,`module load foss/2024a scikit-build-core/0.10.6`
+git,2.45.1,foss/2024a,`module load foss/2024a git/2.45.1`
+PLY,3.11,foss/2024a,`module load foss/2024a PLY/3.11`
+UnZip,6.0,foss/2024a,`module load foss/2024a UnZip/6.0`
+Qt5,5.15.16,foss/2024a,`module load foss/2024a Qt5/5.15.16`
+graphite2,1.3.14,foss/2024a,`module load foss/2024a graphite2/1.3.14`
+poetry,1.8.3,foss/2024a,`module load foss/2024a poetry/1.8.3`
+hypothesis,6.103.1,foss/2024a,`module load foss/2024a hypothesis/6.103.1`
+NCCL,2.22.3-CUDA-12.6.0,foss/2024a,`module load foss/2024a NCCL/2.22.3-CUDA-12.6.0`
+OpenJPEG,2.5.2,foss/2024a,`module load foss/2024a OpenJPEG/2.5.2`
+Catch2,2.13.10,foss/2024a,`module load foss/2024a Catch2/2.13.10`
+tcsh,6.24.13,foss/2024a,`module load foss/2024a tcsh/6.24.13`
+maturin,1.6.0,foss/2024a,`module load foss/2024a maturin/1.6.0`
+meson-python,0.16.0,foss/2024a,`module load foss/2024a meson-python/0.16.0`
+UCX-CUDA,1.16.0-CUDA-12.6.0,foss/2024a,`module load foss/2024a UCX-CUDA/1.16.0-CUDA-12.6.0`
+Qhull,2020.2,foss/2024a,`module load foss/2024a Qhull/2020.2`
+Python-bundle-PyPI,2024.06,foss/2024a,`module load foss/2024a Python-bundle-PyPI/2024.06`
+libwebp,1.4.0,foss/2024a,`module load foss/2024a libwebp/1.4.0`
+Tk,8.6.14,foss/2024a,`module load foss/2024a Tk/8.6.14`
+libdeflate,1.20,foss/2024a,`module load foss/2024a libdeflate/1.20`
+pkgconf,2.2.0,foss/2024a,`module load foss/2024a pkgconf/2.2.0`
+libyaml,0.2.5,foss/2024a,`module load foss/2024a libyaml/0.2.5`
diff --git a/docs/assets/tables/partitions.csv b/docs/assets/tables/partitions.csv
index c16cf3fcd..0fa0c9a25 100644
--- a/docs/assets/tables/partitions.csv
+++ b/docs/assets/tables/partitions.csv
@@ -1,4 +1,7 @@
-Partition,Nodes,Cores/Node,CPU,GPU,Memory,Service Unit (SU) Charge
-`--partition=general`,100,128,2.5G GHz AMD EPYC 7763 (2),NA,512 GB,1 SU per hour per cpu
-`--partition=gpu`,25,128,2.0 GHz AMD EPYC 7713 (2),NVIDIA A100 GPUs (4),512 GB,2 SU per hour per GPU
-`--partition=bigmem`,2,128,2.5G GHz AMD EPYC 7763 (2),NA,2 TB,1.5 SU per CPU hour
\ No newline at end of file
+Partition,Nodes,Cores per Node,CPU,GPU,Memory
+--partition=general,100,128,2.5 GHz AMD EPYC 7763 (2),NA,512 GB
+--partition=debug,1,4,2.5 GHz AMD EPYC 7763 (2),NA,16 GB
+--partition=debug_gpu,1,4,2.0 GHz AMD EPYC 7713 (2),"
MIG (10g, 20g, 40g)",16 GB
+--partition=gpu,25,128,2.0 GHz AMD EPYC 7713 (2),"NVIDIA A100 GPUs (4);
MIG (10g, 20g, 40g)",512 GB
+--partition=bigmem,2,128,2.5 GHz AMD EPYC 7763 (2),NA,2 TB
+
diff --git a/docs/assets/tables/slurm_qos.csv b/docs/assets/tables/slurm_qos.csv
index b1a78ecaa..b96ad2413 100644
--- a/docs/assets/tables/slurm_qos.csv
+++ b/docs/assets/tables/slurm_qos.csv
@@ -1,4 +1,5 @@
-QoS,Purpose,Rules,"Wall time limit, hours",Valid Users
-`--qos=standard`,"Normal jobs, similar to Lochness “public” access",SU charges based on node type (see partitions table above).,"72",Everyone
-`--qos=low`,"Free access, no SU charge, but jobs are pre-emptable",Jobs may be killed and put back in queue if resources are needed by high or standard QoS enqueued jobs. ,"72",Everyone
-`--qos=high`,Only available to owners/investors,"Highest Priority Jobs, No SU Charges.","30 days due to monthly downtime",owner/investor PI Groups
\ No newline at end of file
+"Qos","Purpose","Rules","Wall time limit (hours)",Valid Users
+"--qos=standard","Normal jobs. Faculty PIs are allocated 300,000 Service Units (SU) per year","SU charges based on node type (see
SU, jobs can be preempted by high QoS enqueued jobs","72","Everyone"
+"--qos=low","Free access, no SU charge","Jobs can be preempted by high or standard QoS enqueued jobs",72,Everyone
+"--qos=high_$PI","Replace
$PI with the UCID of PI, only available to owners/investors","Highest Priority Jobs, no SU Charges.",72,owner/investor PI Groups
+"--qos=debug","Intended for debugging and testing jobs","No SU Charges, maximum 4 CPUs and 16G Mem is allowed, must be used with
--partition=debug or
debug_gpu",8,Everyone
\ No newline at end of file
diff --git a/docs/assets/tables/trainings/2024_fall.csv b/docs/assets/tables/trainings/2024_fall.csv
new file mode 100644
index 000000000..552fd7474
--- /dev/null
+++ b/docs/assets/tables/trainings/2024_fall.csv
@@ -0,0 +1,5 @@
+Topic,Date,Recording/Slides,Instructor
+"
[Intro to MPI Workshop](../archived/2024/7_Intro_to_MPI_Workshop)
This workshop is intended to give C and Fortran programmers a hands-on introduction to MPI programming. Both days are compact, to accommodate multiple time zones, but packed with useful information and lab exercises. This workshop provides working knowledge of how to write scalable codes using MPI – the standard programming tool of scalable parallel computing.","December 10-11, 2024",[:fontawesome-regular-file-powerpoint:](https://www.psc.edu/resources/training/hpc-workshop-december-2024-mpi/),[Pittsburgh Supercomputing Center](https://www.psc.edu/)
+"
[Job Arrays and Advanced Submission Techniques for HPC](../archived/2024/3_slurm_advanced)
This session is designed for HPC users who are familiar with basic SLURM commands and are ready to dive into more sophisticated job management techniques.", Nov 20th 2024, [:fontawesome-brands-youtube:](../Workshop_and_Training_Videos/archived/#job-arrays-and-advanced-submission-techniques-for-hpc),"[Abhishek Mukherjee](../../about/index.md#research-computing-facilitator_1)"
+"
[Introduction to Containers on Wulver](../archived/2024/2_containers)
The HPC training event on using Singularity containers provides participants with a comprehensive introduction to container technology and its advantages in high-performance computing environments. This workshop provides the fundamentals of Singularity, including installation, basic commands, and workflow, as well as how to create and build containers using definition files and existing Docker images. ", Oct 16th 2024, [:fontawesome-brands-youtube:](../../HPC_Events_and_Workshops/Workshop_and_Training_Videos/archived/#introduction-to-containers-on-wulver),"[Hui(Julia) Zhao](../../about/index.md#research-computing-facilitator)"
+"
[SLURM Batch System Basics](../archived/2024/1_slurm)
This workshop introduces researchers, scientists, and HPC users to the fundamentals of the SLURM (Simple Linux Utility for Resource Management) workload manager. This virtual session provides the information on effectively utilizing HPC resources through SLURM.", Sep 18th 2024,[:fontawesome-brands-youtube:](../Workshop_and_Training_Videos/archived/#slurm-batch-system-basics),"[Abhishek Mukherjee](../../about/index.md#research-computing-facilitator_1)"
\ No newline at end of file
diff --git a/docs/assets/tables/trainings/2024_summer.csv b/docs/assets/tables/trainings/2024_summer.csv
new file mode 100644
index 000000000..c673cba21
--- /dev/null
+++ b/docs/assets/tables/trainings/2024_summer.csv
@@ -0,0 +1,4 @@
+Topic,Date,Recording/Slides,Instructor
+"
[SLURM Workload Manager Workshop](../archived/2024/6_slurm_workshop)
This workshop helps in identifying commonalities between previously used resources and schedulers, offering increased understanding and adoption of SLURM job scheduling, resource management, and troubleshooting techniques.","August 13-14, 2024",In-person Workshop,[SchedMD](https://www.schedmd.com/)
+"
[HPC Research Symposium](../archived/2024/5_symposium)
The Symposium features a keynote from Anthony Dina, Global Field CTO for Unstructured Data Solutions at Dell Technologies, along with invited talks by Dibakar Datta from NJIT’s Department of Mechanical and Industrial Engineering, and Jose Alvarez from Cambridge Computer Services. It also includes several lightning talks by NJIT researchers showcasing their use of High Performance Computing resources in their work.", July 16th 2024, In-person,"-"
+"
[NVIDIA Workshop](../archived/2024/4_nvidia)
This workshop provides information on using GPU-accelerated resources to analyze data.", July 15th 2024, [:fontawesome-regular-file-pdf:](https://www.nvidia.com/content/dam/en-zz/Solutions/deep-learning/deep-learning-education/DLI-Workshop-Fundamentals-of-Accelerated-Data-Science-with-RAPIDS.pdf),[NVIDIA](https://www.nvidia.com/)
\ No newline at end of file
diff --git a/docs/assets/tables/trainings/2025_fall.csv b/docs/assets/tables/trainings/2025_fall.csv
new file mode 100644
index 000000000..28fbcfffb
--- /dev/null
+++ b/docs/assets/tables/trainings/2025_fall.csv
@@ -0,0 +1,10 @@
+Topic,Date,Recording,Slides,Instructor/
Facilitator
+"
[Intro to Wulver: Resources & HPC]()
This virtual session will provide essential information about the Wulver cluster, how to get an account, allocation details, and how to access installed software.","September 17
2:30pm - 3:30pm",[:fontawesome-brands-youtube:](../Workshop_and_Training_Videos/index.md#intro-to-wulver-resources-hpc), [:fontawesome-regular-file-powerpoint:](../../assets/slides/Intro_to_Wulver_I_09_17_2025.pdf),"[Abhishek Mukherjee](../../about/index.md#research-computing-facilitator_1)"
+"
[HPC User Meeting - MIG & SUs]()
Please join us for this in-person session, where we will provide information on the new SU policies and how to use [MIG](/Docs/MIG/).","September 24
2:30pm - 3:30pm",In-Person Event,[:fontawesome-regular-file-powerpoint:](../../assets/slides/Wulver_MIG_SU_2025.pdf),"[Hui(Julia) Zhao](../../about/index.md#research-computing-facilitator)"
+"
[Intro to Wulver: Job Scheduler & Submitting Jobs]()
This webinar will introduce researchers, scientists, and HPC users to the fundamentals of the SLURM (Simple Linux Utility for Resource Management) workload manager. This virtual session will provide the information on effectively utilizing HPC resources through SLURM. ","October 1
2:30pm - 3:30pm",[:fontawesome-brands-youtube:](../Workshop_and_Training_Videos/index.md#intro-to-wulver-job-scheduler-submitting-jobs),[:fontawesome-regular-file-powerpoint:](../../assets/slides/Intro_to_Wulver_II_10_01_2025.pdf),"[Abhishek Mukherjee](../../about/index.md#research-computing-facilitator_1)"
+"
[Intro to Wulver: Focus on Job Efficiency]()
This virtual session will provide an overview of best practices for running jobs efficiently on the cluster. Topics will include the use of job arrays, dependency jobs, and advanced tools to monitor and analyze job efficiency, helping users optimize their workflows and resource usage.","October 8
2:30pm - 3:30pm",[:fontawesome-brands-youtube:](../Workshop_and_Training_Videos/index.md#intro-to-wulver-focus-on-job-efficiency),[:fontawesome-regular-file-powerpoint:](../../assets/slides/Intro_to_Wulver_III_10_08_2025.pdf),"[Abhishek Mukherjee](../../about/index.md#research-computing-facilitator_1)"
+"
[Machine Learning and Big Data]()
This workshop will focus on topics including big data analytics and machine learning with Spark, and deep learning using Tensorflow. Hands-on exercises are included to give attendees practice with the concepts presented","October 14-15
11am - 5pm",In-Person Event,[:fontawesome-regular-file-powerpoint:](https://support.access-ci.org/events/8529),"[Pittsburgh Supercomputing Center](https://www.psc.edu/)"
+"
[HPC User Meeting - Cluster Tools & Monitoring]()
Join us for this in-person session to learn the easiest and most user-friendly ways to monitor key statistics, including job status, SU consumption, and account allocation details on Wulver.","October 22
2:30pm - 3:30pm",In-Person Event,[:fontawesome-regular-file-powerpoint:](../../assets/slides/Wulver_ClusterTools-Monitoring_10-22-2025.pdf),"[Hui(Julia) Zhao](../../about/index.md#research-computing-facilitator)"
+"
[Conda for Shared Environments]()
This webinar will provide an introductory understanding of using Python for HPC and effectively managing Python environments using Conda. This knowledge will empower attendees to leverage the power of Python for their scientific computing needs on HPC systems.","November 5
2:30pm - 3:30pm",[:fontawesome-brands-youtube:](../Workshop_and_Training_Videos/index.md#conda-for-shared-environments),[:fontawesome-regular-file-powerpoint:](../../assets/slides/conda_training_11-05-2025.pdf),"[Hui(Julia) Zhao](../../about/index.md#research-computing-facilitator)"
+"
[HPC User Meeting - Introduction to MIG]()
Join us for this in-person and virtual session to learn more about Multi-Instance GPUs (MIGs).","December 3
2:30pm - 3:30pm",[:fontawesome-brands-youtube:](../Workshop_and_Training_Videos/index.md#hpc-user-meeting-introduction-to-mig),[:fontawesome-regular-file-powerpoint:](../../assets/slides/Wulver_MIG_Dec03_2025.pdf),[Hui(Julia) Zhao](../../about/index.md#research-computing-facilitator)
\ No newline at end of file
diff --git a/docs/assets/tables/trainings/2025_spring.csv b/docs/assets/tables/trainings/2025_spring.csv
new file mode 100644
index 000000000..4c3ce70cd
--- /dev/null
+++ b/docs/assets/tables/trainings/2025_spring.csv
@@ -0,0 +1,7 @@
+Topic,Date,Recording/Slides,Instructor
+"
[Open OnDemand on Wulver](../archived/2025/6_intro_to_OnDemand)
Provides information on NJIT’s Open OnDemand portal, a browser-based gateway to the Wulver cluster and shared storage. With a focus on streamlining your HPC workflows, you will explore common scenarios and tasks through interactive demos. This webinar provides a detailed understanding of how to manage your files on the cluster, run interactive applications like Jupyter Notebook and RStudio, launch a full Linux desktop environment in a browser, and submit and monitor SLURM jobs. Additionally, it provides information on how to track resource usage and optimize your job performance for efficient computing on the Wulver cluster.", April 30th 2025,[:fontawesome-brands-youtube:](../Workshop_and_Training_Videos/archived/#open-ondemand-on-wulver),"[Hui(Julia) Zhao](../../about/index.md#research-computing-facilitator)"
+"
[Parallel Computing with MATLAB: Hands on workshop](../archived/2025/5_parallel_computing_with_matlab)
This hands-on workshop introduces parallel and distributed computing in MATLAB with a focus on speeding up application code and offloading computations. By working through common scenarios and workflows using hands-on demos, this webinar provides a detailed understanding of the parallel constructs in MATLAB, their capabilities, and some of the common hurdles that users encounter when using them.", April 16th 2025, [:fontawesome-brands-youtube:](https://www.mathworks.com/company/events/webinars/upcoming/parallel-computing-with-matlab-hands-on-workshop-4777000.html),"[Evan Cosgrove](mailto:ecosgrov@mathworks.com)"
+"
[Introduction to Linux](../archived/2025/4_intro_to_linux)
This webinar introduces the basics of Linux, essential for working in High-Performance Computing (HPC) environments.", March 26th 2025, [:fontawesome-brands-youtube:](../Workshop_and_Training_Videos/archived/#introduction-to-linux),"[Abhishek Mukherjee](../../about/index.md#research-computing-facilitator_1)"
+"
[Python and Conda Environments in HPC: From Basics to Best Practices](../archived/2025/3_conda_training.md)
This workshop covers the basics of using Python for High-Performance Computing (HPC) and effectively managing Python environments with Conda. This webinar empowers participants to leverage the power of Python for their scientific computing needs on HPC systems.", Feb 26th 2025,[:fontawesome-brands-youtube:](../Workshop_and_Training_Videos/archived/#python-and-conda-environments-in-hpc-from-basics-to-best-practices),"[Hui(Julia) Zhao](../../about/index.md#research-computing-facilitator)"
+"
[Introduction to Wulver: Accessing System & Running Jobs](../archived/2025/2_intro_to_Wulver_II)
The HPC training event focuses on providing the fundamentals of SLURM (Simple Linux Utility for Resource Management), a workload manager. This virtual session will equip you with the essential skills needed to effectively utilize HPC resources using SLURM.", Jan 29th 2025, [:fontawesome-brands-youtube:](../Workshop_and_Training_Videos/archived/#introduction-to-wulver-accessing-system-running-jobs),"[Abhishek Mukherjee](../../about/index.md#research-computing-facilitator_1)"
+"
[Introduction to Wulver: Getting Started](../archived/2025/1_intro_to_Wulver_I)
This webinar introduces NJIT's HPC environment, Wulver. This virtual session will provide essential information about the Wulver cluster, how to get an account, and allocation details.", Jan 22nd 2025, [:fontawesome-brands-youtube:](../Workshop_and_Training_Videos/archived/#introduction-to-wulver-getting-started),"[Abhishek Mukherjee](../../about/index.md#research-computing-facilitator_1)"
\ No newline at end of file
diff --git a/docs/assets/tables/trainings/2025_summer.csv b/docs/assets/tables/trainings/2025_summer.csv
new file mode 100644
index 000000000..8e37956fc
--- /dev/null
+++ b/docs/assets/tables/trainings/2025_summer.csv
@@ -0,0 +1,3 @@
+Topic,Date,Recording/Slides,Instructor
+"
[Machine Learning and Big Data Workshop](../archived/2025/8_PSC_Machine_Learning_workshop)
This workshop focuses on topics including big data analytics and machine learning with Spark, and deep learning using Tensorflow.","July 29-30, 2025", [:fontawesome-regular-file-powerpoint:](https://support.access-ci.org/events/8089),[Pittsburgh Supercomputing Center](https://www.psc.edu/)
+"
[MATLAB Parallel Computing Hands-On Using Wulver](../archived/2025/7_MATLAB_on_Wulver)
This interactive webinar guides participants through practical techniques for accelerating code and workflows using MATLAB’s parallel computing tools. Through live demonstrations and guided examples, this webinar provides a solid understanding of how to parallelize MATLAB code, overcome common challenges, and optimize performance across distributed computing environments.", June 12th 2025, Check the materials [here](https://content.mathworks.com/viewer/68011d833a4cc495e37e5b7c),"[Evan Cosgrove](mailto:ecosgrov@mathworks.com) (MATLAB)"
diff --git a/docs/assets/tables/trainings/2026_spring.csv b/docs/assets/tables/trainings/2026_spring.csv
new file mode 100644
index 000000000..36e98558b
--- /dev/null
+++ b/docs/assets/tables/trainings/2026_spring.csv
@@ -0,0 +1,8 @@
+Topic,Date,"Registration
Link",Location,Recording,Slides,Instructor/
Facilitator
+"[Intro to Wulver: HPC Resources & Allocations]()
This virtual session will provide essential information about the Wulver cluster, how to get an account, allocation details, and how to access installed software.","January 28
2:30pm - 3:30pm","[:fontawesome-solid-user-plus:](https://njit-edu.zoom.us/webinar/register/WN_ZuEAlu2tTCm2nfMUhPH0sg)",Online ,-, -,"[Abhishek Mukherjee](../about/index.md#research-computing-facilitator_1)"
+"[COMPLECS: Intermediate Linux]()
Linux command line interface (CLI) skills are essential for advanced cyberinfrastructure (CI). This session covers filesystem hierarchy, permissions, links, wildcards, finding files, environment variables, modules, config files, aliases, history & Bash scripting tips. ","January 29
2:00pm - 3:30pm","[:fontawesome-solid-user-plus:](https://na.eventscloud.com/intermediate-linux-01-29-26)",Online,-,-,"External Event,
hosted by [SDSC](https://www.sdsc.edu/)"
+"[Intro to Wulver: Job Scheduler & Running Jobs]()
This webinar will introduce researchers, scientists, and HPC users to the fundamentals of the SLURM (Simple Linux Utility for Resource Management) workload manager. This virtual session will provide the information on effectively utilizing HPC resources through SLURM. ","February 4
2:30pm - 3:30pm",[:fontawesome-solid-user-plus:](https://njit-edu.zoom.us/webinar/register/WN_IO5oMXbkREqW5aWsf3f-Wg),Online,-,-,"[Abhishek Mukherjee](../about/index.md#research-computing-facilitator_1)"
+"[HPC User Meeting - HPC Updates]()
The details will be updated soon ","February 17
2:30pm - 3:30pm",Will be updated soon,"Will be updated soon",In-Person Event,-,"[Hui(Julia) Zhao](../about/index.md#research-computing-facilitator)"
+"[HPC User Meeting]()
The details will be updated soon. ","March 24
2:30pm - 3:30pm",Will be updated soon,Will be updated soon,In-Person Event,-,"[Hui(Julia) Zhao](../about/index.md#research-computing-facilitator)"
+"[Introduction to Containers]()
This webinar will provide an introductory understanding of container technology and its advantages in high-performance computing environments ","April 15
2:30pm - 3:30pm",Will be updated soon,Online,-,-,"[Hui(Julia) Zhao](../about/index.md#research-computing-facilitator)"
+"[HPC User Meeting - Containers Hands On]()
Join us for this in-person and virtual session to gain hands-on experience with containers.","April 21
2:30pm - 3:30pm",Will be updated soon,Will be updated soon,-,-,[Hui(Julia) Zhao](../about/index.md#research-computing-facilitator)
diff --git a/docs/assets/tables/wulver.csv b/docs/assets/tables/wulver.csv
index f03d42851..6683cec92 100644
--- a/docs/assets/tables/wulver.csv
+++ b/docs/assets/tables/wulver.csv
@@ -1,5 +1,5 @@
- ,Regular,Bigmem,GPU
-Partition,regular,bigmem,gpu
+ ,General,Bigmem,GPU
+Partition,`general`,`bigmem`,`gpu`
Nodes,100,2,25
CPU Type,AMD EPYC 7753,AMD EPYC 7753,AMD EPYC 7713
CPU Speed (GHZ),2.45,2.45,2.0
diff --git a/docs/clusters/Wulver_filesystems.md b/docs/clusters/Wulver_filesystems.md
new file mode 100644
index 000000000..6c197384c
--- /dev/null
+++ b/docs/clusters/Wulver_filesystems.md
@@ -0,0 +1,16 @@
+# Wulver Filesystems
+
+The Wulver environment is quite a bit like Lochness, but there are some key differences, especially in filesystems and SLURM partitions and priorities.
+
+ Wulver Filesystems are deployed with more attention to PI ownership / group efforts:
+
+1. The `$HOME` directory is not intended for primary storage and has a 50GB quota. The main location for storing files is the group project directory, which has 2TB of storage per PI group; use it for simulations, compilations, etc. Students can store their files under their corresponding PI’s UCID in the `/project` directory. For example, if the PI’s UCID is `doctorx`, students should use the `/project/doctorx/` directory.
+2. Users can also store temporary files under the `/scratch` directory, likewise under a PI-group directory; for example, if the PI’s UCID is `doctorx`, students should use the `/scratch/doctorx/` directory. Files under `/scratch` are not backed up, and files older than 30 days are periodically deleted; users are notified prior to deletion so that they can review the files and move them to `/project` if required. For best performance, simulations should be run in the `/scratch` directory, and once a simulation is complete the results should be copied to the `$HOME` or `/project` directory. To keep files beyond the lifetime of a computation, use the `/project` directory. A sketch of this workflow is shown at the end of this page.
+
+```python exec="on"
+import pandas as pd
+
+df = pd.read_csv('docs/assets/tables/Wulver_filesystems.csv', keep_default_na=False, na_filter=False)
+print(df.to_markdown(index=False))
+```
+
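+As an example of the workflow described above, assuming the PI's UCID is `doctorx` (replace it with your own PI's UCID) and assuming you keep a personal subdirectory named after your own UCID:
+
+```bash
+# Work in the scratch area while the job runs (fast, not backed up, purged after 30 days)
+mkdir -p /scratch/doctorx/$USER/my_simulation
+cd /scratch/doctorx/$USER/my_simulation
+
+# ... run your simulation here, e.g. by submitting a batch job ...
+
+# When the run is finished, copy the results you want to keep to the project space
+mkdir -p /project/doctorx/$USER
+cp -r results /project/doctorx/$USER/
+```
+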
diff --git a/docs/clusters/cluster_access.md b/docs/clusters/cluster_access.md
index bf914fdfc..4ef1b8670 100644
--- a/docs/clusters/cluster_access.md
+++ b/docs/clusters/cluster_access.md
@@ -8,37 +8,43 @@ Faculty can obtain a login to NJIT's HPC by sending an email to [hpc@njit.edu](m
Make sure the user is connected to `NJITsecure` if the user is on campus. If working off campus, NJIT VPN is required. Please find the details [here](https://ist.njit.edu/vpn).
Here we will provide instructions for connecting to NJIT HPC on Mac/Linux and Windows OS.
+!!! Update
+ In the recent update (Sep 10th 2024) Cisco two-factor authentication (TFA) is deployed, similar to what is already deployed on NJIT websites and VPN.
=== "Mac/Linux"
Open terminal from Launchpad and select terminal.
- Type the following in the terminal by substituting `wulver` or `lochness` for `HPC_HOST` and `ucid` with your NJIT UCID.
+ Type the following in the terminal and replace `$UCID` with your NJIT UCID.
```
- localhost> ssh -X -Y ucid@HPC_HOST.njit.edu
+ localhost> ssh -X -Y $UCID@wulver.njit.edu
```
- If you don’t yet have a public SSH key for your local machine, you need to initialize one. The process of doing so differs across operating systems. The Linux and Mac system users simply need to run the command `ssh-keygen`, which will store the keys in the `~/.ssh` folder.
- Users will be prompted for your password. Enter your NJIT UCID password. Users can omit the `-X -Y` if you are not using a graphic interface.
- Once the password is provided, the user will see the following
+ You will be prompted for your password; enter your NJIT UCID password. You can omit `-X -Y` if you are not using a graphical interface. Once the password is provided, you will authenticate via Cisco two-factor authentication (TFA):
```
- The authenticity of host 'user@lochness.njit.edu' cannot be established.
- DSA key fingerprint is 01:23:45:67:89:ab:cd:ef:ff:fe:dc:ba:98:76:54:32:10.
- Are you sure you want to continue connecting (yes/no)?
+ (guest@wulver.njit.edu) Duo two-factor login for guest
+
+ Enter a passcode or select one of the following options:
+
+ 1. Duo Push to XXX-XXX-2332
+ 2. Phone call to XXX-XXX-2332
+ 3. SMS passcodes to XXX-XXX-2332 (next code starts with: 1)
+
+ Passcode or option (1-3):
```
- Answering `yes` to the prompt will cause the session to continue. Once the host key has been stored in the known_hosts file, the client system can connect directly to that server again without the need for any approvals.
+ Based on the option selected, the user will be logged in after successful authentication.
=== "Windows"
- Download the MobaXterm from this [link](https://ist.njit.edu/software-available-download/).
+ Download the MobaXterm from this [link](https://mobaxterm.mobatek.net/download-home-edition.html).
Open MobaXterm after installation is completed.
Select Start local terminal to open the terminal.

- Type `ssh ucid@HPC_HOST.njit.edu`. Replace `ucid` with your NJIT UCID and substitute `wulver` or `lochness` for `HPC_HOST`.
+ Type `ssh $UCID@wulver.njit.edu`. Replace `$UCID` with your NJIT UCID.
.
@@ -67,33 +73,39 @@ Here we will provide instructions for connecting to NJIT HPC on Mac/Linux and Wi
login-1-41 ~ >:
```
-## Key based Authentication to the NJIT Cluster
-
+## Transfer the Data from the Local Machine to Clusters or vice versa
=== "Mac/Linux"
- To access to NJIT cluster without the password, you need to have a public ssh key on your Mac or Linux system. If you don’t yet have a public SSH key for your local machine, you need to initialize one. The process of doing so differs across operating systems. The Linux and Mac system users simply need to run the command `ssh-keygen`, which will store the keys in the `~/.ssh` folder. The public key is typically `id_rsa.pub` which is located in `~/.ssh/`.
- Once you have a public SSH key, copy it to the set of authorized keys on the computing cluster. Since you’ve already connected to the cluster in the previous step, simply navigate to the `/.ssh` folder on the computing cluster and open the file `~/.ssh/authorized_keys` and paste the content of your public key from your local machine. Double-check that the pasted key begins with `ssh-rsa`.
+ Use one of the following commands in the terminal to transfer data to and from the cluster.
-=== "Windows"
-
- Windows users can save the public key through MobaXterm settings.
-
-## Transfer the Data from the Local Machine to Clusters or vice versa
-
-=== "Mac/Linux"
+ ### `rsync`:
+ * Transfer the data from local machine to HPC cluster
+ ```
+ rsync -avzP /path/to/local/machine $UCID@wulver.njit.edu:/path/to/destination
+ ```
+
+ * To transfer the data from HPC cluster to local machine use
+ ```
+ rsync -avzP $UCID@wulver.njit.edu:/path/to/source /path/to/local/machine
+ ```
+ ### `scp`:
+ * Copy files from remote machine to local machine
+ ```
+ scp [option] [$UCID@wulver.njit.edu:path/to/source/file] [target/path]
+ ```
- Users need to use the command in the terminal to transfer in and out the data
-
+ * Copy files from local machine to remote machine
```
- rsync -avzP /path/to/local/machine ucid@HPC_HOST.njit.edu:/path/to/destination
+ scp [option] [path/to/source/file] [$UCID@wulver.njit.edu:target/path]
```
- Replace `HPC_HOST` with `lochness` or `wulver`. This will transfer the data from the ocal machine to HPC cluster.
- To transfer the data from HPC cluster to local machine use
-
+
+ * Example of scp:
```
- rsync -avzP ucid@HPC_HOST.njit.edu:/path/to/source /path/to/local/machine
+ scp -r example $UCID@wulver.njit.edu:/home/dir
```
+ Copy the “example” folder recursively to `/home/dir`
+
=== "Windows"
@@ -101,10 +113,10 @@ Here we will provide instructions for connecting to NJIT HPC on Mac/Linux and Wi

- Next, to transfer the data from the local machine to Lochness user needs to select the `Upload to current folder` option, as shown below. Selecting this option will open a dialogue box where user needs to select the files to upload.
+ Next, to transfer the data from the local machine to Wulver user needs to select the `Upload to current folder` option, as shown below. Selecting this option will open a dialogue box where user needs to select the files to upload.

- For transferring the data from Lochness to the local machine user needs to select the directory or the data from the left pane and then select `Download selected files`.
+ For transferring the data from Wulver to the local machine user needs to select the directory or the data from the left pane and then select `Download selected files`.
diff --git a/docs/clusters/lochness.md b/docs/clusters/decommissioned/lochness.md
similarity index 92%
rename from docs/clusters/lochness.md
rename to docs/clusters/decommissioned/lochness.md
index e78b1e5e0..77429c6a0 100644
--- a/docs/clusters/lochness.md
+++ b/docs/clusters/decommissioned/lochness.md
@@ -2,7 +2,7 @@
This very heterogeneous cluster is a mix of manufacturers, components, and capacities as it was built up in incremental purchases spanning several years.
-!!! warning "Much of lochness will be incorporated into the new Wulver cluster 3Q 2023."
+!!! warning "Lochness was decommissioned on March 2024. Much of lochness nodes will be incorporated into [Wulver cluster](wulver.md) 2Q 2024."
## Specifications:
diff --git a/docs/clusters/get_started_on_Wulver.md b/docs/clusters/get_started_on_Wulver.md
index 87b82d0d8..861a0a6eb 100644
--- a/docs/clusters/get_started_on_Wulver.md
+++ b/docs/clusters/get_started_on_Wulver.md
@@ -1,13 +1,45 @@
-# Access to Wulver
-Wulver is a new cluster which will be available by the end of 2023. To see the specifications of Wulver cluster, please see the [Wulver Documentation](wulver.md).
-If you already have access to existing clusters such as Lochness, you can log in to Wulver using `ssh ucid@wulver.njit.edu`. Replace `ucid` with your UCID. If you don't have prior access to NJIT cluster, you need to request for access.
-Faculty can obtain a login to NJIT's HPC & BD systems by sending an email to [hpc@njit.edu](mailto:hpc@njit.edu). Students can obtain a login either by taking a class that uses one of the systems or by asking their faculty adviser to [contact](mailto:hpc@njit.edu) on their behalf. Your login and password are the same as for any NJIT AFS system.
+# Getting Started on Wulver
+[Wulver](wulver.md) is a high performance computing (HPC) cluster – a collection of computers and data storage connected with high-speed low-latency networks. We refer to individual computers in this network as nodes. Wulver is only accessible to researchers remotely; your gateways to the cluster are the **login nodes**. From these nodes, you can view and edit files and dispatch jobs to computers configured for computation, called **compute nodes**. The tool we use to manage these jobs is called a **job scheduler**. All compute nodes on a cluster mount several shared **filesystems**; a file server or set of servers store files on a large array of disks. This allows your jobs to access and edit your data from any compute node.
-## Login to Wulver
-Please see the documentation on [cluster access](cluster_access.md) for details.
+{ width=50% height=50%}
+
+## Being a Good HPC Citizen
+While using HPC resources, here are some important things to remember:
+
+* Do not run jobs or computation on a login node, instead submit jobs to compute nodes. You should be using `sbatch`, `srun`, or OnDemand to run your jobs.
+* Never give your password to anyone else.
+* Do not run large numbers of very short (less than a minute) jobs.
+
+Use of the clusters is also governed by our official guidelines. Violating the guidelines might result in having your access to Wulver revoked, but more often the result is that your jobs will run painfully slowly.
+
+## Remote Access
+All users access the Wulver cluster remotely, either through ssh or a browser using the Open OnDemand portal. See these detailed [login instructions](cluster_access.md). NB: If you want to access the clusters from outside NJIT’s network, you must use the VPN.
+
+## Schedule a Job
+On our cluster, you control your jobs using a job scheduling system called **Slurm** that allocates and manages compute resources for you. You can submit your jobs in one of two ways. For testing and small jobs, you may want to run a job interactively. This way you can directly interact with the compute node(s) in real time. The other way, which is the preferred way for multiple jobs or long-running jobs, involves writing your job commands in a script and submitting that to the job scheduler. Please see our [Running Jobs](../Running_jobs/index.md) or review our [training materials](../HPC_Events_and_Workshops/index.md) for more details.
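+
+A minimal sketch of a batch script is shown below; the account name and the resource values are placeholders to adjust to your own allocation and to the partition and QoS tables in this documentation.
+
+```bash
+#!/bin/bash
+#SBATCH --job-name=my_test            # name shown in the queue
+#SBATCH --partition=general           # see the partitions table for other options
+#SBATCH --qos=standard                # charged against your PI group's SUs
+#SBATCH --account=doctorx             # placeholder: replace with your PI group's account
+#SBATCH --ntasks=1
+#SBATCH --cpus-per-task=4
+#SBATCH --mem=8G
+#SBATCH --time=01:00:00               # hh:mm:ss, must fit within the QoS wall-time limit
+
+module load foss/2024a Python/3.12.3  # load whatever software the job needs
+python my_script.py
+```
+
+Save it as, for example, `submit.sh` and submit it with `sbatch submit.sh`. For a short interactive test, something like `srun --partition=debug --qos=debug --ntasks=1 --time=00:30:00 --pty bash` requests a shell on a compute node.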
+
+## Use Software
+To best serve the diverse needs of all our researchers, we use software modules to make multiple versions of popular software available. **Modules** allow you to swap between different applications and versions of those applications. Software is loaded via the `module load` command. The following modules are loaded by default when you log in to Wulver; use the `module li` (short for `module list`) command to see them.
+```bash
+ 1) easybuild 2) slurm/wulver 3) null
+```
+If you cannot find certain software or libraries on the Wulver cluster, please submit a request for [HPC Software Installation](https://njit.service-now.com/sp?id=sc_cat_item&sys_id=0746c1f31b6691d04c82cddf034bcbe2&sysparm_category=405f99b41b5b1d507241400abc4bcb6b) by visiting the [Service Catalog](https://njit.service-now.com/sp?id=sc_category). The list of installed software or packages on the Wulver HPC cluster can be found in the [Software List](../Software/index.md#software-list).
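+
+For example, assuming you want a particular Python version from the Software List (the module names below are taken from the current list and may change as software is updated), a typical session might look like:
+
+```bash
+module avail                          # list the modules available on Wulver
+module load foss/2024a Python/3.12.3  # load a toolchain and an application built with it
+module list                           # confirm what is currently loaded
+module purge                          # unload everything and start clean
+```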
+
+## Shared Filesystems
+A critical component of Wulver is its shared filesystem, which facilitates the efficient storage, retrieval, and sharing of data among the various compute nodes. It enables multiple users and applications to read from and write to a common storage pool, ensuring data consistency and accessibility across the entire system.
+See [Wulver Filesystems](Wulver_filesystems.md) for details on the different shared filesystems.
+
+## Transfer Your Files
+As part of setting up and running jobs and collecting results, you will want to copy files between your computer and the clusters. We have a few options, and the best for each situation usually depends on the size and number of files you would like to transfer. For most situations, uploading a small number of smaller files through Open OnDemand's upload interface is the best option. This can be done directly through the file viewer interface by clicking the Upload button and dragging and dropping your files into the upload window. Check the [OnDemand file transfer](../OnDemand/index.md#files) page for more details. For more information on other upload methods, see our [transferring data instructions](cluster_access.md#transfer-your-files).
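+
+For larger transfers from a terminal, `scp` or `rsync` over SSH is a common option (a sketch; replace the placeholders with your UCID, the login node address from the login instructions, and your own paths):
+
+```bash
+# Copy a single file from your computer to your home directory on the cluster
+scp results.csv <UCID>@<login-node-address>:~/
+
+# Synchronize a directory, resuming partial transfers and showing progress
+rsync -avP my_data/ <UCID>@<login-node-address>:/scratch/<PI_UCID>/my_data/
+```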
+
+## Workshop and Training Videos
+Each semester, we host webinars and training sessions. Please check the list of events and register from [HPC Events](../HPC_Events_and_Workshops/index.md). You can also check the recordings of previously hosted webinars at [HPC Training](../HPC_Events_and_Workshops/Workshop_and_Training_Videos/index.md).
+
+## Linux
+Our cluster runs Red Hat Enterprise Linux 8, using the bash (or zsh, set via https://myucid.njit.edu) command line interface (CLI). Basic familiarity with Linux commands is required to interact with the clusters. A list of commonly used commands appears in the table below. We periodically run an Intro to Linux training to get you started; see our [HPC Events](../HPC_Events_and_Workshops/index.md) for upcoming sessions. There are also many excellent beginner tutorials available for free online, including the following:
+
+* [Unix Tutorial for Beginners](https://info-ee.surrey.ac.uk/Teaching/Unix/index.html)
+* [Cornell Virtual Workshop: An Introduction to Linux](https://cvw.cac.cornell.edu/Linux/)
-## General Linux Commands
-Since the operating system (OS) HPC cluster is linux based RHEL 8, users need to know linux to use the cluster.
In the table below, you can find the basic linux commands required to use the cluster. For more details on linux commands, please see the [Linux commands cheat sheets](https://www.linuxtrainingacademy.com/linux-commands-cheat-sheet).
```python exec="on"
@@ -17,22 +49,17 @@ df = pd.read_csv('docs/assets/tables/commands.csv')
print(df.to_markdown(index=False))
```
-## Wulver Filesystems
-
-The Wulver environment is quite a bit like Lochness, but there are some key differences, especially in filesystems and SLURM partitions and priorities.
+## Get Help
+If you have additional questions, please email us at hpc@njit.edu. If you are having a problem with `sbatch` or `srun`, please include the following information:
- Wulver Filesystems are deployed with more attention to PI ownership / group efforts:
+* Job ID#(s)
+* Error messages
+* Command used to submit the job(s)
+* Path(s) to scripts called by the submission command
+* Path(s) to output files from your jobs
-1. The `$HOME` directory is not intended for primary storage and will have only 50GB quota. To run the simulations, compilations, etc., users need to use a project directory which has 2TB of storage per PI group. Students can store their files under their corresponding PI’s UCID in the `/project` directory. For example, if PI’s UCID is `doctorx`, then students need to use the `/project/doctorx/` directory.
-2. Users can also store temporary files under the `/scratch` directory, likewise under a PI-group directory. For example, PI’s UCID is `doctorx`, so students need to use the `/scratch/doctorx/` directory. Please note that the files under `/scratch` will be periodically deleted. To store files for longer than computations, please use the `/project` directory. Files under `/scratch` are not backed up. For best performance simulations should be performed in the `/scratch` directory. Once the simulation is complete, the results should be copied into the `$HOME` or `/project` directory. Files are deleted from `/scratch` after they are 10 days old.
-
-## Software Availability
-The software can be loaded via `module load` command. You will see the following modules are loaded once you log in to the Wulver (Use the `module li` command to see the modules).
-```bash
- 1) easybuild 2) slurm/wulver 3) null
-```
-Please check [Software](../Software/index.md) to see how to load specific applications. If you somehow used `module purge` then to check available modules you need to load `module load wulver` command and this will load all the default modules.
+Here are some tips to get help faster:
-## Slurm Configuration
+* The fastest way to get help is by sending an email to [hpc@njit.edu](mailto:hpc@njit.edu); it is routed directly to us at [ARCS HPC](../about/index.md) and will be answered by the most knowledgeable team member based on your question.
+* Do not reply to an earlier HPC email incident; the ticketing system does not create a new ticket when you respond to an already closed ticket.
-Please see [SLURM](slurm.md) for details in the slurm configuration.
diff --git a/docs/clusters/index.md b/docs/clusters/index.md
new file mode 100644
index 000000000..36fe78c56
--- /dev/null
+++ b/docs/clusters/index.md
@@ -0,0 +1,16 @@
+# Current cluster
+
+* [Wulver](wulver.md) is NJIT's newest high performance computing cluster, made available to users in January 2024.
+
+{ width=80% height=80%}
+
+
+- ## Virtual Tour of NJIT Data Center
+
+ ---
+
+    Wulver is built through a partnership with [DataBank](https://www.databank.com/) and is hosted in DataBank’s Piscataway, N.J. data center (EWR2). This infrastructure will bolster NJIT’s research initiatives. You can access the 3D virtual tour of the HPC data center below:
+
+
+
+
\ No newline at end of file
diff --git a/docs/clusters/wulver.md b/docs/clusters/wulver.md
index e2987c980..b1d377540 100644
--- a/docs/clusters/wulver.md
+++ b/docs/clusters/wulver.md
@@ -1,6 +1,6 @@
# Wulver
-Wulver is a new cluster anticipated to be available at the end of 2023.
+Wulver is NJIT's newest cluster, which went into production in early 2024. To get started with Wulver, please check the details in [Getting started on Wulver](get_started_on_Wulver.md).
## Specifications:
diff --git a/docs/facilities/facilities.md b/docs/facilities/facilities.md
index 49e2336d4..64353a78c 100644
--- a/docs/facilities/facilities.md
+++ b/docs/facilities/facilities.md
@@ -1,11 +1,11 @@
-# Introduction
+# Facilities
As New Jersey’s Science and Technology University, New Jersey Institute of Technology (NJIT) has developed a local cyberinfrastructure well positioned to allow NJIT faculty and students to collaborate at local, national, and global levels on many issues at the forefront of science and engineering research.
## Cyberinfrastructure
NJIT’s Information Services and Technology (IST) resources provide members of the university community with universal access to a wealth of resources and services available over the NJIT network. NJIT's multi-100 gigabit backbone and multi-gigabit user network provides access to classrooms, laboratories, residence halls, faculty and staff offices, the library, student organization offices and others. 50% of these locations are provided with speeds of 5Gb/s or more. The campus wireless network 2,900+ access points blanket the university’s public areas, classrooms, offices, collaboration spaces and outdoor areas. Over 60% of the Wi-Fi network is Wi-fi 6 capable, enabling NJIT’s mobile users connectivity with multiple devices at increasing speeds. NJIT’s connectivity to NJEdge, NJ’s state-wide higher education network, provides access to the Internet and Internet 2. Students have the opportunity to work closely with faculty and researchers as new families of advanced applications are developed for an increasingly networked and information-based society. NJIT is also directly connected to cloud service providers such as AWS (Amazon Web Services) to provide low latency high speed access to cloud resources. A redundant diverse 100Gb network connection is being provisioned to support NJIT’s new HPC co-location facility in Piscataway NJ.
## Compute Resources
-NJIT’s Advanced Research Computing Services (ARCS) group presently maintains two HPC clusters, a 224 node heterogenous condominium cluster Lochness. In addition, multiple IST teams are working on deploying Wulver, a 127 node heterogeneous condominium cluster at the new HPC co-location facility, Databank in Piscataway NJ, expected to be available for general use by the end of 2023.
+NJIT’s Advanced Research Computing Services (ARCS) group presently maintains Wulver, a 127-node heterogeneous condominium cluster at the new HPC co-location facility, Databank, in Piscataway, NJ, which went into production for general use in early 2024. The previous 224-node heterogeneous condominium cluster, Lochness, was decommissioned; however, a few nodes from Lochness will soon be integrated into Wulver for academic use.
```python exec="on"
import pandas as pd
@@ -22,13 +22,15 @@ The ARCS team provides 50TB of research and academic storage via the Andrew File
## AFS Storage
AFS is a distributed file system. Its principal components are :
-Database servers : provide information on authorization, and directory and file locations on file servers File servers : store all data in discrete "volumes", with associated quotas Client software : connects AFS clients to database servers and file servers over a network. Every client has access to every file in AFS, subject to the permissions attached to the identity of the user logged into the client. Client software is available for Linux, MacOS, and Windows. Single global name space for all clients : all clients see the identical path names and permissions.
+* Database servers : provide information on authorization, and directory and file locations on file servers
+* File servers : store all data in discrete "volumes", with associated quotas
+* Client software : connects AFS clients to database servers and file servers over a network. Every client has access to every file in AFS, subject to the permissions attached to the identity of the user logged into the client. Client software is available for Linux, MacOS, and Windows.
+* Single global name space for all clients : all clients see identical path names and permissions.
An AFS "cell" is a group of database servers and file servers sharing the same cell name.
AFS was designed for highly-distributed wide area network (WAN) use. A cell can be concentrated in one physical location, or be widely geographically dispersed. A client in one cell can be given fine-grained levels of access to the data in other cells.
-The NJIT cell name is "cad.njit.edu"; all file and directory paths begin with `/afs/cad.njit.edu/`(abbreviated to `/afs/cad/`). This cell currently contains about 27TB of research data, 4TB of user data,and 1.4TB of applications data, in about 47,700 volumes.
+The NJIT cell name is `cad.njit.edu`; all file and directory paths begin with `/afs/cad.njit.edu/` (abbreviated to `/afs/cad/`). This cell currently contains about 27TB of research data, 4TB of user data, and 1.4TB of applications data, in about 47,700 volumes.
The current AFS implementation, OpenAFS, which is open source, will be replaced with a commercial implementation during the 2022-2023 academic year, providing important enhancements in performance, security,capacities, authorization, permissions, and administration as well as bug fixes and technical support.
@@ -38,7 +40,8 @@ All of `/afs/cad/` is backed up daily via IST enterprise backup.
Access to Cloud Computing is provided via Rescale, a cloud computing platform combining scientific software with high performance computing. Rescale takes advantage of commercial cloud computing vendors such as AWS, Azure and Google Cloud to provide compute cycles as well as storage. The Rescale services also include applications setup, and billing and provides a pay-as-you-go method for researchers to use commercial cloud services - e.g., Amazon Web Services, Azure, Google Cloud Platform.
## Data Center: Space, Power and Cooling
-NJIT’s recent purchase of the Wulver cluster exceeds the capacity of NJIT’s current computing facilities in terms of power and cooling. To accommodate Wulver and future expansion, NJIT has partnered with Databank, a leader in colocation facilities. Databank has more than 65 datacenters in over 27 metropolitan areas, supporting many industries including very large HPC deployments. The Databank location in Piscataway NJ will provide NJIT with 100% uptime SLA due to redundant power, cooling, and network facilities. The facility also provides water-cooling instead of traditional air-conditioned cooling in order to support far denser equipment needed for modern HPC.
+NJIT’s recent purchase of the Wulver cluster exceeds the capacity of NJIT’s current computing facilities in terms of power and cooling. To accommodate Wulver and future expansion, NJIT has partnered with Databank, a leader in colocation facilities. Databank has more than 65 datacenters in over 27 metropolitan areas, supporting many industries including very large HPC deployments. The Databank location in Piscataway, NJ will provide NJIT with a 100% uptime SLA due to redundant power, cooling, and network facilities. The facility also provides water cooling instead of traditional air-conditioned cooling in order to support the far denser equipment needed for modern HPC.
+
## Services
-NJIT employs five FTE staff members with a budget to hire three additional FTE’s to administer and support research computing resources. Services available to the user community include system design, installation, and administration of research computing resources, application support, assistance with software purchasing and consulting services to faculty members, their research associates, and students.
\ No newline at end of file
+NJIT employs nine FTE staff members to administer and support research computing resources. Services available to the user community include system design, installation, and administration of research computing resources; application support; assistance with software purchasing; and consulting services for faculty members, their research associates, and students.
diff --git a/docs/faq/faq.md b/docs/faq/faq.md
new file mode 100644
index 000000000..ff8cce6ba
--- /dev/null
+++ b/docs/faq/faq.md
@@ -0,0 +1,153 @@
+# FAQs
+
+Welcome to our most frequently asked questions. If you cannot find an entry related to your question, please [contact us](contact.md), and we will add it.
+
+## Login Issues / Access
+
+??? question "How do I get access to the Wulver HPC cluster?"
+ * If you are a student or researcher, your research/faculty advisor will need to request an account on your behalf.
+ * If you are a faculty member, then you can directly email us at hpc@njit.edu for an account.
+    * Individuals not affiliated with NJIT must ask an NJIT faculty member to sponsor a guest account on their behalf.
+ * If you are taking a course that requires computation on Wulver please ask your course instructor to email hpc@njit.edu to request access for the students.
+    * For detailed information, please see the [cluster access](cluster_access.md) page.
+
+??? question "How do I connect to the Wulver HPC cluster?"
+    * For detailed information, please see the [cluster access](cluster_access.md) page.
+
+??? question "Why can’t I log in to the HPC?"
+    - It’s possible that your account is not activated on Wulver. Make sure to follow the instructions in [How do I get access to the Wulver HPC cluster?](#login-issues-access) above. Once your account is created, you will receive a confirmation in your NJIT email.
+ - Ensure that your UCID and password are correct.
+ - If you are working off-campus, make sure to connect to NJIT VPN before logging into Wulver. Visit https://ist.njit.edu/vpn for more information on VPN installation.
+
+??? question "Why is my password not showing in the terminal when I type?"
+ - The Command Line Interface hides passwords as you type to protect against visual exposure and enhance security.
+
+??? question "Why am I getting “permission denied”?"
+ - You might not have access to the HPC yet. For requesting access, please check our [user access](cluster_access.md).
+ - Your instructor/PI might not have added you into the group yet.
+    - The cluster might be down for maintenance or other reasons.
+
+??? question "How do I transfer data to and from the HPC cluster?"
+ - Detailed instructions are given [here](cluster_access.md#transfer-the-data-from-the-local-machine-to-clusters-or-vice-versa).
+
+??? question "What security measures should I be aware of when using the HPC cluster?"
+    - Do not share your login information with anyone else or allow anyone to log in with your account.
+
+??? question "Which directory will I land on when I log in?"
+    - You will land in your home directory, which is named after your UCID.
+ - Please note that you are on the login node of Wulver. Do not run any computations on the login nodes, as CPU and memory usage are limited per user on these nodes. To perform computations, you need to request resources from the compute nodes via [SLURM](../Running_jobs/index.md).
+
+## File Systems / Storage
+
+??? question "What are the different file storage systems available on Wulver?"
+ - Please visit our [file system](Wulver_filesystems.md) page for detailed information.
+
+??? question "How can I get more storage?"
+    - When your account is active on Wulver, you will have access to the following file systems:
+ - **$HOME** directory - Default quota of 50GB per user and cannot be increased
+ - **/project** - Default quota of 2TB per PI group and can be increased by purchasing additional storage at $200/TB for 5 years.
+ - **/scratch** - No quota but files will be deleted after 30 days or sooner if the directory reaches 80% capacity. Users will be notified prior to deletion so they can review and move important files to `/project` if necessary.
+ - For more details, visit [file system](Wulver_filesystems.md).
+
+??? question "I have an error “disk quota exceeded” while running a job, how can I solve this error?"
+    - First, check which filesystem is causing the error. If it’s in `$HOME`, the 50GB quota has been exceeded; you can delete files, compress them, or move them to `/project` (see the example below). If the error is in `/project`, you need to compress or delete some files, or your PI can purchase an increase in the `/project` quota by emailing us at hpc@njit.edu. You can also run your code in `/scratch` and, after the simulation, transfer the important files to `/project`.
+    - Use the `quota_info` command to check the filesystem quota.
+    - If you are encountering this error while running Python or Jupyter Notebook via Conda, the issue is likely due to Conda packages or cache files stored in `$HOME`. Use the `sn p $HOME` command to view a detailed breakdown of each directory in `$HOME`, showing how much space it is consuming. If you notice that the `.cache` directory is consuming a significant amount of space, you should remove it. If the `.conda` directory is taking up too much space, you need to move the Conda environment to `/project`. Refer to [this guide](conda.md#importing-to-a-different-location) for detailed steps.
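+
+    A minimal example of freeing `$HOME` space by compressing a directory and moving the archive to the PI's project space (the directory name and the PI UCID `doctorx` are placeholders):
+
+    ```bash
+    tar -czf results.tar.gz ~/results/      # compress the directory into one archive
+    mv results.tar.gz /project/doctorx/     # move the archive to the PI's project directory
+    rm -rf ~/results/                       # remove the original only after verifying the archive
+    ```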
+
+??? question "What is `/scratch` used for?"
+ - Run your code in `/scratch` and, after the simulation, transfer the important files to `/project`.
+    - **NB**: This space is not backed up; files will be deleted after 30 days.
+
+## Jobs and scheduling
+
+??? question "How do I submit and manage jobs on the Wulver HPC cluster?"
+ - We use the SLURM resource manager and scheduler on Wulver for submitting and managing jobs on the compute nodes.
+ - Check our [Running Jobs page](../Running_jobs/index.md) on the website for more detailed guidance.
+
+??? question "What is Walltime?"
+ - Walltime refers to the maximum amount of real-world time a job is allowed to run on the cluster. The actual job may finish earlier.
+ - To set the walltime, you'll typically specify it in the job submission script:
+ ```bash
+ #SBATCH --time=01:00:00 # Request 1 hour of walltime
+ ```
+
+??? question "How can I monitor the status of my jobs?"
+    - To check and monitor the status of your jobs, please see [this guide](managing-jobs.md) for detailed information; a quick command-line check is shown below.
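+
+    A minimal sketch of checking job status from the command line (`<jobid>` is a placeholder for the job ID reported by `sbatch`):
+
+    ```bash
+    squeue -u $LOGNAME     # list your pending and running jobs
+    sacct -j <jobid>       # accounting details for a specific job, including finished ones
+    ```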
+
+??? question "Where will my output appear after I submit a job?"
+    - Your output will appear in the file that you specified in your job script, which looks like the line below:
+ ```bash
+ #SBATCH --output=file_name.%j.out # %j expands to slurm JobID
+ ```
+    - By default, this file is in the same directory as the job script, unless you specify the working directory via `sbatch`.
+
+??? question "How can I see the status of the cluster?"
+    - To check the cluster load, use this [link](https://hpc.njit.edu/Monitor/load.html).
+
+## Maintenance
+
+??? question "When does maintenance occur?"
+    - The second Tuesday of every month, 9 AM - 9 PM.
+
+??? question "I submitted my job before maintenance, but why did it go to pending status as *"ReqNodeNotAvail, Reserved for maintenance"* ?"
+    - If your job's walltime request indicates the job will not finish before the maintenance starts, then the scheduler will hold the job.
+ - If you resubmit the job with a shorter walltime, it will not be held.
+ - For example: If you submit a job on Monday with a walltime of 2 days and maintenance is scheduled on Tuesday then your job overlaps with the maintenance schedule. Hence, SLURM will immediately put it on hold until the maintenance is completed and start it later.
+
+??? question "How will maintenance affect me?"
+ - During the maintenance downtime, logins will be disabled, users will not have access to their stored data in `/project`, `/home` and `/scratch`.
+ - All jobs that do not end **before 9AM** will be held by the scheduler until the downtime is complete and the systems are returned to service.
+
+??? question "Will I receive a notification before maintenance?"
+ - Our regular monthly maintenance cycle is every 2nd Tuesday. If there is a change to this cycle, all users will receive notification.
+
+??? question "What happens to my jobs during maintenance?"
+    - Jobs queued just before the maintenance will be held in the **Pending** state and will continue after the maintenance is complete.
+
+## Software and Hardware Specifications
+
+??? question "Is there any installed software?"
+ - We have a variety of software packages already installed on Wulver.
+ - For more information, please visit our [Software guide](../Software/index.md).
+
+??? question "How do I install new software?"
+ - Please first check our already installed [software list](../Software/index.md#software-list), and if you still don’t find it then visit our [guide](../Software/index.md) for detailed guidance on software installation.
+
+??? question "What are the hardware specifications of the Wulver HPC cluster?"
+ - Please check out our [Wulver](wulver.md) page for complete details on hardware specifications.
+
+??? question "What programming languages are commonly used on the HPC cluster?"
+    - The most common ones are C, C++, Fortran, Python, and R.
+
+??? question "What are the tools and compilers available on HPC cluster?"
+ - Common programming tools include Intel and GNU compilers as well as MPI for multi-node jobs.
+ - For GPU acceleration we have CUDA and OpenACC.
+ - Details on these resources are available [here](compilers.md).
+
+??? question "How can I optimize my code for parallel processing?"
+ - To optimize your code, you can use different parallelization techniques depending on your setup:
+        - **MPI** and **OpenMP** for parallelizing code across processes and threads, then requesting the matching number of cores (see the sketch after this list).
+ - **CUDA** and **OpenACC** for GPU acceleration, especially for compute-heavy tasks.
+ - Focus on optimizing **CPU**, **I/O**, **memory**, and **parallelism**.
+ - If you have specific questions about this, please email HPC Help at [hpc@njit.edu](mailto:hpc@njit.edu) to request support.
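+
+    As a minimal sketch of the OpenMP case, a batch script can request several cores on one node and pass that count to the program (the executable name and resource values are placeholders; other required options such as account or partition are omitted):
+
+    ```bash
+    #!/bin/bash
+    #SBATCH --job-name=openmp_test
+    #SBATCH --ntasks=1
+    #SBATCH --cpus-per-task=8          # 8 OpenMP threads on a single node
+    #SBATCH --time=01:00:00
+
+    export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
+    ./my_openmp_app                    # placeholder for your OpenMP-enabled executable
+    ```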
+
+??? question "Can I request additional resources or customization for my projects?"
+ - Please see [Wulver Usage and Condo Policy](wulver_policies.md) for resource allocation details.
+ - The Research Computing Advisory Board is working on establishing policies and procedures for requesting additional computing time beyond the standard **300K SU/year**.
+
+??? question "My software requires a license, how should I proceed?"
+    - Licenses are purchased by the user or their department.
+    - Please consult the documentation for your specific software.
+
+## Miscellaneous
+
+??? question "What documentation and support resources are available for Wulver users?"
+ - Please visit the [Education and Training tab](../HPC_Events_and_Workshops/index.md) on our website.
+ - If you have any specific questions which are not covered, please contact us at [hpc@njit.edu](mailto:hpc@njit.edu)
+
+
+
+
+
+
\ No newline at end of file
diff --git a/docs/faq/rhel_9_faq.md b/docs/faq/rhel_9_faq.md
new file mode 100644
index 000000000..12f4da110
--- /dev/null
+++ b/docs/faq/rhel_9_faq.md
@@ -0,0 +1,55 @@
+# FAQ: RHEL8 to RHEL9 OS Upgrade
+
+We are upgrading Wulver's operating system from RHEL8 to RHEL9 during the [September maintenance cycle](../news/posts/2025-07-29.md/#wulver-maintenance).
If you have software installed for your group, complete this [form](https://forms.gle/6RzQE3hB93hMiong7) to request access to the test cluster.
+Please review these frequently asked questions, and contact us if you have further questions.
+
+??? question "What is changing on Wulver with the OS upgrade from RHEL8 to RHEL9?"
+ * The cluster’s operating system is being upgraded from Red Hat Enterprise Linux 8 (RHEL8) to Red Hat Enterprise Linux 9 (RHEL9).
+ * This affects system libraries, compilers, security tools, and the default environment for all users.
+ * A new software stack is being deployed, replacing or upgrading many applications and modules.
+
+
+??? question "What is included in the new software stack?"
+ * Key libraries (e.g., GCC, OpenMPI, Python, R, etc.) and applications have been rebuilt or upgraded for RHEL9 compatibility.
+    * Software compiled under RHEL8 will still be available to use at your own risk by loading the older version of the foss or Intel modules. However, the default path will point to the software compiled under RHEL 9. If users intend to use the older version of the software, the path will be shared upon request.
+ * Up-to-date documentation and module lists will be provided—check the cluster’s software page or module system for specifics.
+    * The default toolchains for newer RHEL 9 applications are `foss/2025a` and `foss/2024a`.
+
+
+??? question "How do I access the new modules or software?"
+ * Use the `module avail` and `module load` commands to explore new and updated software on RHEL9.
+    * Some modules or software stacks may have different names, versions, or paths; review the documentation.
+
+
+??? question "How will the upgrade affect my jobs, software, and workflows?"
+ * Jobs started on RHEL8 nodes may not continue without modification on RHEL9 nodes, especially if they rely on system libraries or modules that have changed.
+ * Paths, environment variables, or application behavior may differ. Review/retest your scripts.
+ * Newer versions of libraries and tools may impact software compatibility and performance.
+ * If you were using software installed by the admins and available via modules, those software versions might have changed, as we have built new versions of the software against the updated compilers.
+    * Users are advised to create or modify their input scripts based on the new version of the software, if required. However, if you intend to use the old version of the software (compiled under RHEL 8), please contact us and we will share the path.
+ * Please note that we will not provide any support for the old versions of the software, as they have not been tested. Use them at your own risk. We will only provide support for the new software that was built with the updated compilers for RHEL 9.
+
+
+??? question "How can I use software that I built or compiled on RHEL8?"
+ * Direct execution is not recommended: Binaries or environments built for RHEL8 may run into compatibility issues on RHEL9 (missing libraries, ABI mismatches).
+ * Best Practice: Rebuild or reinstall your software on RHEL9 when possible.
+ * If you have software installed for your group, complete this [form](https://forms.gle/6RzQE3hB93hMiong7) to request access to the test cluster.
+ * For critical cases:
+ - **Containers**: Use Singularity/Apptainer to encapsulate your RHEL8 application and its dependencies.
+ - **Compatibility Libraries**: If available, modules with compatibility libraries for RHEL8 runtime can be loaded (contact support for availability).
+ - **Long-running jobs**: Jobs checkpointed under RHEL8 should ideally resume on RHEL8, or require validation/testing if resuming on RHEL9.
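+
+    A minimal sketch of the container route mentioned above, assuming Apptainer is available on the cluster (the image, binary name, and arguments are placeholders):
+
+    ```bash
+    # Pull a RHEL8-compatible base image (Rocky Linux 8 shown as a stand-in)
+    apptainer pull rocky8.sif docker://rockylinux:8
+
+    # Run a binary built under RHEL8 inside the container
+    apptainer exec rocky8.sif ./my_rhel8_app --input data.in
+    ```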
+
+
+??? question "What about jobs that were running for a long time and need to restart after upgrade?"
+ * Checkpoint/Restart: If using application-level checkpointing, confirm that your checkpoint files are forward-compatible with the new software/libraries.
+ * Software upgrades may invalidate previous job states or results. Test recovery procedures in a development environment before production.
+ * For critical, long-running simulations or legacy applications dependent on the RHEL8 stack, consult with the cluster support team about feasible solutions like containers or legacy compatibility modules.
+
+
+### Quick tips for a smooth transition
+
+- **Test**: Validate scripts and workflows on RHEL9 nodes before large-scale jobs.
+- **Rebuild**: Prefer fresh builds/reinstalls of your applications and conda environments.
+- **Stay Informed**: Monitor cluster announcements, documentation, and user forums for updates.
+
+
diff --git a/docs/index.md b/docs/index.md
index 26b95e0a1..95808a89d 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -1,23 +1,94 @@
-# HPC
+---
+hide:
+ - toc
+---
-Welcome to HPC at NJIT.
+# High Performance Computing (HPC)
-NJIT provides High Performance Computing resources to support scientific computing for faculty and students. These resources include CPU nodes, GPU nodes, parallel storage, high speed, low latency Infiniband networking and a fully optimized scientific software stack.
+Welcome to HPC at [New Jersey Institute of Technology (NJIT)](https://www.njit.edu).
+
+- :octicons-info-24: __NJIT__ provides High Performance Computing resources to support scientific computing for faculty and students. These resources include CPU nodes, GPU nodes, parallel storage, high speed, low latency Infiniband networking and a fully optimized scientific software stack.
+
+- :material-server: __Click [here](clusters/index.md#virtual-tour-of-njit-data-center) for a virtual tour of the Data Center!__
+{ width="300" loading=lazy }
+
+
+## HPC latest News!
+---
+
+
+- :material-folder-wrench:{ .lg .middle } __Wulver Scheduled Maintenance__
+
+ ---
+ Wulver will be out of service for maintenance once a month for updates, repairs, and upgrades. The schedule is 9 a.m. to 9 p.m. the second Tuesday of every month. During the maintenance period, the logins will be disabled and the jobs that do not end before the maintenance window begins will be held until the maintenance is complete and the cluster is returned to production.
+
For example, if you submit a job the day before maintenance, your job will enter a pending state (you will see job status `PD` when using `squeue -u $LOGNAME`). You can either adjust the walltime or wait until maintenance ends. Please stay informed about maintenance updates at [Cluster Maintenance Updates and News](news/index.md).
+
+- :fontawesome-solid-door-open:{ .lg .middle } __Open Office Hours__
+
+ ---
+    This spring semester, we are offering drop-in office hours every Monday and Wednesday from 2:00 to 4:00 p.m., starting January 21, in a **new location**. Stop by to meet with our student consultants and ask any questions you have about using HPC resources. There's no need to create a ticket in advance; if follow-up is needed, the student consultants will open a ticket on your behalf, and you'll receive further instructions.
+
+ - Date: **Every Monday and Wednesday**
+ - Location: **GITC 5302N**
+ - Time: **2:00 PM - 4:00 PM**
+
+
+
+- :fontawesome-solid-calendar:{ .lg .middle } __HPC Spring Events, 2026__
+
+ ---
+ Check out our event schedule for spring [here](HPC_Events_and_Workshops/index.md)! If you have suggestions for webinar topics, please feel free to contact us at [hpc@njit.edu](mailto:hpc@njit.edu).
-!!! Info "Cluster Maintenance Updates"
- To see the latest maintenance updates on NJIT cluster, please visit [Cluster Maintenance](news/index.md).
+- :fontawesome-solid-users:{ .lg .middle } __Monthly HPC User Meeting__
+
+ ---
+ We are currently offering a new monthly event for HPC researchers at NJIT: **The HPC Monthly User Meeting**. This event is open to all NJIT students, faculty, and staff who use or are interested in NJIT's HPC resources and services. No prior registration is required.
+
+    * Date: Check the [spring schedule](HPC_Events_and_Workshops/index.md).
+
+
+
+
+## HPC Highlights!
+---
+
+
+:octicons-law-24: __Policies__
+
See our updated [Policies](Policies/index.md) for cluster resource allocation and investment.
+{ .card }
+
+:fontawesome-solid-rocket: __Getting started with Wulver__
+
If you are new to Wulver, and want to know how to get started, visit [Wulver Quickstart](get_started_on_Wulver.md).
+{ .card }
+
+:material-bullhorn: __Cluster Updates__
+
To see the latest updates on NJIT cluster, please visit [Cluster Maintenance Updates and News](news/index.md).
+{ .card }
+
+:material-layers: __Software Modules__
See [Software Modules](Software/index.md) for list of software packages installed on our cluster.
+{ .card }
-## Current clusters
+:material-console: __Running jobs on Wulver__
+
 See [Running jobs](Running_jobs/index.md) for different partitions, QoS, and sample job scripts.
+{ .card }
-* [Lochness](lochness.md)
-
+:material-calendar: __HPC Events__
+
Check the upcoming events hosted by HPC team and register from [HPC Events](HPC_Events_and_Workshops/index.md).
+{ .card }
-## Incoming cluster
+:fontawesome-solid-chalkboard-user: __HPC Trainings and Webinars__
+
Make sure to stay up-to-date with the latest webinar sessions on HPC by visiting the [HPC Training](HPC_Events_and_Workshops/Workshop_and_Training_Videos/index.md).
+{ .card }
-* [Wulver](wulver.md)
+:material-frequently-asked-questions: __FAQs__
+
 For any queries regarding the usage of our cluster, please visit the [FAQs](faq.md), which are organized by topic.
+{ .card }
-Please direct all HPC related issues to [hpc@njit.edu](mailto:hpc@njit.edu).
+:material-email: __Contact Us__
+
 To create a ticket or request software installation, visit [Contact Us](contact.md).
+{ .card }
+
\ No newline at end of file
diff --git a/docs/news/index.md b/docs/news/index.md
index 77fee312e..dc5a9e433 100644
--- a/docs/news/index.md
+++ b/docs/news/index.md
@@ -1,2 +1,5 @@
+---
+title: News and Announcements
+---
# Cluster Maintenance Updates and News
diff --git a/docs/news/posts/2024-01-22.md b/docs/news/posts/2024-01-22.md
new file mode 100644
index 000000000..1404c4ad0
--- /dev/null
+++ b/docs/news/posts/2024-01-22.md
@@ -0,0 +1,12 @@
+---
+date: 2024-01-22
+---
+
+# Wulver Maintenance
+
+!!! important "Wulver Monthly Maintenance"
+
+
+Beginning Feb 1, 2024, ARCS HPC will be instituting a monthly maintenance downtime on all HPC systems on the second Tuesday from 9AM - 9PM. Wulver and the associated GPFS storage will be taken out of service for maintenance, repairs, patches and upgrades. During the maintenance downtime, logins will be disabled, users will not have access to their stored data in `/project`, `/home` and `/scratch`. All jobs that do not end before 9AM will be held by the scheduler until the downtime is complete and the systems are returned to service.
+
+We anticipate maintenance to be completed by the scheduled time. However, occasionally the maintenance may be completed earlier than scheduled or could be extended to the following days. A notification will be sent to the user mailing list when the systems are returned to service or the maintenance window is extended. Additionally, users will encounter the cluster service information upon logging in to Wulver during maintenance. Please pay attention to the Message of the Day when logging in, as it will serve as a reminder for upcoming downtimes or other crucial cluster-related information. Users should take into account the maintenance window when scheduling jobs and developing plans to meet various deadlines. Please do not contact the help desk, HPC staff or open SNOW tickets for access to the cluster or data during the maintenance downtime.
\ No newline at end of file
diff --git a/docs/news/posts/2024-06-18.md b/docs/news/posts/2024-06-18.md
new file mode 100644
index 000000000..af090cc4c
--- /dev/null
+++ b/docs/news/posts/2024-06-18.md
@@ -0,0 +1,43 @@
+---
+date: 2024-06-18
+---
+
+# HPC Summer Events
+
+ARCS HPC invites you to our upcoming events. Please register for the events you plan to attend.
+
+
+## NVIDIA Workshop — Fundamentals of Accelerated Data Science
+!!! important "Save the Date"
+
+ - Date: July 15, 2024
+ - Location: GITC 3700
+ - Time: 9 a.m. - 5 p.m.
+
+Learn to use GPU-accelerated resources to analyze data. This is an intermediate level workshop that is intended for those who have some familiarity with Python, especially NumPy and SciPy libraries. See more detail about the workshop [here](https://www.nvidia.com/content/dam/en-zz/Solutions/deep-learning/deep-learning-education/DLI-Workshop-Fundamentals-of-Accelerated-Data-Science-with-RAPIDS.pdf).
+
+[Registration](https://forms.gle/NhtvEUiY2st3eQoT6)
+
+
+## HPC Research Symposium
+!!! important "Save the Date"
+
+ - Date: July 16, 2024
+ - Location: Student Center Atrium
+ - Time: 9 a.m. - 5 p.m.
+
+This past year has been transformative for HPC research at NJIT. The introduction of our new shared HPC cluster, Wulver, has expanded our computational capacity and made research in vital areas more accessible to our faculty. Please join us to highlight the work of researchers using the HPC resources and connect with the NJIT HPC community.
+
+Please register for the symposium [here](https://forms.gle/NhtvEUiY2st3eQoT6); you can also sign up to present your HPC research as a lightning talk or poster presentation.
+
+
+## SLURM Workload Manager Workshop
+!!! important "Save the Date"
+
+ - Date: August 13 & 14, 2024
+ - Location: GITC 3700
+ - Time: 9 a.m. - 5 p.m.
+
+This immersive 2-day experience will take you through comprehensive technical scenarios with lectures, demos, and workshop lab environments. The Slurm trainer will assist in identifying commonalities between previously used resources and schedulers, offering increased understanding and adoption of [SLURM](slurm.md) job scheduling, resource management, and troubleshooting techniques.
+
+Registration is now closed.
diff --git a/docs/news/posts/2024-09-11.md b/docs/news/posts/2024-09-11.md
new file mode 100644
index 000000000..2e93ad76f
--- /dev/null
+++ b/docs/news/posts/2024-09-11.md
@@ -0,0 +1,43 @@
+---
+date: 2024-09-11
+---
+
+# HPC Fall Events
+
+ARCS HPC invites you to our upcoming events. Please register for the events you plan to attend.
+
+
+## SLURM Batch System Basics
+!!! important "Save the Date"
+
+ - Date: Sep 18th, 2024
+ - Location: Virtual
+ - Time: 2:30 PM - 3:30 PM
+
+Join us for an informative webinar designed to introduce researchers, scientists, and HPC users to the fundamentals of the SLURM (Simple Linux Utility for Resource Management) workload manager. This virtual session will equip you with essential skills to effectively utilize HPC resources through SLURM.
+
+Registration is now closed. Check the [HPC training](../../HPC_Events_and_Workshops/Workshop_and_Training_Videos/index.md#slurm-batch-system-basics) for the webinar recording and slides.
+
+
+## Introduction to Containers on Wulver
+!!! important "Save the Date"
+
+ - Date: Oct 16th, 2024
+ - Location: Virtual
+ - Time: 2:30 PM - 3:30 PM
+
+The HPC training event on using Singularity containers provides participants with a comprehensive introduction to container technology and its advantages in high-performance computing environments. Attendees will learn the fundamentals of Singularity, including installation, basic commands, and workflow, as well as how to create and build containers using definition files and existing Docker images.
+
+Registration is now closed. Check the [HPC training](../../HPC_Events_and_Workshops/Workshop_and_Training_Videos/index.md#introduction-to-containers-on-wulver) for the webinar recording and slides.
+
+
+## Job Arrays and Advanced Submission Techniques for HPC
+!!! important "Save the Date"
+
+ - Date: Nov 20th, 2024
+ - Location: Virtual
+ - Time: 2:30 PM - 3:30 PM
+
+Elevate your High-Performance Computing skills with our advanced SLURM webinar! This session is designed for HPC users who are familiar with basic SLURM commands and are ready to dive into more sophisticated job management techniques.
+
+Registration is now closed. Check the [HPC training](../../HPC_Events_and_Workshops/Workshop_and_Training_Videos/index.md#job-arrays-and-advanced-submission-techniques-for-hpc) for the webinar recording and slides.
diff --git a/docs/news/posts/2025-01-14.md b/docs/news/posts/2025-01-14.md
new file mode 100644
index 000000000..6c3a46974
--- /dev/null
+++ b/docs/news/posts/2025-01-14.md
@@ -0,0 +1,31 @@
+---
+date: 2025-01-14
+---
+
+# HPC 2025 Spring Events
+
+ARCS HPC invites you to our upcoming events. Please register for the events you plan to attend.
+
+
+## Introduction to Wulver: Getting Started
+!!! njit "Save the Date"
+
+ - Date: Jan 22nd 2025
+ - Location: Virtual
+ - Time: 2:30 PM - 3:30 PM
+
+Join us for an informative webinar designed to introduce NJIT's HPC environment, Wulver. This virtual session will provide essential information about the Wulver cluster, how to get an account, and allocation details.
+
+Registration is now closed.
+
+
+## Introduction to Wulver: Accessing System & Running Jobs
+!!! njit "Save the Date"
+
+ - Date: Jan 29th 2025
+ - Location: Virtual
+ - Time: 2:30 PM - 3:30 PM
+
+This HPC training event focuses on providing the fundamentals of SLURM (Simple Linux Utility for Resource Management), a workload manager. This virtual session will equip you with the essential skills needed to effectively utilize HPC resources using SLURM.
+
+Registration is now closed.
diff --git a/docs/news/posts/2025-07-29.md b/docs/news/posts/2025-07-29.md
new file mode 100644
index 000000000..d3d98a45a
--- /dev/null
+++ b/docs/news/posts/2025-07-29.md
@@ -0,0 +1,15 @@
+---
+date: 2025-07-29
+---
+
+# Wulver Maintenance
+
+Wulver will be out of service on Tuesday, September 9th, for OS and SLURM updates.
+
+## Maintenance Plans
+* Upgrade the OS from RHEL 8 to RHEL 9: this will resolve the glibc error users encounter when compiling the latest packages. For details, see the [FAQ](rhel_9_faq.md).
+* Upgrade SLURM version.
+* Implement new SU calculation: This will allow users to consume fewer SUs when using a single GPU instead of all 4 GPUs on a node.
+* Implement [MIG](../../MIG/index.md).
+* Add Lochness nodes for course-related usage.
+
diff --git a/docs/news/posts/2025-08-22.md b/docs/news/posts/2025-08-22.md
new file mode 100644
index 000000000..d626b52d9
--- /dev/null
+++ b/docs/news/posts/2025-08-22.md
@@ -0,0 +1,20 @@
+---
+date: 2025-08-22
+---
+
+# Wulver Outage
+
+As part of NJIT's migration from VMware to a new Virtual Machine (VM) platform, Nutanix, Wulver will undergo an unplanned but required downtime to migrate the critical virtual infrastructure hosting the head, login, Open OnDemand, and Slurm nodes, starting at 8:00 AM on Friday, August 29. The cluster will be unavailable until all migration work is completed.
+
+* Expected duration: Up to 12 hours (work may finish sooner)
+* Reason: Migration to the Nutanix VM platform
+
+## Important Information:
+
+* Any jobs submitted before the outage that would not finish in time will be held in the queue and will resume after the cluster is back online. Please plan your usage and submissions accordingly.
+* There is a minor risk that **queued jobs will be lost** during migration. We will monitor this and inform affected users, if necessary.
+* Updates will be provided if there is any change to the expected downtime window.
+
+
+We apologize for any inconvenience and appreciate your understanding as we make this important upgrade.
+
diff --git a/docs/news/posts/2025-08-29.md b/docs/news/posts/2025-08-29.md
new file mode 100644
index 000000000..5c0c179fc
--- /dev/null
+++ b/docs/news/posts/2025-08-29.md
@@ -0,0 +1,39 @@
+---
+date: 2025-08-29
+---
+
+# MIG GPU Testing on Wulver Now Available
+
+We’re excited to announce that MIG-enabled GPUs are now available on Wulver for workflow testing!
+
+We currently have 4 GPUs configured with MIG (Multi-Instance GPU) profiles as follows:
+
+* 40 GB profile – 1 MIG instance per GPU
+* 20 GB profile – 1 MIG instance per GPU
+* 10 GB profile – 2 MIG instances per GPU
+
+This effectively allows the 4 physical GPUs to function as 16 GPUs of varying memory sizes. These MIG instances allow you to run multiple workloads in parallel with dedicated GPU resources, improving efficiency for smaller jobs and testing scenarios.
+
+???+ question "Who can use them?"
+
+    All Wulver users are welcome to test their workflows on these new MIG profiles. When using the `debug_gpu` partition, no Service Units (SUs) will be charged for these jobs. Modify your batch scripts to include these directives:
+
+ ```slurm
+ #SBATCH --partition=debug_gpu
+ #SBATCH --qos=debug
+ #SBATCH --gres=gpu:a100_10g:1 # Change to 20g or 40g as needed
+ #SBATCH --time=59:00 # Debug_gpu partition has a 12 hour walltime limit
+ ```
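+
+    A quick way to confirm which MIG slice a job received is to list the visible devices from inside the job (a small check you can add to your script):
+
+    ```bash
+    nvidia-smi -L    # lists the GPU and the MIG device UUID assigned to your job
+    ```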
+
+
+???+ question "What should you do?"
+
+ * Read through the MIG documentation on the HPC website.
+ * Test your GPU-enabled workflows with these MIG resources.
+ * Verify that your job scripts and containers handle MIG devices correctly.
+ * Share feedback on performance and any issues you encounter.
+
+This is a testing phase, so configurations may change based on usage and feedback. For more details, check the [MIG documentation](../../MIG/index.md).
+
+
+
diff --git a/docs/news/posts/2025-10-04.md b/docs/news/posts/2025-10-04.md
new file mode 100644
index 000000000..d56263b38
--- /dev/null
+++ b/docs/news/posts/2025-10-04.md
@@ -0,0 +1,13 @@
+---
+date: 2025-10-04
+---
+
+# Office Hours
+
+We currently offer drop-in office hours every Tuesday and Friday from 2:00–4:00 p.m. Stop by to meet with our student consultants and ask any questions you have about using HPC resources. Whether you’re just getting started or need help with a specific issue, feel free to bring your laptop to walk us through any problems you're facing. There's no need to create a ticket in advance; if follow-up is needed, the student consultants will open a ticket on your behalf, and you'll receive further instructions.
+
+!!! important "Consulting Hours"
+
+ - Date: Every Tuesday and Friday
+ - Location: GITC 2404
+ - Time: 2:00 PM - 4:00 PM
diff --git a/docs/news/posts/2026-01-07.md b/docs/news/posts/2026-01-07.md
new file mode 100644
index 000000000..4704dab5b
--- /dev/null
+++ b/docs/news/posts/2026-01-07.md
@@ -0,0 +1,13 @@
+---
+date: 2026-01-07
+---
+
+# Office Hours for 2026 Spring
+
+For this spring, we offer drop-in office hours every Monday and Wednesday from 2:00–4:00 p.m. Stop by to meet with our student consultants and ask any questions you have about using HPC resources. Whether you’re just getting started or need help with a specific issue, feel free to bring your laptop to walk us through any problems you're facing. There's no need to create a ticket in advance; if follow-up is needed, the student consultants will open a ticket on your behalf, and you'll receive further instructions.
+
+!!! important "Consulting Hours"
+
+ - Date: Every Monday and Wednesday
+ - Location: GITC 5302N
+ - Time: 2:00 PM - 4:00 PM
diff --git a/docs/stylesheets/extra.css b/docs/stylesheets/extra.css
index f81c425ef..d344379ca 100644
--- a/docs/stylesheets/extra.css
+++ b/docs/stylesheets/extra.css
@@ -1,74 +1,78 @@
-#:root {
-# --md-default-fg-color: rgb(0,0,0);
-# --md-typeset-mark-color: rgb(0,0,0);
-# --md-code-hl-color: hsla(#{hex2hsl($clr-blue-a200)}, 0.15);
-
-
-# --md-primary-fg-color: #d8272d;
-# --md-primary-fg-color--light: #d8272d;
-# --md-primary-fg-color--dark: rgb(255,255,0);
-# --md-footer-fg-color: #ffffff;
-# --md-footer-bg-color: #d8272d;
-# Not sure why this works in light mode when it specifies dark
-# --md-footer-bg-color--dark: #d8272d;
-# --md-footer--fg-color--transparent: rgb(0,0,29,0.25);
-# --md-footer--fg-color--dark: indigo;
-
- /* Accent color shades */
-
-# --md-accent-fg-color: #d8272d;
-# --md-accent-fg-color--transparent: rgb(0,0,29,0.25);
-
-# --md-accent-fg-color--dark: indigo;
-
-# --md-accent-bg-color: #d8272d;
-# --md-accent-bg-color--light: rgb(0,0,235,0.5);
-#}
+:root {
+ --md-primary-fg-color: #AB0520;
+ --md-accent-fg-color: #EF4056;
+ --md-typeset-a-color: #d8272d;
+ --md-typeset-mark-color: #d8272d;
+}
[data-md-color-scheme="njit"] {
--md-primary-fg-color: #d8272d;
--md-primary-fg-color--light: #d8272d;
-
- /* Code highlighting color shades
- --md-code-hl-color: hsla(#{hex2hsl($clr-red-a200)}, 1);
- --md-code-hl-color--light: hsla(#{hex2hsl($clr-red-a200)}, 0.1);
- --md-code-hl-number-color: hsla(0, 67%, 50%, 1);
- --md-code-hl-special-color: hsla(340, 83%, 47%, 1);
- --md-code-hl-function-color: hsla(291, 45%, 50%, 1);
- --md-code-hl-constant-color: hsla(250, 63%, 60%, 1);
- --md-code-hl-keyword-color: hsla(219, 54%, 51%, 1);
- --md-code-hl-string-color: hsla(150, 63%, 30%, 1);
- --md-code-hl-name-color: var(--md-code-fg-color);
- --md-code-hl-operator-color: var(--md-default-fg-color--light);
- --md-code-hl-punctuation-color: var(--md-default-fg-color--light);
- --md-code-hl-comment-color: var(--md-default-fg-color--light);
- --md-code-hl-generic-color: var(--md-default-fg-color--light);
- --md-code-hl-variable-color: var(--md-default-fg-color--light);
- */
-
+ --md-admonition-icon--njit: url('../img/Logo_NJIT.png');
--md-typeset-mark-color: #d8272d;
- --md-default-fg-color: rgb(0,0,0);
-
- --md-footer-fg-color: #d8272d;
--md-footer-bg-color: #d8272d;
-
+ --md-footer-fg-color: #d8272d;
--md-accent-fg-color: #d8272d;
+ --md-typeset-a-color: #d8272d;
+ --md-primary-fg-color--dark: #90030C;
}
[data-md-color-scheme="slate"] {
--md-primary-fg-color: #d8272d;
- --md-primary-fg-color--light: #d8272d;
- --md-code-hl-color: hsla(#{hex2hsl($clr-blue-a200)}, 0.15);
- --md-typeset-mark-color: rgb(0,0,0);
+ --md-admonition-icon--njit: url('../img/Logo_NJIT.png');
--md-accent-fg-color: #d8272d;
- --md-hue: 360;
+ --md-hue: 100;
+ --md-typeset-a-color: #d8272d;
+ --md-typeset-mark-color: #d8272d;
}
-
.md-grid {
max-width: initial;
+ width: 90%;
+ margin: 0 auto;
+}
+.grid.cards > ul > li, .md-typeset .grid > .card {
+ background-color: rgba(84, 15, 17, 0.05);
}
+.md-typeset .admonition.njit,
+.md-typeset details.njit {
+ border-color: rgb(84, 15, 17);
+}
+.md-typeset .njit > .admonition-title,
+.md-typeset .njit > summary {
+ background-color: rgba(84, 15, 17, 0.1);
+}
+.md-typeset .njit > .admonition-title::before,
+.md-typeset .njit > summary::before {
+ background-color: #d8272d;
+ -webkit-mask-image: var(--md-admonition-icon--njit);
+ mask-image: var(--md-admonition-icon--njit);
+}
+.md-typeset .admonition.question,
+.md-typeset details.question {
+ border-color: rgb(84, 15, 17);
+}
+.md-typeset .question > .admonition-title,
+.md-typeset .question > summary {
+ background-color: rgba(84, 15, 17, 0.1);
+}
+
+.md-typeset .question > .admonition-title::before,
+.md-typeset .question > summary::before {
+ background-color: #d8272d;
+}
+
+.octicon--arrow-right-24 {
+ display: inline-block;
+ width: 1.2rem;
+ height: 1.2rem;
+ background-repeat: no-repeat;
+ background-size: 100% 100%;
+ background-image: url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 24 24'%3E%3Cpath fill='%23d8272d' d='M13.22 19.03a.75.75 0 0 1 0-1.06L18.19 13H3.75a.75.75 0 0 1 0-1.5h14.44l-4.97-4.97a.749.749 0 0 1 .326-1.275a.75.75 0 0 1 .734.215l6.25 6.25a.75.75 0 0 1 0 1.06l-6.25 6.25a.75.75 0 0 1-1.06 0'/%3E%3C/svg%3E");
+ vertical-align: -0.4em;
+}
diff --git a/docs/website_version.txt b/docs/website_version.txt
new file mode 100644
index 000000000..05ad491ce
--- /dev/null
+++ b/docs/website_version.txt
@@ -0,0 +1,38 @@
+Update approved 06.13.2024 1:27PM KC
+Update approved 06.19.2024 KC
+Update approved 06.27.2024 KC
+Update approved 07.10.2024 KC
+Update approved 07.12.2024 KC
+Update approved 07.14.2024 KC
+Update approved 07.16.2024 KC
+Update approved 07.30.2024 GW
+Update approved 07.30.2024 AP
+Update approved 09.12.2024 KC
+Update approved 09.13.2024 KC
+Update approved 09.18.2024 KC
+Update approved 10.11.2024 KC
+Update approved 11.06.2024 KC
+Update approved 12.23.2024 KC
+Update approved 01.16.2025 KC
+Update approved 01.31.2025 KC
+Update approved 02.20.2025 KC
+Update approved 03.06.2025 KC
+Update approved 04.03.2025 KC
+Update approved 04.22.2025 KC
+Update approved 04.30.2025 KC
+Update approved 06.05.2025 KC
+Update approved 07.09.2025 KC
+Update approved 08.05.2025 KC
+Update approved 08.18.2025 KC
+Update approved 08.22.2025 KC
+Update approved 09.03.2025 KC
+Update approved 09.10.2025 KC
+Update approved 09.17.2025 KC
+Update approved 09.24.2025 KC
+Update approved 10.06.2025 KC
+Update approved 10.08.2025 KC
+Update approved 10.10.2025 KC
+Update approved 11.05.2025 KC
+Update approved 11.25.2025 KC
+Update approved 12.11.2025 KC
+Update approved 01.14.2026 KC
diff --git a/mkdocs.yml b/mkdocs.yml
index 0eda3c77d..22a117f4b 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -1,62 +1,132 @@
-site_name: NJIT HPC
+site_name: NJIT HPC Documentation
site_url: https://arcs-njit-edu.github.io/Docs/
repo_url: https://github.com/arcs-njit-edu/Docs
-repo_name: arcs-njit-edu/Docs
+#repo_name: arcs-njit-edu/Docs
+
nav:
- - Home: index.md
+ - NJIT HPC: index.md
- 'About Us':
- 'about/index.md'
- 'Contact Us': 'about/contact.md'
- Clusters:
+ - 'clusters/index.md'
- 'Wulver': 'clusters/wulver.md'
- - 'Lochness': 'clusters/lochness.md'
- 'Decommissioned':
- 'Stheno': 'clusters/decommissioned/stheno.md'
+ - 'Lochness': 'clusters/decommissioned/lochness.md'
- 'User access to clusters': 'clusters/cluster_access.md'
- 'Getting started on Wulver': 'clusters/get_started_on_Wulver.md'
- - 'Policies':
- - 'Lochness user policies': 'Policies/lochness_policies.md'
- - 'Wulver usage and Condo policies': 'Policies/wulver_policies.md'
+ - 'Wulver Filesystems': 'clusters/Wulver_filesystems.md'
+ - 'Policies':
+ - 'Policies/index.md'
+ - 'Wulver policies': 'Policies/wulver_policies.md'
+ - 'Condo policies': 'Policies/condo_policies.md'
- 'Facilities': 'facilities/facilities.md'
- - ...
+ - 'Services': 'Services/hpc-services.md'
+ - Status (on-campus or VPN):
+ - 'Current Load': 'https://hpc.njit.edu/Monitor/load.html'
+ - 'Usage Summary': 'https://hpc.njit.edu/Monitor/summary.html'
+ - 'Usage (this year so far)': 'https://hpc.njit.edu/Monitor/cy2026.html'
+ - HPC Events and Workshops : HPC_Events_and_Workshops/
+ - Running Jobs:
+ - Overview : Running_jobs/index.md
+ - Nodes and Memory: Running_jobs/node-memory-config.md
+ - Service Units (SU): Running_jobs/service-units.md
+ - Batch Jobs: Running_jobs/batch-jobs.md
+ - Interactive Jobs: Running_jobs/interactive-jobs.md
+ - Managing and Monitoring Jobs: Running_jobs/managing-jobs.md
+ - Job Limitations: Running_jobs/job_limitation.md
+ - Array Jobs: Running_jobs/array-jobs.md
+ - Dependency Jobs: Running_jobs/dependency-jobs.md
+ - Checkpointing: Running_jobs/checkpointing.md
+ - Common Problems and Misconceptions: Running_jobs/problems-and-misconceptions.md
+ - Jobs on OnDemand: Running_jobs/ondemand-jobs.md
+ - Software : Software/
+ - MIG :
+ - MIG Overview: MIG/index.md
+ - Profile Comparison: MIG/profile-comparison.md
+ - Job Submission and SU Charges: MIG/job-submission-and-su-charges.md
+ - Performance Testing: MIG/performance_testing.md
+ - OnDemand :
+ - OnDemand Overview: OnDemand/index.md
+ - Interactive Apps: OnDemand/Interactive_Apps/
+ - 'Files': 'OnDemand/2_files.md'
+ - 'Clusters': 'OnDemand/3_clusters.md'
+ - 'Tools': 'OnDemand/4_tools.md'
+ - 'My Interactive Sessions': 'OnDemand/5_My_Interactive_Sessions.md'
+ - 'Jobs': 'OnDemand/6_jobs.md'
+ - HPC for Courses :
+ - Overview: Courses/index.md
+ - Course Configuration: Courses/course-resource-config.md
+ - Submitting Jobs: Courses/course-job-submission.md
+ - 'RHEL 9 Upgrade FAQs' : 'faq/rhel_9_faq.md'
+ - 'FAQs' : 'faq/faq.md'
+ - 'News and Announcements' : news/index.md
+ # - '' : 'website_version.txt'
+ # - '' : 'archived/'
+
theme:
name: material
logo: img/Logo_NJIT.png
+ custom_dir: overrides
favicon: favicon.ico
palette:
+ # Palette toggle for automatic mode
+ #- media: "(prefers-color-scheme)"
+ # toggle:
+ # icon: material/brightness-auto
+ # name: Switch to light mode
+
# Palette toggle for light mode
- - scheme: njit #rgb(204,0,0)
- toggle:
- icon: material/toggle-switch
- name: Switch to dark mode
+ #- media: "(prefers-color-scheme: light)"
+ scheme: njit
+ # toggle:
+ # icon: material/toggle-switch
+ # name: Switch to dark mode
# Palette toggle for dark mode
- - scheme: slate
- toggle:
- icon: material/toggle-switch-off-outline
- name: Switch to light mode
+ #- media: "(prefers-color-scheme: dark)"
+ # scheme: slate
+ # toggle:
+ # icon: material/toggle-switch-off-outline
+ # name: Switch to light mode
features:
- navigation.instant
+ - navigation.instant.prefetch
- navigation.tracking
+ #- navigation.tabs
- navigation.tabs.sticky
- navigation.path
- navigation.indexes
- navigation.top
+ - navigation.footer
- toc.follow
- content.code.copy
- content.code.select
- content.code.annotate
- content.tabs.link
+ - announce.dismiss
+ - content.tooltips
+ - header.autohide
+
extra_css:
- stylesheets/extra.css
extra:
social:
- - icon: fontawesome/brands/github
- link: https://github.com/arcs-njit-edu/Docs
+ - icon: material/help-box-multiple
+ link: https://nexus.njit.edu/highlander_nexus?id=sc_cat_item&sys_id=3f1dd0320a0a0b99000a53f7604a2ef9
+ name: Contact HPC support
- icon: fontawesome/solid/paper-plane
link: mailto:hpc@njit.edu
+ name: Email HPC support
+ - icon: material/web
+ link: https://ondemand.njit.edu
+ name: Open OnDemand
+ - icon: material/chart-areaspline
+ link: https://hpc.njit.edu/Monitor/summary.html
+ name: HPC job metrics
# Extensions
markdown_extensions:
@@ -76,7 +146,7 @@ markdown_extensions:
- pymdownx.critic
- pymdownx.emoji:
emoji_index: !!python/name:material.extensions.emoji.twemoji
- emoji_generator: !!python/name:pymdownx.emoji.to_svg
+ emoji_generator: !!python/name:material.extensions.emoji.to_svg
- pymdownx.inlinehilite
- pymdownx.magiclink
- pymdownx.mark
@@ -101,8 +171,12 @@ plugins:
- table-reader
- mkdocstrings
- awesome-pages
+ - literate-nav
- autolinks
- blog:
blog_dir: news
- authors: false
- archive: false
+ #authors: false
+ #archive: false
+ - tags
+
+copyright: New Jersey Institute of Technology High Performance Computing © 2026
\ No newline at end of file
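Note on the nav changes above: the directory-style entries (e.g. `Software : Software/`, `HPC Events and Workshops : HPC_Events_and_Workshops/`) are resolved by the newly added `literate-nav` plugin, which expands a trailing-slash entry using that directory's own nav file (`SUMMARY.md` by default), falling back to inferring the order from the files present. A minimal sketch of the pieces involved; the explicit `nav_file` option is only the assumed default spelled out for clarity, not part of this diff:

```yaml
# mkdocs.yml (fragment) — how directory-style nav entries are expanded
plugins:
  - awesome-pages            # ordering/titles for implicitly built sections
  - literate-nav:
      nav_file: SUMMARY.md   # assumed default; each directory listed below may ship its own nav file

nav:
  - Software: Software/                                  # section nav taken from Software/
  - HPC Events and Workshops: HPC_Events_and_Workshops/  # same pattern for workshops
```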
diff --git a/overrides/404.html b/overrides/404.html
new file mode 100644
index 000000000..078241ee6
--- /dev/null
+++ b/overrides/404.html
@@ -0,0 +1,6 @@
+{% extends "main.html" %}
+
+
+{% block content %}
+ 404 - Not found
+{% endblock %}
\ No newline at end of file
diff --git a/overrides/main.html b/overrides/main.html
new file mode 100644
index 000000000..d37e3f886
--- /dev/null
+++ b/overrides/main.html
@@ -0,0 +1,39 @@
+
+{% extends "base.html" %}
+
+
+{% block htmltitle %}
+ NJIT HPC Documentation
+{% endblock %}
+
+{#
+{% block announce %}
+
+ Wulver's operating system has been upgraded to RHEL 9.
+Please check the details in the RHEL 9 OS Upgrade FAQ.
+The software packages on the new Wulver OS have also been updated; check the Software list for the currently available packages.
+
+
+{% endblock %}
+#}
+
+{% block footer %}
+
+{% endblock %}
\ No newline at end of file
diff --git a/overrides/partials/footer.html b/overrides/partials/footer.html
new file mode 100644
index 000000000..35d441942
--- /dev/null
+++ b/overrides/partials/footer.html
@@ -0,0 +1,18 @@
+
\ No newline at end of file
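The template files added under `overrides/` only take effect because of the `custom_dir: overrides` line added to the theme section of `mkdocs.yml`: Material merges that directory over its bundled templates, so any file whose path mirrors a theme template (such as `partials/footer.html`) shadows it. The fragment below is repeated from the mkdocs.yml hunk above purely for context:

```yaml
# mkdocs.yml (fragment) — enables the overrides/ templates added in this diff
theme:
  name: material
  custom_dir: overrides   # 404.html, main.html and partials/footer.html here
                          # shadow the theme templates with the same paths
```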
diff --git a/requirements.txt b/requirements.txt
index 9e8dcfa21..f63f46c1c 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -2,14 +2,20 @@
jinja2~=3.0
markdown~=3.2
mkdocs~=1.5,>=1.5.3
-mkdocs-material>= 8.1.2
+mkdocs-material>=8.1.2
mkdocs-material-extensions~=1.2
pygments~=2.16
pymdown-extensions~=10.2
+markdown-exec~=1.8.0
+PyYAML>=5.4.1
# Requirements for plugins
babel~=2.10
colorama~=0.4
paginate~=0.5
regex>=2022.4
-requests~=2.26
\ No newline at end of file
+requests~=2.26
+mkdocs-table-reader-plugin~=2.0.3
+mkdocstrings~=0.23.0
+mkdocs-awesome-pages-plugin
+mkdocs-autolinks-plugin
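
A hedged sketch of how these pinned requirements are typically consumed to build the site; the workflow file name, trigger, and Python version are illustrative assumptions, not part of this change:

```yaml
# .github/workflows/docs.yml — illustrative only, not included in this diff
name: Build docs
on: [push, pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt
      - run: mkdocs build --strict   # fail on broken config/links instead of publishing them
```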