WeeklyTelcon_20210928
- Brendan Cunningham (Cornelis Networks)
- Brian Barrett (AWS) - Welcome Back!
- David Bernholdt (ORNL)
- Geoffrey Paulsen (IBM)
- Harumi Kuno (HPE)
- Hessam Mirsadeghi (NVIDIA)
- Howard Pritchard (LANL)
- Jeff Squyres (Cisco)
- Josh Hursey (IBM)
- Matthew Dosanjh (Sandia)
- Michael Heinz (Cornelis Networks)
- Raghu Raja
- Sam Gutierrez (LANL)
- Sriraj Paul (Intel)
- Thomas Naughton (ORNL)
- Todd Kordenbrock (Sandia)
- Akshay Venkatesh (NVIDIA)
- Artem Polyakov (NVIDIA)
- Aurelien Bouteiller (UTK)
- Austen Lauria (IBM)
- Brandon Yates (Intel)
- Charles Shereda (LLNL)
- Christoph Niethammer (HLRS)
- Edgar Gabriel (UH)
- Erik Zeiske (HPE)
- Geoffroy Vallee (ARM)
- George Bosilca (UTK)
- Joseph Schuchart (HLRS)
- Joshua Ladd (NVIDIA)
- Marisa Roman (Cornelis Networks)
- Mark Allen (IBM)
- Matias Cabral (Intel)
- Nathan Hjelm (Google)
- Noah Evans (Sandia)
- Ralph Castain (Intel)
- Scott Breyer (Sandia?)
- Shintaro Iwasaki
- Tomislav Janjusic (NVIDIA)
- William Zhang (AWS)
- Xin Zhao (NVIDIA)
- Do the Fortran fixes affect the API? (i.e., needed for v5.0.0?)
- PR https://github.com/open-mpi/ompi/pull/9259
- Jeff reviewed 16 days ago, looks incomplete.
- Think that 9367 addresses the issue with 9259.
- and PR https://github.com/open-mpi/ompi/pull/9367
- Question: should we close either 9259 or 9367? Should we move them both to Draft for now and wait on the Fortran community?
- PR https://github.com/open-mpi/ompi/pull/9259
- Schedule: Pushed to October for 4.0.7
- --cpu-set - Geoff is working on a PR for a nice warning and docs.
- Fortran PRs 9259 and 9367 probably affect the v4.0.x branch as well.
- Schedule:
- Made a v4.1.2rc1 - Please TEST.
- Two outstanding:
- ROMIO update.
- --cpu-set PR from Geoff (see above)
- One more pending on v4.1.x; Jenkins had some issues that Brian is looking at.
- ROMIO 3.2.1-based PR 8371 - do we want to take this?
- v4.1.x: does this need to go back to v4.0.x?
- Issue #9432 - MPI4Py testing; see the ROMIO issue.
- Schedule: aiming for rc1 on Sept 23rd.
- George was able to verify that the BTL+OSC RDMA failures are not only at IBM.
- The v4.1.x blocker is also in v5.0.x common/OFI.
- Tommy is still pushing on UCX one-sided.
- PMIx and/or PRRTE are releasing a new minor rev that we'll pickup for v5.0.x
- Did we update yet?
- Think there are other issues than just one-sided.
- 5 issues in the list; only 2 are one-sided.
- One is static linking; Austen will re-verify.
- Talk about gcc v4.4.7 and RHEL6
- PMIx and PRRTE just don't compile on RHEL6; because of this, do we even care about RHEL6, specifically gcc v4.4.7?
- RHEL7's gcc v4.8.5 works fine.
- Not interested in testing all of those gcc versions.
- Jeff will post a Pull Request.
- Will officially truncate support as well.
- No issues with glibc, so no hard check. (A sketch of the kind of compiler-version guard discussed here follows below.)
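As a rough illustration of the compiler-support question above, here is a minimal, hypothetical sketch of a configure-time version guard. The real check would live in Open MPI's configure/m4 machinery, and the minimum version shown (RHEL7's gcc 4.8.5) is only illustrative.

```sh
# Hypothetical illustration only: reject a gcc older than a chosen minimum.
# The minimum version here is just the RHEL7 example from the notes.
min=4.8.5
cur=$(gcc -dumpversion)
if [ "$(printf '%s\n' "$min" "$cur" | sort -V | head -n1)" != "$min" ]; then
    echo "configure: error: gcc >= $min is required (found $cur)" >&2
    exit 1
fi
```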
- Documentation
- Got a needed change into the Sphinx tools. Not sure if there's a release yet.
- This fixes output issues in the manpages.
- Process to update FAQ is to talk to Jeff or Harumi.
- For any changes in the README or FAQ, let them know so the changes get made in the NEW docs.
- For now, make changes in ompi-www and README as usual and let them know.
- v5.0.x requires pandoc. If a user downloads the release tarball, they do NOT need pandoc installed.
- If a user runs `make dist` or `make distcheck`, they WILL need pandoc (see the sketch below).
- This is a strange quirk, but seems fine.
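A minimal sketch of the distinction described above, assuming a typical tarball build; the tarball name, prefix, and parallelism are placeholders.

```sh
# Building from a release tarball: docs are pre-generated, so pandoc is
# NOT needed (placeholder tarball name and install prefix).
tar xf openmpi-5.0.x.tar.bz2
cd openmpi-5.0.x
./configure --prefix=$HOME/ompi-install
make -j 8 all install

# Re-creating a distribution tarball from the build tree regenerates the
# docs, so (per the notes above) this DOES require pandoc to be installed.
make dist
```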
- Problem with OFI and Open MPI
- No discussion
- GitHub Project of [critical v5.0.x issues](https://github.com/open-mpi/ompi/projects/3)
- Issue 8983 - Nathan volunteered to put out a fix.
- If we partially disable OSC over the TCP BTL - not breaking MPI compliance, just badly breaking one-sided performance.
- https://github.com/open-mpi/ompi/pull/8984
- https://github.com/open-mpi/ompi/issues/7830
- Users could fall back to using UCX or OFI, and not the BTLs (see the sketch after this list).
- But that's a different can of worms.
- Brian will take a look at issue.
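As a rough illustration of the fallback mentioned above: if Open MPI was built with UCX support, a user can steer one-sided communication away from the BTL-based path by explicitly selecting the UCX OSC component. The application name and process count are placeholders.

```sh
# Select the UCX one-sided component instead of the BTL-based osc path
# (assumes an Open MPI build with UCX support; app name is a placeholder).
mpirun --mca osc ucx -np 4 ./my_rma_app
```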
- Described approach of rc1 on Sept 23rd, disabling any functionality that is a blocker to allow for the rc.
- Worried that blockers might not be fixed in time, so will put in code to issue an error at runtime to prevent getting into those paths, and document it heavily.
- MPI_Alltoallw fix needs to go in. There is a PR from Gilles / George.
- https://github.com/open-mpi/ompi/pull/9329
- The test has been merged, but not a fix.
- https://github.com/open-mpi/ompi/pull/9330
- George thinks it's ready to go.
- Jeff will review.
- Portals bugfixes incoming.
- Todd's working on this. Hasn't posted yet. Will post this week.
- 9391
- https://github.com/open-mpi/ompi/pull/9326 should get into v5.0 too.
- This fixes a correctness issue, and George is concerned about performance.
- Is Argobots now unsupported?
- No. Our integration allows users to call MPI within a blocking Argobots function and this still works.
- The concern is a thread that blocks in libevent; because libevent isn't aware of Argobots, libevent will block the entire thread.
- George joined about this time. I think he said this was ready or that he'd re-read.
- Was accepted for Open MPI
- Our hybrid BoF will be mostly a VIRTUAL BoF.
- George may be there in person for the tutorial (though other tutorials will be fully virtual).
- The Birds of a Feather session will be virtual.
- George sent out an email to Amazon, Cisco, IBM, NVIDIA.
- Reviewed and Approved against master: https://github.com/open-mpi/ompi/pulls?q=is%3Apr+is%3Aopen+base%3Amaster+review%3Aapproved
- Awaiting Review: https://github.com/open-mpi/ompi/pulls?q=is%3Apr+is%3Aopen+base%3Amaster+review%3Anone
- Most reviewers are NOT
- No update
- Don't do the old system, use this new system for v5.0.0
- No discussion. [Open MPI 4.0 API Compliance GitHub Project](https://github.com/open-mpi/ompi/projects/2)
- Jeff's going to review PR 9246
- Howard will review 7985
- Need to decide what to do with 8057
- Sessions branch: don't want to merge into master until possibly v5.0.1 gets out.
- It will complicate things in finalize/initialize code.
- Looking okay.
- Looks like something was wrong with MTT.
- That machine just got upgraded.
- Install fail is kinda weird.
- No discussion.