
Conversation

@chensuyue (Contributor) commented on Oct 10, 2025

User description

Type of Change

documentation and validation

Description

  1. Remove 2 example tests from CI.
  2. Remove 2 UT baseline tests and coverage collection.
  3. Update the example list with 3 new examples.

Expected Behavior & Potential Risk

The expected behavior triggered by this PR.

How has this PR been tested?

how to reproduce the test (including hardware information)

Dependency Change?

any library dependency introduced or removed


PR Type

Enhancement, Documentation


Description

  • Removed 2 example tests from CI

  • Removed 2 UT baseline tests

  • Updated example list in README


Diagram Walkthrough

flowchart LR
  A["Remove CI tests"] -- "update .azure-pipelines/ut-basic.yml" --> B["Remove UT baselines"]
  B -- "update README.md" --> C["Update example list"]

File Walkthrough

Relevant files

Documentation

examples/README.md: Add new examples to README
  • Added Llama-4-Scout-17B-16E-Instruct example
  • Added Llama-3.3-70B-Instruct example with Mixed Precision
  • Added Quantization (MXFP4/MXFP8) for Llama-3.3-70B-Instruct
  +16/-1
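The two Llama-3.3-70B-Instruct entries land in one grouped region of the examples table. Below is a minimal sketch of how such a rowspan grouping reads, assuming a hypothetical Model column ahead of the cells quoted in the reviewer guide further down; the <table> wrapper, the model cell, and the column order are illustrative, and only the domain, approach, and link cells come from the quoted snippet.

<table>
  <tr>
    <!-- one model grouped over two approach rows; the model cell here is illustrative -->
    <td rowspan="2">Llama-3.3-70B-Instruct</td>
    <td rowspan="2">Natural Language Processing</td>
    <td>Mixed Precision (MXFP4+MXFP8)</td>
    <td><a href="./pytorch/nlp/huggingface_models/language-modeling/quantization/mix-precision#mix-precision-quantization-mxfp4--mxfp8">link</a></td>
  </tr>
  <tr>
    <!-- the second approach row inherits the rowspan cells above, so it lists only approach and link -->
    <td>Quantization (MXFP4/MXFP8)</td>
    <td><a href="./pytorch/nlp/huggingface_models/language-modeling/quantization/mix-precision#mxfp4--mxfp8">link</a></td>
  </tr>
</table>

In this layout the two quantization approaches share a single model entry rather than forming a true duplicate row, which may be relevant when judging the "Duplicate Entry" observation in the reviewer guide below.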
Configuration changes

.azure-pipelines/ut-basic.yml: Remove CI tests
  • Removed 2 example tests from CI pipeline
  +0/-162

Signed-off-by: chensuyue <[email protected]>
Signed-off-by: chensuyue <[email protected]>
@PRAgent4INC (Collaborator)

PR Reviewer Guide 🔍

Here are some key observations to aid the review process:

⏱️ Estimated effort to review: 2 🔵🔵⚪⚪⚪
🧪 No relevant tests
🔒 No security concerns identified
⚡ Recommended focus areas for review

Duplicate Entry

There is a duplicate entry for 'Llama-3.3-70B-Instruct' with different quantization methods. This could lead to confusion in the documentation.

    <td rowspan="2">Natural Language Processing</td>
    <td>Mixed Precision (MXFP4+MXFP8)</td>
    <td><a href="./pytorch/nlp/huggingface_models/language-modeling/quantization/mix-precision#mix-precision-quantization-mxfp4--mxfp8">link</a></td>
</tr>
<tr>
    <td>Quantization (MXFP4/MXFP8)</td>
    <td><a href="./pytorch/nlp/huggingface_models/language-modeling/quantization/mix-precision#mxfp4--mxfp8">link</a></td>
</tr>

@PRAgent4INC (Collaborator)

PR Code Suggestions ✨

Explore these optional code suggestions:

Category: Possible issue
Suggestion: Fix table alignment

Correct the table row so that it aligns with the previous rows by adding the missing column.

examples/README.md [75-77]

 <tr>
     <td>Quantization (MXFP4/MXFP8)</td>
+    <td>Natural Language Processing</td>
     <td><a href="./pytorch/nlp/huggingface_models/language-modeling/quantization/mix-precision#mxfp4--mxfp8">link</a></td>
 </tr>
Suggestion importance[1-10]: 7


Why: The suggestion correctly identifies the need to add a column to align with previous rows, improving table consistency. However, it does not address any critical issues and offers a minor improvement.

Impact: Medium
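For reference, a plain (non-diff) rendering of the row with this suggestion applied; the cell contents are exactly those from the diff above, with only the diff markers removed.

<tr>
    <td>Quantization (MXFP4/MXFP8)</td>
    <!-- domain cell added per the suggestion -->
    <td>Natural Language Processing</td>
    <td><a href="./pytorch/nlp/huggingface_models/language-modeling/quantization/mix-precision#mxfp4--mxfp8">link</a></td>
</tr>

Whether this extra domain cell or the rowspan="2" grouping quoted in the reviewer guide is the intended layout is worth confirming against the surrounding table before applying the change.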

Signed-off-by: chensuyue <[email protected]>
@chensuyue merged commit 24871ad into master on Oct 10, 2025
18 checks passed
@chensuyue deleted the suyue/update branch on October 10, 2025 at 11:33