[X86][AVX10.2] Remove YMM rounding from VCVTTP.*QS #132414

Merged · 2 commits from AVX10-SATCVT into llvm:main, Mar 21, 2025

Conversation

phoebewang (Contributor)

@llvmbot added labels on Mar 21, 2025: clang (Clang issues not falling into any other category), backend:X86, clang:frontend (Language frontend issues, e.g. anything involving "Sema"), clang:headers (Headers provided by Clang, e.g. for intrinsics), mc (Machine (object) code), llvm:ir
llvmbot (Member) commented Mar 21, 2025

@llvm/pr-subscribers-backend-x86

@llvm/pr-subscribers-llvm-ir

Author: Phoebe Wang (phoebewang)

Changes

Ref: https://cdrdv2.intel.com/v1/dl/getContent/784343


Patch is 100.56 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/132414.diff

16 Files Affected:

  • (modified) clang/include/clang/Basic/BuiltinsX86.td (+8-8)
  • (modified) clang/lib/Headers/avx10_2satcvtdsintrin.h (+48-172)
  • (modified) clang/lib/Sema/SemaX86.cpp (-8)
  • (removed) clang/test/CodeGen/X86/avx10_2satcvtds-builtins-errors.c (-57)
  • (modified) clang/test/CodeGen/X86/avx10_2satcvtds-builtins-x64.c (+12-84)
  • (modified) clang/test/CodeGen/X86/avx10_2satcvtds-builtins.c (+14-89)
  • (modified) llvm/include/llvm/IR/IntrinsicsX86.td (+24-24)
  • (modified) llvm/lib/Target/X86/X86InstrAVX10.td (-5)
  • (modified) llvm/lib/Target/X86/X86IntrinsicsInfo.h (+16-16)
  • (modified) llvm/test/CodeGen/X86/avx10_2satcvtds-intrinsics.ll (+38-38)
  • (modified) llvm/test/MC/Disassembler/X86/avx10.2-satcvtds-32.txt (-48)
  • (modified) llvm/test/MC/Disassembler/X86/avx10.2-satcvtds-64.txt (-48)
  • (modified) llvm/test/MC/X86/avx10_2satcvtds-32-att.s (-48)
  • (modified) llvm/test/MC/X86/avx10_2satcvtds-32-intel.s (-64)
  • (modified) llvm/test/MC/X86/avx10_2satcvtds-64-att.s (-48)
  • (modified) llvm/test/MC/X86/avx10_2satcvtds-64-intel.s (-64)
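As the diff below shows, the 256-bit builtins drop their trailing `_Constant int` rounding argument (and the `_round_mask` suffix), and the corresponding `_mm256_*cvtts_round*` macros are deleted, while the plain 256-bit intrinsics are kept. The following is a minimal usage sketch of the surviving API; it assumes a Clang build that contains this patch and compilation with AVX10.2 enabled (e.g. `-mavx10.2-256`), and the function and variable names are illustrative only, not part of the patch:

```c
#include <immintrin.h>

// Saturating, truncating conversion of four doubles to four 32-bit ints.
// Only the non-rounding 256-bit forms exist after this patch; the former
// _mm256_cvtts_roundpd_epi32(__A, __R) macro family has been removed.
__m128i cvtts_demo(__m256d v, __m128i src, __mmask8 k) {
  __m128i plain  = _mm256_cvtts_pd_epi32(v);               // unmasked
  __m128i merged = _mm256_mask_cvtts_pd_epi32(src, k, v);  // merge into src under mask k
  __m128i zeroed = _mm256_maskz_cvtts_pd_epi32(k, v);      // zero lanes where k is 0
  return _mm_add_epi32(plain, _mm_add_epi32(merged, zeroed));
}
```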
diff --git a/clang/include/clang/Basic/BuiltinsX86.td b/clang/include/clang/Basic/BuiltinsX86.td
index ea0d6df4a33c2..583f4534dfab2 100644
--- a/clang/include/clang/Basic/BuiltinsX86.td
+++ b/clang/include/clang/Basic/BuiltinsX86.td
@@ -4615,7 +4615,7 @@ let Features = "avx10.2-256", Attributes = [NoThrow, RequiredVectorWidth<128>] i
 }
 
 let Features = "avx10.2-256", Attributes = [NoThrow, RequiredVectorWidth<256>] in {
-  def vcvttpd2dqs256_round_mask : X86Builtin<"_Vector<4, int>(_Vector<4, double>, _Vector<4, int>, unsigned char, _Constant int)">;
+  def vcvttpd2dqs256_mask : X86Builtin<"_Vector<4, int>(_Vector<4, double>, _Vector<4, int>, unsigned char)">;
 }
 
 let Features = "avx10.2-512", Attributes = [NoThrow, RequiredVectorWidth<512>] in {
@@ -4627,7 +4627,7 @@ let Features = "avx10.2-256", Attributes = [NoThrow, RequiredVectorWidth<128>] i
 }
 
 let Features = "avx10.2-256", Attributes = [NoThrow, RequiredVectorWidth<256>] in {
-  def vcvttpd2udqs256_round_mask : X86Builtin<"_Vector<4, int>(_Vector<4, double>, _Vector<4, int>, unsigned char, _Constant int)">;
+  def vcvttpd2udqs256_mask : X86Builtin<"_Vector<4, int>(_Vector<4, double>, _Vector<4, int>, unsigned char)">;
 }
 
 let Features = "avx10.2-512", Attributes = [NoThrow, RequiredVectorWidth<512>] in {
@@ -4639,7 +4639,7 @@ let Features = "avx10.2-256", Attributes = [NoThrow, RequiredVectorWidth<128>] i
 }
 
 let Features = "avx10.2-256", Attributes = [NoThrow, RequiredVectorWidth<256>] in {
-  def vcvttpd2qqs256_round_mask : X86Builtin<"_Vector<4, long long int>(_Vector<4, double>, _Vector<4, long long int>, unsigned char, _Constant int)">;
+  def vcvttpd2qqs256_mask : X86Builtin<"_Vector<4, long long int>(_Vector<4, double>, _Vector<4, long long int>, unsigned char)">;
 }
 
 let Features = "avx10.2-512", Attributes = [NoThrow, RequiredVectorWidth<512>] in {
@@ -4651,7 +4651,7 @@ let Features = "avx10.2-256", Attributes = [NoThrow, RequiredVectorWidth<128>] i
 }
 
 let Features = "avx10.2-256", Attributes = [NoThrow, RequiredVectorWidth<256>] in {
-  def vcvttpd2uqqs256_round_mask : X86Builtin<"_Vector<4, long long int>(_Vector<4, double>, _Vector<4, long long int>, unsigned char, _Constant int)">;
+  def vcvttpd2uqqs256_mask : X86Builtin<"_Vector<4, long long int>(_Vector<4, double>, _Vector<4, long long int>, unsigned char)">;
 }
 
 let Features = "avx10.2-512", Attributes = [NoThrow, RequiredVectorWidth<512>] in {
@@ -4663,7 +4663,7 @@ let Features = "avx10.2-256", Attributes = [NoThrow, RequiredVectorWidth<128>] i
 }
 
 let Features = "avx10.2-256", Attributes = [NoThrow, RequiredVectorWidth<256>] in {
-  def vcvttps2dqs256_round_mask : X86Builtin<"_Vector<8, int>(_Vector<8, float>, _Vector<8, int>, unsigned char, _Constant int)">;
+  def vcvttps2dqs256_mask : X86Builtin<"_Vector<8, int>(_Vector<8, float>, _Vector<8, int>, unsigned char)">;
 }
 
 let Features = "avx10.2-512", Attributes = [NoThrow, RequiredVectorWidth<512>] in {
@@ -4675,7 +4675,7 @@ let Features = "avx10.2-256", Attributes = [NoThrow, RequiredVectorWidth<128>] i
 }
 
 let Features = "avx10.2-256", Attributes = [NoThrow, RequiredVectorWidth<256>] in {
-  def vcvttps2udqs256_round_mask : X86Builtin<"_Vector<8, int>(_Vector<8, float>, _Vector<8, int>, unsigned char, _Constant int)">;
+  def vcvttps2udqs256_mask : X86Builtin<"_Vector<8, int>(_Vector<8, float>, _Vector<8, int>, unsigned char)">;
 }
 
 let Features = "avx10.2-512", Attributes = [NoThrow, RequiredVectorWidth<512>] in {
@@ -4687,7 +4687,7 @@ let Features = "avx10.2-256", Attributes = [NoThrow, RequiredVectorWidth<128>] i
 }
 
 let Features = "avx10.2-256", Attributes = [NoThrow, RequiredVectorWidth<256>] in {
-  def vcvttps2qqs256_round_mask : X86Builtin<"_Vector<4, long long int>(_Vector<4, float>, _Vector<4, long long int>, unsigned char, _Constant int)">;
+  def vcvttps2qqs256_mask : X86Builtin<"_Vector<4, long long int>(_Vector<4, float>, _Vector<4, long long int>, unsigned char)">;
 }
 
 let Features = "avx10.2-512", Attributes = [NoThrow, RequiredVectorWidth<512>] in {
@@ -4699,7 +4699,7 @@ let Features = "avx10.2-256", Attributes = [NoThrow, RequiredVectorWidth<128>] i
 }
 
 let Features = "avx10.2-256", Attributes = [NoThrow, RequiredVectorWidth<256>] in {
-  def vcvttps2uqqs256_round_mask : X86Builtin<"_Vector<4, long long int>(_Vector<4, float>, _Vector<4, long long int>, unsigned char, _Constant int)">;
+  def vcvttps2uqqs256_mask : X86Builtin<"_Vector<4, long long int>(_Vector<4, float>, _Vector<4, long long int>, unsigned char)">;
 }
 
 let Features = "avx10.2-512", Attributes = [NoThrow, RequiredVectorWidth<512>] in {
diff --git a/clang/lib/Headers/avx10_2satcvtdsintrin.h b/clang/lib/Headers/avx10_2satcvtdsintrin.h
index 9dbfed42667ef..6509a4ebf9c77 100644
--- a/clang/lib/Headers/avx10_2satcvtdsintrin.h
+++ b/clang/lib/Headers/avx10_2satcvtdsintrin.h
@@ -92,37 +92,22 @@ _mm_maskz_cvtts_pd_epi32(__mmask16 __U, __m128d __A) {
 // 256 Bit : Double -> int
 static __inline__ __m128i __DEFAULT_FN_ATTRS256
 _mm256_cvtts_pd_epi32(__m256d __A) {
-  return ((__m128i)__builtin_ia32_vcvttpd2dqs256_round_mask(
-      (__v4df)__A, (__v4si)_mm_undefined_si128(), (__mmask8)-1,
-      _MM_FROUND_CUR_DIRECTION));
+  return ((__m128i)__builtin_ia32_vcvttpd2dqs256_mask(
+      (__v4df)__A, (__v4si)_mm_undefined_si128(), (__mmask8)-1));
 }
 
 static __inline__ __m128i __DEFAULT_FN_ATTRS256
 _mm256_mask_cvtts_pd_epi32(__m128i __W, __mmask8 __U, __m256d __A) {
-  return ((__m128i)__builtin_ia32_vcvttpd2dqs256_round_mask(
-      (__v4df)__A, (__v4si)__W, __U, _MM_FROUND_CUR_DIRECTION));
+  return ((__m128i)__builtin_ia32_vcvttpd2dqs256_mask(
+      (__v4df)__A, (__v4si)__W, __U));
 }
 
 static __inline__ __m128i __DEFAULT_FN_ATTRS256
 _mm256_maskz_cvtts_pd_epi32(__mmask8 __U, __m256d __A) {
-  return ((__m128i)__builtin_ia32_vcvttpd2dqs256_round_mask(
-      (__v4df)__A, (__v4si)_mm_setzero_si128(), __U, _MM_FROUND_CUR_DIRECTION));
+  return ((__m128i)__builtin_ia32_vcvttpd2dqs256_mask(
+      (__v4df)__A, (__v4si)_mm_setzero_si128(), __U));
 }
 
-#define _mm256_cvtts_roundpd_epi32(__A, __R)                                   \
-  ((__m128i)__builtin_ia32_vcvttpd2dqs256_round_mask(                          \
-      (__v4df)(__m256d)__A, (__v4si)(__m128i)_mm_undefined_si128(),            \
-      (__mmask8) - 1, (int)(__R)))
-
-#define _mm256_mask_cvtts_roundpd_epi32(__W, __U, __A, __R)                    \
-  ((__m128i)__builtin_ia32_vcvttpd2dqs256_round_mask(                          \
-      (__v4df)(__m256d)__A, (__v4si)(__m128i)__W, (__mmask8)__U, (int)(__R)))
-
-#define _mm256_maskz_cvtts_roundpd_epi32(__U, __A, __R)                        \
-  ((__m128i)__builtin_ia32_vcvttpd2dqs256_round_mask(                          \
-      (__v4df)(__m256d)__A, (__v4si)(__m128i)_mm_setzero_si128(),              \
-      (__mmask8)__U, (int)(__R)))
-
 // 128 Bit : Double -> uint
 static __inline__ __m128i __DEFAULT_FN_ATTRS128
 _mm_cvtts_pd_epu32(__m128d __A) {
@@ -145,37 +130,22 @@ _mm_maskz_cvtts_pd_epu32(__mmask8 __U, __m128d __A) {
 // 256 Bit : Double -> uint
 static __inline__ __m128i __DEFAULT_FN_ATTRS256
 _mm256_cvtts_pd_epu32(__m256d __A) {
-  return ((__m128i)__builtin_ia32_vcvttpd2udqs256_round_mask(
-      (__v4df)__A, (__v4si)_mm_undefined_si128(), (__mmask8)-1,
-      _MM_FROUND_CUR_DIRECTION));
+  return ((__m128i)__builtin_ia32_vcvttpd2udqs256_mask(
+      (__v4df)__A, (__v4si)_mm_undefined_si128(), (__mmask8)-1));
 }
 
 static __inline__ __m128i __DEFAULT_FN_ATTRS256
 _mm256_mask_cvtts_pd_epu32(__m128i __W, __mmask8 __U, __m256d __A) {
-  return ((__m128i)__builtin_ia32_vcvttpd2udqs256_round_mask(
-      (__v4df)__A, (__v4si)__W, __U, _MM_FROUND_CUR_DIRECTION));
+  return ((__m128i)__builtin_ia32_vcvttpd2udqs256_mask(
+      (__v4df)__A, (__v4si)__W, __U));
 }
 
 static __inline__ __m128i __DEFAULT_FN_ATTRS256
 _mm256_maskz_cvtts_pd_epu32(__mmask8 __U, __m256d __A) {
-  return ((__m128i)__builtin_ia32_vcvttpd2udqs256_round_mask(
-      (__v4df)__A, (__v4si)_mm_setzero_si128(), __U, _MM_FROUND_CUR_DIRECTION));
+  return ((__m128i)__builtin_ia32_vcvttpd2udqs256_mask(
+      (__v4df)__A, (__v4si)_mm_setzero_si128(), __U));
 }
 
-#define _mm256_cvtts_roundpd_epu32(__A, __R)                                   \
-  ((__m128i)__builtin_ia32_vcvttpd2udqs256_round_mask(                         \
-      (__v4df)(__m256d)__A, (__v4si)(__m128i)_mm_undefined_si128(),            \
-      (__mmask8) - 1, (int)(__R)))
-
-#define _mm256_mask_cvtts_roundpd_epu32(__W, __U, __A, __R)                    \
-  ((__m128i)__builtin_ia32_vcvttpd2udqs256_round_mask(                         \
-      (__v4df)(__m256d)__A, (__v4si)(__m128i)__W, (__mmask8)__U, (int)(__R)))
-
-#define _mm256_maskz_cvtts_roundpd_epu32(__U, __A, __R)                        \
-  ((__m128i)__builtin_ia32_vcvttpd2udqs256_round_mask(                         \
-      (__v4df)(__m256d)__A, (__v4si)(__m128i)_mm_setzero_si128(),              \
-      (__mmask8)__U, (int)(__R)))
-
 // 128 Bit : Double -> long
 static __inline__ __m128i __DEFAULT_FN_ATTRS128
 _mm_cvtts_pd_epi64(__m128d __A) {
@@ -198,37 +168,22 @@ _mm_maskz_cvtts_pd_epi64(__mmask8 __U, __m128d __A) {
 // 256 Bit : Double -> long
 static __inline__ __m256i __DEFAULT_FN_ATTRS256
 _mm256_cvtts_pd_epi64(__m256d __A) {
-  return ((__m256i)__builtin_ia32_vcvttpd2qqs256_round_mask(
-      (__v4df)__A, (__v4di)_mm256_undefined_si256(), (__mmask8)-1,
-      _MM_FROUND_CUR_DIRECTION));
+  return ((__m256i)__builtin_ia32_vcvttpd2qqs256_mask(
+      (__v4df)__A, (__v4di)_mm256_undefined_si256(), (__mmask8)-1));
 }
 
 static __inline__ __m256i __DEFAULT_FN_ATTRS256
 _mm256_mask_cvtts_pd_epi64(__m256i __W, __mmask8 __U, __m256d __A) {
-  return ((__m256i)__builtin_ia32_vcvttpd2qqs256_round_mask(
-      (__v4df)__A, (__v4di)__W, __U, _MM_FROUND_CUR_DIRECTION));
+  return ((__m256i)__builtin_ia32_vcvttpd2qqs256_mask(
+      (__v4df)__A, (__v4di)__W, __U));
 }
 
 static __inline__ __m256i __DEFAULT_FN_ATTRS256
 _mm256_maskz_cvtts_pd_epi64(__mmask8 __U, __m256d __A) {
-  return ((__m256i)__builtin_ia32_vcvttpd2qqs256_round_mask(
-      (__v4df)__A, (__v4di)_mm256_setzero_si256(), __U,
-      _MM_FROUND_CUR_DIRECTION));
+  return ((__m256i)__builtin_ia32_vcvttpd2qqs256_mask(
+      (__v4df)__A, (__v4di)_mm256_setzero_si256(), __U));
 }
 
-#define _mm256_cvtts_roundpd_epi64(__A, __R)                                   \
-  ((__m256i)__builtin_ia32_vcvttpd2qqs256_round_mask(                          \
-      (__v4df)__A, (__v4di)_mm256_undefined_si256(), (__mmask8) - 1,           \
-      (int)__R))
-
-#define _mm256_mask_cvtts_roundpd_epi64(__W, __U, __A, __R)                    \
-  ((__m256i)__builtin_ia32_vcvttpd2qqs256_round_mask((__v4df)__A, (__v4di)__W, \
-                                                     (__mmask8)__U, (int)__R))
-
-#define _mm256_maskz_cvtts_roundpd_epi64(__U, __A, __R)                        \
-  ((__m256i)__builtin_ia32_vcvttpd2qqs256_round_mask(                          \
-      (__v4df)__A, (__v4di)_mm256_setzero_si256(), (__mmask8)__U, (int)__R))
-
 // 128 Bit : Double -> ulong
 static __inline__ __m128i __DEFAULT_FN_ATTRS128
 _mm_cvtts_pd_epu64(__m128d __A) {
@@ -252,37 +207,22 @@ _mm_maskz_cvtts_pd_epu64(__mmask8 __U, __m128d __A) {
 
 static __inline__ __m256i __DEFAULT_FN_ATTRS256
 _mm256_cvtts_pd_epu64(__m256d __A) {
-  return ((__m256i)__builtin_ia32_vcvttpd2uqqs256_round_mask(
-      (__v4df)__A, (__v4di)_mm256_undefined_si256(), (__mmask8)-1,
-      _MM_FROUND_CUR_DIRECTION));
+  return ((__m256i)__builtin_ia32_vcvttpd2uqqs256_mask(
+      (__v4df)__A, (__v4di)_mm256_undefined_si256(), (__mmask8)-1));
 }
 
 static __inline__ __m256i __DEFAULT_FN_ATTRS256
 _mm256_mask_cvtts_pd_epu64(__m256i __W, __mmask8 __U, __m256d __A) {
-  return ((__m256i)__builtin_ia32_vcvttpd2uqqs256_round_mask(
-      (__v4df)__A, (__v4di)__W, __U, _MM_FROUND_CUR_DIRECTION));
+  return ((__m256i)__builtin_ia32_vcvttpd2uqqs256_mask(
+      (__v4df)__A, (__v4di)__W, __U));
 }
 
 static __inline__ __m256i __DEFAULT_FN_ATTRS256
 _mm256_maskz_cvtts_pd_epu64(__mmask8 __U, __m256d __A) {
-  return ((__m256i)__builtin_ia32_vcvttpd2uqqs256_round_mask(
-      (__v4df)__A, (__v4di)_mm256_setzero_si256(), __U,
-      _MM_FROUND_CUR_DIRECTION));
+  return ((__m256i)__builtin_ia32_vcvttpd2uqqs256_mask(
+      (__v4df)__A, (__v4di)_mm256_setzero_si256(), __U));
 }
 
-#define _mm256_cvtts_roundpd_epu64(__A, __R)                                   \
-  ((__m256i)__builtin_ia32_vcvttpd2uqqs256_round_mask(                         \
-      (__v4df)__A, (__v4di)_mm256_undefined_si256(), (__mmask8) - 1,           \
-      (int)__R))
-
-#define _mm256_mask_cvtts_roundpd_epu64(__W, __U, __A, __R)                    \
-  ((__m256i)__builtin_ia32_vcvttpd2uqqs256_round_mask(                         \
-      (__v4df)__A, (__v4di)__W, (__mmask8)__U, (int)__R))
-
-#define _mm256_maskz_cvtts_roundpd_epu64(__U, __A, __R)                        \
-  ((__m256i)__builtin_ia32_vcvttpd2uqqs256_round_mask(                         \
-      (__v4df)__A, (__v4di)_mm256_setzero_si256(), (__mmask8)__U, (int)__R))
-
 // 128 Bit : float -> int
 static __inline__ __m128i __DEFAULT_FN_ATTRS128 _mm_cvtts_ps_epi32(__m128 __A) {
   return ((__m128i)__builtin_ia32_vcvttps2dqs128_mask(
@@ -304,38 +244,22 @@ _mm_maskz_cvtts_ps_epi32(__mmask8 __U, __m128 __A) {
 // 256 Bit : float -> int
 static __inline__ __m256i __DEFAULT_FN_ATTRS256
 _mm256_cvtts_ps_epi32(__m256 __A) {
-  return ((__m256i)__builtin_ia32_vcvttps2dqs256_round_mask(
-      (__v8sf)__A, (__v8si)_mm256_undefined_si256(), (__mmask8)-1,
-      _MM_FROUND_CUR_DIRECTION));
+  return ((__m256i)__builtin_ia32_vcvttps2dqs256_mask(
+      (__v8sf)__A, (__v8si)_mm256_undefined_si256(), (__mmask8)-1));
 }
 
 static __inline__ __m256i __DEFAULT_FN_ATTRS256
 _mm256_mask_cvtts_ps_epi32(__m256i __W, __mmask8 __U, __m256 __A) {
-  return ((__m256i)__builtin_ia32_vcvttps2dqs256_round_mask(
-      (__v8sf)__A, (__v8si)__W, __U, _MM_FROUND_CUR_DIRECTION));
+  return ((__m256i)__builtin_ia32_vcvttps2dqs256_mask(
+      (__v8sf)__A, (__v8si)__W, __U));
 }
 
 static __inline__ __m256i __DEFAULT_FN_ATTRS256
 _mm256_maskz_cvtts_ps_epi32(__mmask8 __U, __m256 __A) {
-  return ((__m256i)__builtin_ia32_vcvttps2dqs256_round_mask(
-      (__v8sf)__A, (__v8si)_mm256_setzero_si256(), __U,
-      _MM_FROUND_CUR_DIRECTION));
+  return ((__m256i)__builtin_ia32_vcvttps2dqs256_mask(
+      (__v8sf)__A, (__v8si)_mm256_setzero_si256(), __U));
 }
 
-#define _mm256_cvtts_roundps_epi32(__A, __R)                                   \
-  ((__m256i)__builtin_ia32_vcvttps2dqs256_round_mask(                          \
-      (__v8sf)(__m256)__A, (__v8si)(__m256i)_mm256_undefined_si256(),          \
-      (__mmask8) - 1, (int)(__R)))
-
-#define _mm256_mask_cvtts_roundps_epi32(__W, __U, __A, __R)                    \
-  ((__m256i)__builtin_ia32_vcvttps2dqs256_round_mask(                          \
-      (__v8sf)(__m256)__A, (__v8si)(__m256i)__W, (__mmask8)__U, (int)(__R)))
-
-#define _mm256_maskz_cvtts_roundps_epi32(__U, __A, __R)                        \
-  ((__m256i)__builtin_ia32_vcvttps2dqs256_round_mask(                          \
-      (__v8sf)(__m256)__A, (__v8si)(__m256i)_mm256_setzero_si256(),            \
-      (__mmask8)__U, (int)(__R)))
-
 // 128 Bit : float -> uint
 static __inline__ __m128i __DEFAULT_FN_ATTRS128 _mm_cvtts_ps_epu32(__m128 __A) {
   return ((__m128i)__builtin_ia32_vcvttps2udqs128_mask(
@@ -358,38 +282,22 @@ _mm_maskz_cvtts_ps_epu32(__mmask8 __U, __m128 __A) {
 
 static __inline__ __m256i __DEFAULT_FN_ATTRS256
 _mm256_cvtts_ps_epu32(__m256 __A) {
-  return ((__m256i)__builtin_ia32_vcvttps2udqs256_round_mask(
-      (__v8sf)__A, (__v8si)_mm256_undefined_si256(), (__mmask8)-1,
-      _MM_FROUND_CUR_DIRECTION));
+  return ((__m256i)__builtin_ia32_vcvttps2udqs256_mask(
+      (__v8sf)__A, (__v8si)_mm256_undefined_si256(), (__mmask8)-1));
 }
 
 static __inline__ __m256i __DEFAULT_FN_ATTRS256
 _mm256_mask_cvtts_ps_epu32(__m256i __W, __mmask8 __U, __m256 __A) {
-  return ((__m256i)__builtin_ia32_vcvttps2udqs256_round_mask(
-      (__v8sf)__A, (__v8si)__W, __U, _MM_FROUND_CUR_DIRECTION));
+  return ((__m256i)__builtin_ia32_vcvttps2udqs256_mask(
+      (__v8sf)__A, (__v8si)__W, __U));
 }
 
 static __inline__ __m256i __DEFAULT_FN_ATTRS256
 _mm256_maskz_cvtts_ps_epu32(__mmask8 __U, __m256 __A) {
-  return ((__m256i)__builtin_ia32_vcvttps2udqs256_round_mask(
-      (__v8sf)__A, (__v8si)_mm256_setzero_si256(), __U,
-      _MM_FROUND_CUR_DIRECTION));
+  return ((__m256i)__builtin_ia32_vcvttps2udqs256_mask(
+      (__v8sf)__A, (__v8si)_mm256_setzero_si256(), __U));
 }
 
-#define _mm256_cvtts_roundps_epu32(__A, __R)                                   \
-  ((__m256i)__builtin_ia32_vcvttps2udqs256_round_mask(                         \
-      (__v8sf)(__m256)__A, (__v8si)(__m256i)_mm256_undefined_si256(),          \
-      (__mmask8) - 1, (int)(__R)))
-
-#define _mm256_mask_cvtts_roundps_epu32(__W, __U, __A, __R)                    \
-  ((__m256i)__builtin_ia32_vcvttps2udqs256_round_mask(                         \
-      (__v8sf)(__m256)__A, (__v8si)(__m256i)__W, (__mmask8)__U, (int)(__R)))
-
-#define _mm256_maskz_cvtts_roundps_epu32(__U, __A, __R)                        \
-  ((__m256i)__builtin_ia32_vcvttps2udqs256_round_mask(                         \
-      (__v8sf)(__m256)__A, (__v8si)(__m256i)_mm256_setzero_si256(),            \
-      (__mmask8)__U, (int)(__R)))
-
 // 128 bit : float -> long
 static __inline__ __m128i __DEFAULT_FN_ATTRS128 _mm_cvtts_ps_epi64(__m128 __A) {
   return ((__m128i)__builtin_ia32_vcvttps2qqs128_mask(
@@ -411,37 +319,21 @@ _mm_maskz_cvtts_ps_epi64(__mmask8 __U, __m128 __A) {
 
 static __inline__ __m256i __DEFAULT_FN_ATTRS256
 _mm256_cvtts_ps_epi64(__m128 __A) {
-  return ((__m256i)__builtin_ia32_vcvttps2qqs256_round_mask(
-      (__v4sf)__A, (__v4di)_mm256_undefined_si256(), (__mmask8)-1,
-      _MM_FROUND_CUR_DIRECTION));
+  return ((__m256i)__builtin_ia32_vcvttps2qqs256_mask(
+      (__v4sf)__A, (__v4di)_mm256_undefined_si256(), (__mmask8)-1));
 }
 static __inline__ __m256i __DEFAULT_FN_ATTRS256
 _mm256_mask_cvtts_ps_epi64(__m256i __W, __mmask8 __U, __m128 __A) {
-  return ((__m256i)__builtin_ia32_vcvttps2qqs256_round_mask(
-      (__v4sf)__A, (__v4di)__W, __U, _MM_FROUND_CUR_DIRECTION));
+  return ((__m256i)__builtin_ia32_vcvttps2qqs256_mask(
+      (__v4sf)__A, (__v4di)__W, __U));
 }
 
 static __inline__ __m256i __DEFAULT_FN_ATTRS256
 _mm256_maskz_cvtts_ps_epi64(__mmask8 __U, __m128 __A) {
-  return ((__m256i)__builtin_ia32_vcvttps2qqs256_round_mask(
-      (__v4sf)__A, (__v4di)_mm256_setzero_si256(), __U,
-      _MM_FROUND_CUR_DIRECTION));
+  return ((__m256i)__builtin_ia32_vcvttps2qqs256_mask(
+      (__v4sf)__A, (__v4di)_mm256_setzero_si256(), __U));
 }
 
-#define _mm256_cvtts_roundps_epi64(__A, __R)                                   \
-  ((__m256i)__builtin_ia32_vcvttps2qqs256_round_mask(                          \
-      (__v4sf)(__m128)__A, (__v4di)_mm256_undefined_si256(), (__mmask8) - 1,   \
-      (int)__R))
-
-#define _mm256_mask_cvtts_roundps_epi64(__W, __U, __A, __R)                    \
-  ((__m256i)__builtin_ia32_vcvttps2qqs256_round_mask(                          \
-      (__v4sf)(__m128)__A, (__v4di)__W, (__mmask8)__U, (int)__R))
-
-#define _mm256_maskz_cvtts_roundps_epi64(__U, __A, __R)                        \
-  ((__m256i)__builtin_ia32_vcvttps2qqs256_round_mask(                          \
-      (__v4sf)(__m128)__A, (__v4di)_mm256_setzero_si256(), (__mmask8)__U,      \
-      (int)__R))
-
 // 128 bit : float -> ulong
 static __inline__ __m128i __DEFAULT_FN_ATTRS128 _mm_cvtts_ps_epu64(__m128 __A) {
   return ((__m128i)__builtin_ia32_vcvttps2uqqs128_mask(
@@ -463,38 +355,22 @@ _mm_maskz_cvtts_ps_epu64(__mmask8 __U, __m128 __A) {
 
 static __inline__ __m256i __DEFAULT_FN_ATTRS256
 _mm256_cvtts_ps_epu64(__m128 __A) {
-  return ((__m256i)__builtin_ia32_vcvttps2uqqs256_round_mask(
-      (__v4sf)__A, (__v4di)_mm256_undefined_si256(), (__mmask8)-1,
-      _MM_FROUND_CUR_DIRECTION));
+  return ((__m256i)__builtin_ia32_vcvttps2uqqs256_mask(
+      (__v4sf)__A, (__v4di)_mm256_undefined_si256(), (__mmask8)-1));
 }
 
 static __inline__ __m256i __DEFAULT_FN_ATTRS256
 _mm256_mask_cvtts_ps_epu64(__m256i __W, __mmask8 __U, __m128 __A) {
-  return ((__m256i)__builtin_ia32_vcvttps2uqqs256_round_mask(
-      (__v4sf)__A, (__v4di)__W, __U, _MM_FROUND_CUR_DIRECTION));
+  return ...
[truncated]

llvmbot (Member) commented Mar 21, 2025

@llvm/pr-subscribers-mc



github-actions bot commented Mar 21, 2025

✅ With the latest revision this PR passed the C/C++ code formatter.

RKSimon (Collaborator) left a comment


LGTM with the clang-format warning fix on the header

phoebewang merged commit e1a1603 into llvm:main on Mar 21, 2025 (6 of 10 checks passed)
phoebewang deleted the AVX10-SATCVT branch on March 21, 2025 at 17:10